AI augmentation of writing signals changes for education and the workplace

L to R: Laura Dumin, Jared Miller, and Alan Herzberger. (SAM ROYKA/THE VISTA)

Last week’s Fusion Conference for UCO media students sparked conversations about how employees might soon be expected to use artificial intelligence on the job. 

Laura Dumin, AI coordinator and English professor at UCO, spoke at a panel discussion titled “AI on the Job” and predicted a future in which “AI augmentation of your writing, not AI replacing your writing” is more likely.

“I’m sure that you all have professors on campus who are telling you, ‘You can’t use AI, don’t use it. It’s bad, it’s cheating.’ But the reality is that when you get out into the workforce, your boss is going to expect you to know how to use these tools, because they are tools. And they’re going to expect you to know how to use them responsibly and ethically, and transparently where it matters,” Dumin said. 


Exact ethical guidelines for the use of AI are still taking shape, but the White House has published a document titled “Blueprint for an AI Bill of Rights,” and three AI-related bills are currently proposed in Oklahoma.

“It was a lifelong goal to see journalism enter the digital age in a good way that helps the public and journalists, though I don’t have a lot of faith in the immediate future, so now it’s just before I die,” said panelist Alan Herzberger, who worked in journalism for two decades and now is a vice president at Koch Communications. The room erupted in laughter, and he added a more serious answer.

“The market has to demand change, then the industries will change,” Herzberger said.

Dumin said AI can be used like a personal assistant to help the user be more productive, rather than replace their original work.

“AI is kind of like a calculator for words, in some ways. And if you know, the foundation of writing, just like with math, if you know what a calculator is doing behind the scenes, it’s why you can’t use a calculator until you get to a certain age,” said panelist Jared Miller, vice president of growth marketing at Konnect Agency.

Writing is a skill you can lose if you don’t practice it, Miller said. 

Many of the AI features most useful to writers echo these skills, and they can be applied in a proactive and transparent manner.

“In education, here for you all, the things that probably matter right now are things like drafting, outlining, ideation, that sort of thing,” Dumin said. “AI chatbots and generation engines can do these things and more, but they also make mistakes. When an AI generates incorrect information, this is referred to as a ‘hallucination.’”

Among the multitude of AI tools now available, one in particular, Connected Papers, is built for use in academia.

Connected Papers creates a visual map of similar papers. One advantage is that it will not hallucinate: it uses keywords and other connectors to surface related work, all of it written by real humans.

It is an organizational tool that works much like a search engine algorithm optimizing results; anyone who has clicked a “sort by best match” option has used something similar.

As AI continues to evolve and integrate into daily life, Dumin believes a balance can be struck between the hesitation many in the industry feel and the usefulness of AI as a tool.

Within journalism, Dumin said, “We have seen a few missteps over the last year, places like the Washington Post and Sports Illustrated having AI write copy for them, there was not a human in the loop. And then we had problems.”

An AI-generated Sports Illustrated article about getting into ultimate frisbee informed readers that they would, indeed, first need a frisbee. The example illustrates another point: humans have a common-sense filter that AI does not.

“And so these things had to be retracted,” Dumin said. “And I think that’s one of the things that is kind of concerning. I have a cousin who is a journalist as well, and she’s sitting here going, ‘Am I still gonna have a job in a couple of years because of AI?’ But I think that’s also a space where we can have, again, humans in the loop, so AI can do some of the early drafting, and then we come back in and fix it, make sure that it’s real.”

Dumin said that people are often afraid of new technology. 

In his “Bibliotheca Universalis” (1545), Conrad Gesner reacted to the invention of the printing press with fears that the “confusing and harmful abundance of books” would cause an information overload.

Today, information consumption has sped up to scrolling through seven-second videos on TikTok, often for hours at a time. Psychologist Mihaly Csikszentmihalyi and later engineer Robert Lucky each estimated the human brain’s processing capacity at 120 bits per second. 

Since the human brain is not a computer but an organic, electrochemical system, the metaphor is not one-to-one. For the purposes of describing information processing speeds, however, it is useful, if imperfect.

Meanwhile, data scientist Pragya Agarwal estimates that the brain processes 11 million bits per second unconsciously, while the conscious mind handles closer to 50 bits per second.

Human behavior suggests the brain looks for ways to process faster, whether to save our ancestors from threats like a charging cheetah or to quickly tell safe berries from poisonous ones. This is evident in “shortcuts” like unconscious biases and the tendency to sort stimuli from our environment into quick-reference categories.

Nuance, for example, takes conscious effort and brainpower to understand because the brain cannot rely on a shortcut to process it.

Since technology now functions much faster than a human brain, humans retain less information and are instead remembering how to find it, according to a study called “Google Effects on Memory: Cognitive Consequences of Having Information at Our Fingertips.”
