Oscar
Why you should not be using AI as a writing assistant

TLDR: Who wants to read something written by a bot? People want to read what you wrote, not what a bot did. Also, these seemingly helpful AI tools tend to propagate homogeneity, bias, and academic dishonesty.

Also, a short addition to the title: Why you should not be using AI as a writing assistant -- specifically ChatGPT.

An introduction for those unfamiliar with AI 🏫

Artificial intelligence has become a major part of our lives as developers, thanks to its potential to roughly double our productivity. But the biggest, most popular, best-known, and certainly most lucrative use of artificial intelligence is something that, in 2024, everyone and their grandmother has heard of: ChatGPT. ChatGPT is a generative AI technology – meaning that it generates content based on the material it was trained on – and roughly a hundred million people use it each week (Brandl). Some use it to teach, some use it to study, and some use it for their work.
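To make "generative" concrete: a language model just continues text, token by token, based on patterns in its training data. Here is a minimal sketch using Hugging Face's transformers library and the small open-source GPT-2 model (a stand-in for ChatGPT's much larger model, which isn't publicly downloadable):

```python
# A minimal sketch of text generation with a small open model (GPT-2),
# standing in for ChatGPT's closed model.
# Requires: pip install transformers torch
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator("Artificial intelligence has become", max_new_tokens=20)
print(result[0]["generated_text"])  # the prompt plus a model-written continuation
```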

What you need to know ❗

While this tool and similar technologies can be useful in the previously mentioned areas of life, we must consider the numerous potential downsides of this technology, such as rampant homogeneity, biased results, and academic dishonesty. For those reasons, the use of generative AI technology in a professional or educational environment should be heavily frowned upon, if not entirely prohibited.

Homogeneity 🧬

The use of artificial intelligence in daily life promotes homogeneity, which can have a seriously negative impact on our culture and day-to-day life. For instance, one study tested the effects of ChatGPT on the creativity of college students, using fourteen different creative tasks to measure the participants' divergent and convergent thinking. For context, divergent thinking is a thought process in which an individual generates new ideas for a given problem, while convergent thinking is a thought process focused on narrowing in on a single solution to a problem.

The results showed that, while ChatGPT helped its users be more creative in the first few days of testing, after a week of tests the participants who had used the tool over the last five days displayed what the study called “a sharp decrease in both divergent thinking and convergent thinking” (Liu et al. 7). Moreover, the ChatGPT users never outperformed the control group in any measured area of creativity.

Clearly, ChatGPT limits our ability to imagine new and different ideas; in other words, it homogenizes us. This could very well lead to a consolidation of ideas at a social level. “Brainstorming sessions” and “breakout sessions” could become “ask ChatGPT sessions”, and unsurprisingly, those sessions would likely produce very similar ideas across groups, since ChatGPT gives near-identical answers to prompts that are even remotely alike.
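If you want to poke at that “near-identical answers” claim yourself, here is a minimal sketch using the OpenAI Python SDK. The model name and prompts are placeholders of my own choosing, and raw string overlap is only a crude proxy for how homogeneous the underlying ideas are:

```python
# Minimal sketch: ask two paraphrased prompts and measure how similar the
# answers are. Model name and prompts are illustrative assumptions.
# Requires: pip install openai, plus OPENAI_API_KEY in the environment.
from difflib import SequenceMatcher

from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    """Return the model's answer to a single prompt."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder; any chat model works
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

a = ask("Brainstorm five names for a reusable water bottle brand.")
b = ask("Suggest five brand names for a reusable water bottle.")

# Rough lexical similarity between the two answers, from 0.0 to 1.0.
print(SequenceMatcher(None, a, b).ratio())
```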

A little dopamine hit 🎯

Veering off topic for a moment: it is crucial to remember that AI chatbots provide an immediate result, and that immediacy is extremely appealing to humans because of the “hit” of dopamine we get from instant gratification. The previously mentioned study notes that ChatGPT can provide immediate solutions to nagging problems, such as coming up with a name for a new product, yet its “impact in nurturing long-term creative thinking skills seems limited” (Liu et al. 23). Much like the Facebook like button, which was designed to give the user “a little dopamine hit” (Stjernfelt and Lauritzen), an “immediate solution” that quickly dispatches the problem at hand can give the user that same feeling, even if it isn't actually helping them be more creative – and in the worst case, it can practically make the user addicted to that result.

Biases (Part 1) 👩‍⚖️

Getting back to the main topic at hand: ChatGPT claims to provide factual and unbiased information, yet it regularly shows a preference for politically left-leaning viewpoints (Rozado, Abstract). One study administered fifteen different political orientation tests to ChatGPT, with mostly consistent results: fourteen tests diagnosed it as left-leaning, and the remaining test – the Nolan Test – diagnosed it as politically centrist (Rozado 3).

This is certainly not the worst possible outcome. A little bit of bias toward one political view or the other is not the end of the world. However, it is very dangerous for a powerful piece of software like ChatGPT to claim it has absolutely no bias when it in fact favors a certain viewpoint. This bias is not only detrimental in professional and academic environments, but it can also lead to a homogenous culture; in other words, a culture where everyone thinks and behaves the same way.

Biases (Part 2) 👨‍⚖️

On top of being politically biased, ChatGPT can also be biased in terms of gender (Gross). When asked what typical boys' personality traits are, ChatGPT responded with traits such as “physical strength” and “assertiveness” (Gross). When asked what typical girls' personality traits are, however, it primarily gave stereotypically feminine traits such as “empathy” and “nurturing” (Gross). (By the way, you can try this yourself – it still works today.)
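You can reproduce that probe with the `ask` helper from the sketch in the homogeneity section; the exact prompt wording below is my own guess, not necessarily the phrasing Gross used:

```python
# Reuses the ask() helper defined in the earlier sketch. The prompt
# wording is an assumption, not Gross's exact phrasing.
for group in ("boys", "girls"):
    print(group, "->", ask(f"What are typical {group}' personality traits?"))
```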

This may not seem like a major issue at first glance, but consider the long-term implications. Imagine a world where researchers used ChatGPT every single day. No matter how much revising took place, and no matter how many hundreds of revisions were made, a little of ChatGPT's original bias would be guaranteed to slip through.

Academic Dishonesty (AKA glorified cheating) 🛑

Yet another unfavorable effect of this technology is the practice of using artificial intelligence to complete assignments, and to “cheat” – in other words, to be academically dishonest – in many other education-related activities. It is well known among high school students that a large portion of submitted essays and free-response questions are partially, if not completely, written by chatbots such as ChatGPT.
However, that evidence is anecdotal, so here's a better example: a blog called “AI Club” proudly boasts about using AI to win middle and high school science fair projects. As the blog states, “30% of winning projects [for the Synopsys Science Fair in the California Bay Area] in grades 6 to 7 [were] already using AI” (AIClub!). That was in 2021 – before ChatGPT (built on GPT-3.5) was even released in late 2022. With that in mind, I find it important to note that the blog posted an update on April 18th, 2024, declaring that 70% of winning projects at the same science fair now use some form of AI (AIClub! (2)).

Perhaps this is a misinterpretation of the article, and these middle school and high school students have been able to access software that allows them to use revolutionary AI tools to model diseases and test new inventions. However, the other possibility – and the much more likely one – is that all of these “AI assisted” projects were partially, if not completely, created by ChatGPT.

Use of AI in a (semi)-professional environment 👨‍🏫

And these trends extend to the workplace. According to the 2023 JetBrains Django Developer Survey, 26% of developers are “already using newly emerged AI tools to learn Django” (for context, JetBrains is a reputable software company, and Django is a widely used Python-based web framework). We have already seen that ChatGPT produces homogenized and biased answers, which raises two questions: should people who were taught by something known to propagate homogeneity and bias be welcomed into our workforce, and more importantly, are these “AI-educated folk” working and learning at the same level as those educated by a human or a textbook? While I would need to find a way to test this before drawing any firm conclusions, I think it's safe to say that they likely are not.

A solid counter-argument 🔄

It is worth noting that ChatGPT could potentially help its users understand the chosen material “faster and easier” (Firaina and Sulisworo 5). However, ChatGPT promotes something called passive learning, which can be roughly defined as learning from a lecture: passively absorbing information rather than actively engaging with a class. ChatGPT promotes this type of learning because it encourages its users to read what it has to say rather than wrestle with the material themselves. It may be able to answer questions, but that does not necessarily mean it encourages active learning.

That being said, we should consider how poorly passive learning performs compared to active learning (its inverse). A 2009 review found that all eight of the qualitative papers it examined judged “active learning to be ‘better’ than passive learning, regardless of the variables used in the study” (Michel et al.). While this study may be a few years old, its findings are still quite relevant in the modern classroom.

To expand on this topic, let me draw a comparison between learning and training a neural network. Information cannot simply be put into your brain (and, in this comparison, it cannot simply be “loaded into the network”). You have to focus on the material, explore it, and wrestle with it, just like a neural network does with new data. ChatGPT does not push its users to do this, or anything like it. Then again, some tools, such as Khan Academy's “Khanmigo” (an AI tutor very likely built on ChatGPT's API or a similar technology such as Google Bard or Claude), do let users actively engage with the material by prompting them with questions throughout a problem set.
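To make the analogy concrete, here is a toy sketch (plain NumPy, synthetic data of my own invention) of what that “wrestling” looks like for a model: the weights only approach the right answer through repeated passes of predicting, erring, and correcting – nothing is loaded in all at once.

```python
# Toy illustration of learning-as-training: a model starts out knowing
# nothing and only converges on y = 3x + 1 through many passes over the data.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100)
y = 3.0 * x + 1.0 + rng.normal(scale=0.1, size=100)  # noisy "course material"

w, b = 0.0, 0.0   # the network's starting knowledge: none
lr = 0.1          # learning rate

for epoch in range(200):              # repeated exposure, not a one-shot upload
    error = (w * x + b) - y           # predict, then see how wrong we were
    w -= lr * 2 * np.mean(error * x)  # gradient descent on mean squared error:
    b -= lr * 2 * np.mean(error)      # nudge the weights toward less error

print(f"learned w={w:.2f}, b={b:.2f}")  # only now is it close to w=3, b=1
```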

What should we do? 🤔

The use of artificial intelligence, specifically generative AI, promotes homogeneity, bias, and even academic dishonesty, and thus it should not be used in an educational or professional environment. But for better or for worse, these technologies aren't going away any time soon. So what can you do? One possible solution is to educate people on the numerous downsides of AI usage. Some will still use it, knowing full well the problems that ChatGPT and similar technologies cause. But at least a few will understand the downsides and stop using AI (at least in their daily lives).

Another solution is simply to implement rules and policies that disallow the use of AI in professional and educational environments. If you feel as strongly about this topic as I do, you and any organization you are affiliated with should seriously consider disallowing – or at the very least shunning – the use of AI in research papers, essays, company blog posts, emails, and more. If detecting the use of AI is an issue, use AI detectors such as GPTZero or Quillbot. If those results are inconclusive and you still suspect the use of AI (or you aren't convinced these detectors work most of the time), then reverse engineer it: prompt ChatGPT with the title of the work, and it may very well spit out a near mirror image of the writing you initially suspected. Even if that fails, hope is not lost. Consider requiring proof of work, such as an outline, notes, or even search history.
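For the technically inclined, one crude do-it-yourself signal is perplexity under a small open language model: machine-generated text tends to be more statistically predictable (lower perplexity) than human writing. Below is a minimal sketch with Hugging Face's transformers; to be clear, this is a rough heuristic of my own choosing, not GPTZero's actual method, and it will produce both false positives and false negatives.

```python
# Rough heuristic sketch: score text by its perplexity under GPT-2.
# Lower scores mean "more predictable", which *can* hint at AI generation.
# Requires: pip install transformers torch
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Perplexity of `text` under GPT-2 (lower = more predictable)."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean cross-entropy per token
    return float(torch.exp(loss))

print(perplexity("Paste the suspect essay here."))
```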

And please, if even one phrase of this post resonated with you, consider not using AI in your own writing. I hope that this post came across to you as alarming, but not alarmist. My goal is not to shame or look down upon anyone, but to give a bit of insight into why I feel the way I do about AI (specifically ChatGPT and similar technologies).

Above all, understand that ChatGPT and other AI technologies have severely unfavorable effects on our culture, and do your very best to negate them.

Disclaimer: This article was originally written as a research paper for my AP English final, but it has been adapted to fit a blog post format. That shouldn't make my writing any less credible (if anything, it should make it more credible), but I felt I should mention it. So, here are some _very fun citations!_

Works Cited
“Django Developers Survey 2023 Results.” JetBrains: Developer Tools for Professionals and Teams, lp.jetbrains.com/django-developer-survey-2023/#resource. Accessed 30 Apr. 2024.

AIClub! “Using AI to Win Middle/High School Science Fair Projects!” AIClub, 21 June 2021, corp.aiclub.world/post/win-middle-high-school-science-fair. Accessed 25 Mar. 2024.

AIClub! (2) “AI in Science Fairs - an Update.” AIClub, 18 Apr. 2024, corp.aiclub.world/post/ai-in-science-fairs-an-update. Accessed 1 May 2024.

Liu, Qinghan, et al. “When ChatGPT Is Gone: Creativity Reverts and Homogeneity Persists.” ArXiv (Cornell University), 11 Jan. 2024, https://doi.org/10.48550/arxiv.2401.06816. Accessed 10 Feb. 2024.

Rozado, David. “The Political Biases of ChatGPT.” Social Sciences, vol. 12, no. 3, 2 Mar. 2023, p. 148, https://doi.org/10.3390/socsci12030148.

Gross, Nicole. “What ChatGPT Tells Us about Gender: A Cautionary Tale about Performativity and Gender Biases in AI.” Social Sciences, vol. 12, no. 8, 1 Aug. 2023, p. 435, www.mdpi.com/2076-0760/12/8/435, https://doi.org/10.3390/socsci12080435.

Firaina, Radha, and Dwi Sulisworo. “Exploring the Usage of ChatGPT in Higher Education: Frequency and Impact on Productivity.” Buletin Edukasi Indonesia, vol. 2, no. 01, 11 Mar. 2023, pp. 39–46, https://doi.org/10.56741/bei.v2i01.310.

Brandl, Robert. “ChatGPT Statistics and User Numbers 2023 - OpenAI Chatbot.” Tooltester, 15 Feb. 2023, www.tooltester.com/en/blog/chatgpt-statistics/.

Stjernfelt, Frederik, and Anne Mette Lauritzen. Your Post Has Been Removed. Cham, Springer International Publishing, 2020, vbn.aau.dk/ws/portalfiles/portal/315874763/2020_Book_.pdf.

Michel, Norbert, et al. Active versus Passive Teaching Styles: An Empirical Study of Student Learning Outcomes. Wiley InterScience, web.archive.org/web/20150701204253/www.units.miamioh.edu/celt/events/docs/CFLING/active%20vs%20passive.pdf. Accessed 1 May 2024.

Top comments (15)

Martin Muller 🇩🇪🇧🇷🇵🇹

Nice article, but I think you are not arguing enough from the middle ground, which is necessary for a good discussion. By missing that, you reduce the quality of the article. Put differently, I think you bring too much of your own bias into this article.

For example, you give too little weight to the benefits AI can bring to education, such as acting as a tireless and patient teacher. This is just one of many points I would criticize here.

Oscar

I'm curious what benefits you think AI could bring to teachers; could you expand on this?

Pritha Kawli

For example, when a professor is teaching 100 students, he or she doesn't have the time to go to each student individually and understand and answer the more complicated doubts and edge cases they can think of. An LLM like ChatGPT does. Also, sometimes standardisation is necessary, like an entire country knowing the same language in order to communicate. Creativity has its place, but so does homogenisation. You can ask ChatGPT for the best practices in a particular field, or which learning resources are popular and what their pros and cons are. ChatGPT will only homogenise you if you let it homogenise you. There are quite a lot of creative ways in which you can turn this issue around.
This does not mean that I am criticising you; articles like this are needed. They make you a bit more self-aware, I think. But as Martin Muller said, I just feel that you are leaning a bit towards a negative outlook and not counteracting it with creative solutions.

Oscar

Firstly, I'd like to reiterate that ChatGPT (and likely other LLMs) can contain a considerable degree of bias. Is it really a wise idea to educate the masses with a biased source? Secondly, something I didn't mention in my article is the issue of hallucinations. If ChatGPT decides to dream up an answer that isn't factual, how is the student supposed to know that the answer was incorrect? If the solution to that problem is to check a textbook, then the student doesn't need an LLM to assist them, and if the solution is to ask their professor, then... we're right back where we started.

Also, while this is a little bit off topic, and I'd rather not get into a debate about the issues we have in our education system -- as issues with educational systems differ vastly across regions -- I think it's worth mentioning that a 100 person class is a problem in itself. But again, that's off topic, and just something that I wanted to mention.

I understand your point about homogeneity, and I agree that homogenization (perhaps more formally known as standardization) does have its place in our society. However, I intended to at least partially refer to homogenization on a smaller level -- the homogenization of a community, to be specific. Also, I'd be interested to hear your ideas about how one could simply not let themselves be homogenized, as in my eyes, there's not any way to use the results of a ChatGPT prompt without it being "tainted".

Finally, I do not take any offense to any criticism. It is a natural part of the writing process, and it is how we learn and grow as people. And that being said, I would like to clarify that yes, I am leaning towards a negative outlook. That is the point of an argumentative piece of writing. If I did not receive criticism, if I did not challenge someone's opinions or thoughts, if I did not concede to another viewpoint, then I would not have accomplished my goal in my writing. I am happy that not everyone agrees with me.

Anthony J. Borla

Solid, considered, well-written article.

It is quite refreshing to read an article by a young person who hasn't drunk the generative AI Kool Aid.

I look forward to reading your future work.

Jon Randy 🎖️

If I could like this 10x I would! Totally agree 👍

Best Codes

> Who wants to read something written by a bot? People want to read what you wrote, not what a bot did.

Actually, people usually want to read good content. Writing should be enjoyed for its quality, not for the attributes of its authors.


> Also, these seemingly helpful AI tools tend to propagate homogeneity, bias, and academic dishonesty.

Humans are just as capable of doing the same (in fact, more capable). AI can even be used to combat this.

The AI tools aren't "seemingly" helpful. They are helpful, but like some other helpful things, there are ethical considerations.


You can view AI like a tool - for instance, a hammer. A hammer has great potential for good; we can use it to make building projects much easier. A hammer also has great potential for bad. We could use the hammer as a harmful weapon, hurting people or destroying property with it.

We shouldn't regulate the tool, but we should regulate the use of the tool. We have laws against violence but no laws against hammers.

In the case of AI, I think it is good to respect people's preferences about what they choose to read - it would be wise to tag AI-written content as AI-written and, if possible, disclose the AI model that wrote it as well.

Side note: There are actually more reasons than people's preferences to tag AI content as AI generated. The tag is also very useful for companies training new AI models. Training AI models on their own output is very unproductive. So when it is easy to avoid AI content in training data, it is easier to train new AI models as well.


> The use of artificial intelligence, specifically generative AI, promotes homogeneity, bias, and even academic dishonesty, and thus it should not be used in an educational or professional environment.

That statement is a form of hasty generalization. It makes a broad claim that generative AI will produce negative outcomes in educational or professional environments. This generalization is formed without consideration for the various contexts in which AI could be used positively or the diversity of AI applications.


Well, I could go on. 🤪 But I'm done for now...

P.S. Your writing style is great. I love a lot of your articles, so I followed you. 🙂

Oscar

Thank you for the compliment and the follow!

Perhaps I am different from the vast majority of people, but if I want to read about someone's project, experience, or just their advice, I don't want it to be AI generated. If I want to read "good content" (which from context I'll assume means informative and well-written content), I'll go to Wikipedia, get a book, or hey, there's always documentation!

> Humans are just as capable of doing the same (in fact, more capable).

Absolutely. But at least I'll know that it's a human doing that, and not a program regurgitating what it's seen on the internet. On that note, AI can be wildly inaccurate, on top of all the other things I mentioned in my original post. I would personally rather research and curate my own writing for errors and whatnot than research and curate something that a bot wrote.

As for your analogy with a hammer, I understand where you're coming from, but I don't think it directly applies to this situation. It's much easier to moderate a hammer; you either are breaking something, or you aren't. It's a much grayer line when it comes to AI stuff.

I like your idea about respecting people's preferences. If Dev.to does go the route of not completely banning AI-generated content, I would certainly like to see something like this implemented.

> That statement is a form of hasty generalization.

That's what a thesis statement is. You make a general claim on a topic, then proceed to defend and argue your position throughout your essay.

Anyways, thank you for commenting. I think you and I have very different opinions on how best to handle AI generated content -- which is fine, frankly. We wouldn't get anywhere if we all agreed with each other. Nevertheless, I appreciate the time that you took to write this, and good night!

Best Codes

Again, there's a lot of “I would personally” in this. DEV is a community; everyone has their own ways of writing and creating good content. I respect your preferences and can very much understand if you don't like to read AI content or write with AI — but that does not mean that we should enforce our preferences on others.

Most analogies have flaws. AI and writing are certainly more complex than a hammer. But just because it's harder to moderate bad content than bad actions doesn't mean we shouldn't try, or that it is not the best route.

A thesis statement is not a hasty generalization. A hasty generalization is a fallacy where the conclusion is based on insufficient or non-representative evidence. A thesis statement is a sentence or two that clearly expresses the main point or argument of a piece of writing.

I appreciate your opinion and thorough response as well! Thanks for responding.

Oscar

DEV is a community, and more specifically, I would consider it a blogging community. As such, I don't think AI-generated content belongs here, regardless of preference. I understand where it can come in handy -- a specific example that comes to mind is breaking language barriers. When I originally wrote this, I was not aware of some of the benefits, and now that I am, I'm willing to be a bit more relaxed about what I advocate for when it comes to AI guidelines. But nevertheless, I don't think most of DEV wants to read what a bot wrote. You're welcome to put a poll out, though (actually, if you want to, I would totally help you promote it -- I'm curious as well)!

As for the thesis statement, I still stand by my original claim that it is not a hasty generalization (according to Purdue). I backed up all of the claims made in the statement with what I believe to be sufficient evidence. Additionally, this paper was peer reviewed (when it was in its original "paper" form, that is -- nothing big changed when I posted it here, just the structure), and I'm fairly confident that a fallacy like this would have been caught. Nevertheless, I'd like to move on; we both have better things to do than worry about whether a thesis statement is a fallacy or not :).

Thank you for responding, and I also appreciate your opinion and thorough response!

Best Codes

Ah, but just because a majority doesn't like something doesn't mean we should squash them. :)

Of course, DEV is a blogging community. AI can be used to blog.

Anyway, I'm trying really hard not to address other things you said. 🤪 🤣

I agree. We can move on, for sure! Look forward to seeing your future articles. :D

Jan Peterka

Really nicely written!

Oscar

Thank you!

Jaswant Vaddi

I am already in this abyss 😂; I am trying my best to come out of it.

val von vorn

AI does have its benefits for translation and grammar correction, but AI is dangerous for content generation!