TLDR: Who wants to read something written by a bot? People want to read what you wrote, not what a bot did. Also, these seemingly helpful AI tools ...
Nice article, but I think you aren't arguing enough from a middle position, which is necessary for a good discussion. Missing that reduces the quality of this article. Or, phrased differently, I think you are bringing too much of your own bias into this article.
For instance, you highlight too little of the benefits AI can bring to education, like a tireless and patient teacher. This is just one of many points I have to criticize here.
I'm curious about what benefits you think AI could bring teachers, could you expand on this?
For example, when a professor is teaching 100 students, he or she doesn't have the time to go to each student individually to understand and answer the more complicated doubts and edge cases they can think of. An LLM like ChatGPT does. Also, sometimes standardisation is necessary, like an entire country knowing the same language to communicate. Creativity has its place, but so does homogenisation. You can ask ChatGPT for the best practices in a particular field, or which learning resources are popular and what their pros and cons are. ChatGPT will only homogenise you if you let it homogenise you. There are quite a lot of creative ways in which you can turn this issue around.
This does not mean that I am criticising you; articles like this are needed. They make you a bit more self-aware, I think. But as Martin Muller said, I just feel that you are leaning a bit towards a negative outlook, and not counteracting it with creative solutions.
Firstly, I'd like to reiterate the fact that ChatGPT (and likely other LLMs) can contain reasonable degrees of bias. Is it really a wise idea to educate the masses with a biased source? Secondly, something that I didn't mention in my article is the issue of hallucinations. If ChatGPT decides to dream up a different answer that isn't factual, how should the student know that its answer was incorrect? If the solution to that problem is to check a textbook, then the student doesn't need an LLM to assist them, and if the solution is to ask their professor, then... we're right back where we started.
Also, while this is a little bit off topic, and I'd rather not get into a debate about the issues we have in our education system -- as issues with educational systems differ vastly across regions -- I think it's worth mentioning that a 100 person class is a problem in itself. But again, that's off topic, and just something that I wanted to mention.
I understand your point about homogeneity, and I agree that homogenization (perhaps more formally known as standardization) does have its place in our society. However, I intended to at least partially refer to homogenization on a smaller level -- the homogenization of a community, to be specific. Also, I'd be interested to hear your ideas about how one could simply not let themselves be homogenized, as in my eyes, there's not any way to use the results of a ChatGPT prompt without it being "tainted".
Finally, I do not take any offense to any criticism. It is a natural part of the writing process, and it is how we learn and grow as people. And that being said, I would like to clarify that yes, I am leaning towards a negative outlook. That is the point of an argumentative piece of writing. If I did not receive criticism, if I did not challenge someone's opinions or thoughts, if I did not concede to another viewpoint, then I would not have accomplished my goal in my writing. I am happy that not everyone agrees with me.
Solid, considered, well-written article.
It is quite refreshing to read an article by a young person who hasn't drunk the generative AI Kool Aid.
I look forward to reading your future work.
If I could like this 10x I would! Totally agree!
Actually, people usually want to read good content. Writing should be enjoyed for its quality, not for the attributes of its authors.
Humans are just as capable of doing the same (in fact, more capable). AI can even be used to combat this.
The AI tools aren't "seemingly" helpful. They are helpful, but like some other helpful things, there are ethical considerations.
You can view AI like a tool - for instance, a hammer. A hammer has great potential for good; we can use it to make building projects much easier. A hammer also has great potential for bad. We could use the hammer as a harmful weapon, hurting people or destroying property with it.
We shouldn't regulate the tool, but we should regulate the use of the tool. We have laws against violence but no laws against hammers.
In the case of AI, I think that it is good to respect people's preferences about what they choose to read -- it would be wise to tag AI-written content as AI-written, and, if possible, disclose the AI model that wrote it as well.
Side note: There are actually more reasons than people's preferences to tag AI content as AI generated. The tag is also very useful for companies training new AI models. Training AI models on their own output is very unproductive. So when it is easy to avoid AI content in training data, it is easier to train new AI models as well.
That statement is a form of hasty generalization. It makes a broad claim that generative AI will produce negative outcomes in educational or professional environments. This generalization is formed without consideration for the various contexts in which AI could be used positively or the diversity of AI applications.
Well, I could go on. But I'm done for now...
P.S. Your writing style is great. I love a lot of your articles, so I followed you.
Thank you for the compliment and the follow!
Perhaps I am different from the vast majority of people, but if I want to read about someone's project, experience, or just their advice, I don't want it to be AI generated. If I want to read "good content" (which from context I'll assume that that means informative and well written content), I'll go to Wikipedia, get a book, or hey, there's always documentation!
As for your analogy with a hammer, I understand where you're coming from, but I don't think it directly applies to this situation. It's much easier to moderate a hammer; you either are breaking something, or you aren't. It's a much grayer line when it comes to AI stuff.
I like your idea about respecting people's preferences. If Dev.to does go the route of not completely banning AI-generated content, I would certainly like to see something like this be implemented.
Anyways, thank you for commenting. I think you and I have very different opinions on how best to handle AI generated content -- which is fine, frankly. We wouldn't get anywhere if we all agreed with each other. Nevertheless, I appreciate the time that you took to write this, and good night!
Again, there's a lot of "I would personally" in this. DEV is a community; everyone has their own way of writing and creating good content. I respect your preferences and can very much understand if you don't like to read AI content or write with AI -- but that does not mean that we should enforce our preferences on others.
Most analogies have flaws. AI and writing are certainly more complex than a hammer. But just because it's harder to moderate bad content than bad actions doesn't mean we shouldn't try, or that it's not the best route.
A thesis statement is not a hasty generalization. A hasty generalization is a fallacy where the conclusion is based on insufficient or non-representative evidence. A thesis statement is a sentence or two that clearly expresses the main point or argument of a piece of writing.
I appreciate your opinion and thorough response as well! Thanks for responding.
DEV is a community, and more so, I would generally consider it to be a blogging community. In that, I don't think AI generated content belongs here, regardless of preference. I understand where it can come in handy -- a specific example that comes to mind is breaking language barriers. When I originally wrote this, I was not aware of some of the benefits, and now that I am, I'm willing to be a bit more relaxed with what I advocate for when it comes to AI guidelines. But nevertheless, I don't think most of DEV wants to read what a bot wrote. You're welcome to put a poll out though (actually if you want to, I would totally help you promote it. I'm curious as well)!
As for the thesis statement, I still stand by my original claim that it is not a hasty generalization (according to Purdue). I backed up all of the claims made in the statement with what I believe to be plenty sufficient evidence. Additionally, this paper was peer reviewed (when it was in the original "paper" form, that is. Nothing big changed when I posted it here, just the structure), and I'm fairly confident that a fallacy like this would've been caught. Nevertheless, I'd like to move on from this. We both have better things to do than worry about whether a thesis statement is a fallacy or not :).
Thank you for responding, and I also appreciate your opinion and thorough response!
Ah, but just because a majority doesn't like something doesn't mean we should squash them. :)
Of course, DEV is a blogging community. AI can be used to blog.
Anyway, I'm trying really hard not to address other things you said.
I agree. We can move on, for sure! Look forward to seeing your future articles. :D
Really nicely written!
Thank you!
I am already in this abyss; I am trying my best to come out of it.
AI does have its benefits for translation and grammar correction, but AI is dangerous for content generation!