
Md Abu Taher

Why you shouldn't rely on ChatGPT so much!

As a developer, I'm always on the lookout for new tools and techniques that can streamline my workflow and save me time. Recently, I decided to give ChatGPT a try for web automation.

My goal was to extract data from HackerNews using ChatGPT, so I fed it a series of prompts and hoped it would generate the appropriate regex for me.

Oh! I was so wrong!

Unfortunately, despite its impressive capabilities and hype, ChatGPT ultimately failed to deliver the goods.

While it did generate text that sounded like it might be on the right track, it didn't actually solve my problem. This was both good news and bad news.

On the one hand, it showed that ChatGPT could follow my prompts and adapt to what I was asking for. On the other hand, it never actually got the job done.


In the end, I decided to take matters into my own hands and started adjusting the generated regex myself. It took some trial and error, but eventually, I was able to get it working.

Steps I have taken

I told ChatGPT that it should become a regex generator and the output should be in JavaScript.


I simply copied and pasted the text from HackerNews.


I provided ChatGPT with the desired input and the output I expected from the regex.
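To make that step concrete, here is the kind of input/output pair I mean, written as JavaScript. The story, domain, and numbers are invented for illustration; the real pasted text came straight from the HackerNews front page.

// A hypothetical example of the input I pasted and the structured output I wanted.
// The values are made up; they only show the shape of the data.
const sampleInput =
  '1. Show HN: My side project (example.com) 123 points by someuser 3 hours ago | hide | 45 comments';

const desiredOutput = {
  rank: '1',
  title: 'Show HN: My side project',
  site: 'example.com',
  points: '123',
  author: 'someuser',
  time: '3 hours ago',
  comments: '45',
};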


I received a sample output from ChatGPT, and it sounded absolutely accurate.


To validate it, I took it to regex101, which would highlight everything for me.
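regex101 is the quickest way to eyeball this, but if you prefer to check from a script, a small Node.js sketch along these lines does the same pass/fail check. The pattern below is a simplified stand-in for the candidate regex, and the sample lines are invented.

// Hypothetical local check: run a candidate regex over pasted lines and flag misses.
// `candidate` is a simplified stand-in, not the exact pattern ChatGPT gave me.
const candidate = /^(?<rank>\d+)\.\s+(?<title>.+)\s+\((?<site>.+)\)/;

const lines = [
  '1. Show HN: My side project (example.com)',
  '2. A story with no site link', // the kind of edge case that slips through
];

for (const line of lines) {
  const match = line.match(candidate);
  console.log(match ? `matched: ${match.groups.title}` : `no match: ${line}`);
}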


It looked like it worked, except for one result. So I went back to ChatGPT and provided more information.


But instead of fixing it, ChatGPT just broke the whole regex. Everything, including the time field, was now broken.


I continued to iterate on this process, using ChatGPT’s output as a starting point and making manual adjustments until the desired regex was obtained.


By this point, the process had taken a lot of tries, which used up precious credits that I could have spent on other things.

¯\_(ツ)_/¯


So I decided to stop prompting and tweak the regex myself.


The final regex was this, which worked for most stories.



^(?<rank>\d+)\.\s+(?<title>.+)\s+\((?<site>.+)\)\s+(?<points>\d+)?\s+points?\s+by\s+(?<author>\S+)?\s+(?<time>.+?)\s+\|\s+hide(\s+\|\s+(?<comments>\d+)\s+comments)?

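For completeness, here is a minimal sketch of how this regex can be used from JavaScript with named capture groups. The two pasted stories are invented examples, not real HackerNews entries.

// Apply the final regex to a blob of pasted text and print the named groups.
// The sample stories are made up for illustration.
const pattern = /^(?<rank>\d+)\.\s+(?<title>.+)\s+\((?<site>.+)\)\s+(?<points>\d+)?\s+points?\s+by\s+(?<author>\S+)?\s+(?<time>.+?)\s+\|\s+hide(\s+\|\s+(?<comments>\d+)\s+comments)?/gm;

const pasted = [
  '1. Show HN: My side project (example.com) 123 points by someuser 3 hours ago | hide | 45 comments',
  '2. An interesting article (example.org) 56 points by otheruser 5 hours ago | hide | 7 comments',
].join('\n');

for (const match of pasted.matchAll(pattern)) {
  console.log(match.groups);
  // e.g. { rank: '1', title: 'Show HN: My side project', site: 'example.com',
  //        points: '123', author: 'someuser', time: '3 hours ago', comments: '45' }
}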




Conclusion of the experiment

Even though there are hundreds of other ways to grab the data from HackerNews, using AI to extract data from a blob of text was an interesting challenge to try with ChatGPT. But this experiment just told me one thing.

Do not blindly trust GPT and its variants. They can give you incorrect output with very high confidence, and your precious app will be broken in no time.

Okay, I could have given it better input, but keep in mind that I tried this process many times with different inputs. On top of that, it kept adding big blobs of explanatory text to its responses, even when I didn't need them and had asked it not to.

For example, one of my prompts specifically asked it to avoid providing an explanation, but who cares?

What does ChatGPT have to say about this?

I told ChatGPT about the above situation, and it sympathized with me.

I’m glad you found ChatGPT to be a helpful tool in your workflow, even if it didn’t quite solve the problem you were working on. It’s important to remember that large language models like ChatGPT, while extremely powerful and versatile, still have limitations when it comes to certain tasks.

In your case, using a large language model for web automation was a novel approach, and it’s not surprising that it didn’t work perfectly on the first try. Web automation can be a complex task that requires a deep understanding of web technologies, as well as specific domain knowledge about the sites and data you are trying to access. In these cases, it’s often best to use specialized tools or rely on your own expertise and understanding of the problem.

That being said, it’s always worth considering whether a large language model could be helpful in solving a specific problem. They can be a powerful tool in the right context, and with the right amount of training and guidance, they can learn and adapt to new tasks and environments. The key is to use them in a way that complements your own skills and knowledge, rather than relying on them as a replacement.

Conclusion

This experiment taught me about the limitations of ChatGPT and models like it. Even though they can write text that sounds human, handle a wide range of tasks, and speak many languages, they may not have the technical expertise to do a specific task as well as a specialized tool or a person with expertise in that area.

In the end, my own knowledge and skills solved the problem, not ChatGPT. That is exactly why language models still need human understanding and expertise behind them. They can be helpful, but they shouldn't be a replacement for it.

You should not depend too much on ChatGPT.

Top comments (7)

Meow

OK, it doesn't always return a completely relevant answer, but I think it's faster than searching the web and going through the many posts that are out there.

When you are working with something new, even something like the name of an internal function can speed up the work (I can find the rest in the documentation). I like GPT, but it's not really for mindless copying of source code.

Ny Andry • Edited

One thing we need to keep in mind is that it is a machine learning model designed to generate text based on patterns and structures it has learned from the data it was trained on; it does not have the ability to acquire new knowledge or understand the meaning of the text it generates the way a person would. That alone makes it unreliable. It's useful to some extent and truly revolutionary, but it's not good enough to replace anyone except, maybe, high-school essay writers.

lorenz1989

We have to take control of it rather than relying on it completely. ChatGPT currently cannot handle complex logic and reasoning questions. For example, if I ask it "if a rainbow has 7 colors, then how many colors do two rainbows have?", it will obviously do the math, 7+7 or 2*7, as it has been trained to do, and in this case its answer would certainly be wrong. However, there are many things that ChatGPT can do, which can help users save time and increase productivity.

This extension makes it available everywhere you browse.

Whether you are a content creator, developer, designer, student, or stay-at-home parent, the extension is for you.

It gives you surprisingly detailed solutions to your queries, from writing a prom proposal to fixing your code.

Install the extension and get specific answers on your subject!

https://chrome.google.com/webstore/detail/chatgpt-for-search-engine/feeonheemodpkdckaljcjogdncpiiban/related?hl=en-GB&authuser=0

Topf Kopf

That's exactly what I observed. I am working on algorithms and tried to generate some of mine with ChatGPT to see if it works or even comes up with a better solution. But what I get are almost naive and kind of silly approaches, with the same mistakes I didn't even make in the first place. So yes, sure, it can help you boost productivity in certain cases, but really, don't rely on it too much and always double-check!

José Pablo Ramírez Vargas

Isn't the title of this article ahead of its time by a decade or so? Because I see zero companies interested in firing developers in favor of ChatGPT as of now. 😄

Md Abu Taher

I have seen tons of beginners relying on ChatGPT already. Firing is not the only concern here.

Jon Randy 🎖️

It's glorified autocomplete. It doesn't know or understand anything.