Tips for Working Alongside Generative AI Tools
The recent wave of generative AI products in their many forms (Large Language Models, Image Generators, etc.) has sparked a sense of wonder, excitement, and perhaps even fear in the hearts of many.
As a software developer, I've got a lot of existential questions right now. Will I be out of a job in a few years?
And once that happens, will the robot overlords still look kindly upon me?
Hopefully I can point to this article to prove that I made every best effort to educate and prepare the humans for their smooth arrival.
Speaking of which, I was recently tasked with using AI at work and I want to share my learnings. My employer (Asurion - great place to work 😁) recently decided to host an AI-themed developer hackathon that we are calling the "Asurion Code Jam".
The aim of this "jam" will be to provide dedicated work hours for us to try out writing software utilizing new AI tools. Participants will get to learn while building, and we are emphasizing that they don't have to concern themselves with building viable products if they would rather not.
A hackathon all about education and creativity? I'm in! This was a refreshing approach to me, so I decided to volunteer to help plan the thing.
Fast forward a few months and the code jam is starting to get real. Time to create the teams! In the spirit of the theme, I was tasked with finding a way to use AI to group our participants.
The following is an accounting of that journey - using ChatGPT Plus.
To Hallucinate, or Not to Hallucinate?
Thinking about how to approach this, I was pretty confident that giving ChatGPT (even using the GPT-4 model) over one hundred participant entries with multiple properties for each person and asking it to return them in groups would result in unreliable data.
There are many examples in the wilds of the internet that demonstrate erratic responses when large language models are fed too much input (nonsensical or otherwise), so I knew I had to find an option that would let me rule out hallucinations altogether.
We don't want to publish team rosters that include pretend people, right? That would be awkward.
Enter Code Interpreter
The new ChatGPT Plus code interpreter tool was the solution that made me realize that I could have my cake and eat it too. AI can do the heavy lifting of writing and testing the algorithm to use for team creation, and I gain the benefit of knowing exactly how the teams were produced. At the end of the day, ChatGPT just hands over some code that I can run myself.
I still trust imperative code more than I do magic chat bots (for now), so this felt like a great way to leverage these new tools without compromising data integrity.
My Process
The following was my process for getting teams created:
1. I downloaded the spreadsheet with all of the participants from the signup form we sent out to everyone, and resaved it as a .csv file.
2. I wrote a simple Node.js script to read the .csv file, clean it up a bit, and save it as a .json file.
3. Enter ChatGPT Code Interpreter. If you'd like to see our full conversation, check it out here. At every step of the way, I would take the code snippet of ChatGPT's latest attempt, copy it over into VS Code, run it locally, and assess the results of the outputted .csv file.
We finally got to a workable solution, but there was a lot of trial and error in getting to the point where the result was exactly what I needed.
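To give a flavor of what a workable solution in this space can look like, here is a sketch of one common balancing strategy: sort participants by an attribute, then deal them out round-robin so each team gets a similar mix. This is an illustration of the general technique, not the actual algorithm ChatGPT produced, and the "experience" attribute is an assumption.

```javascript
// Sketch of round-robin team balancing (not the real Code Jam algorithm).
function makeTeams(participants, teamCount) {
  // Sort by the balancing attribute so similar people are adjacent...
  const sorted = [...participants].sort((a, b) =>
    String(a.experience).localeCompare(String(b.experience))
  );
  // ...then deal them out one at a time so each team gets a spread.
  const teams = Array.from({ length: teamCount }, () => []);
  sorted.forEach((person, i) => teams[i % teamCount].push(person));
  return teams;
}

// Made-up participants for demonstration:
const roster = [
  { name: "Ada", experience: "senior" },
  { name: "Grace", experience: "junior" },
  { name: "Alan", experience: "mid" },
  { name: "Edsger", experience: "senior" },
  { name: "Barbara", experience: "junior" },
  { name: "Donald", experience: "mid" },
];
const teams = makeTeams(roster, 3);
```

The real script had to satisfy more constraints than this, which is where the trial and error came in.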
IMPORTANT: I did not give ChatGPT any real participant data. If you ever use public-facing generative AI sites for work purposes, you need to think critically about what you're providing it. Personally identifiable information (PII) or other sensitive or proprietary data should obviously be a no-no.
AI Wrangling Key Takeaways
Here is what I learned, which I pass on to you as key takeaways for wrangling your own AIs:
Be Crazy Specific
Before the thread with ChatGPT that I linked above, there were other much less successful chat threads where I didn't even get close to a workable solution.
The difference? Whenever I provided more details and context, I got closer. When I didn't provide that clarity, ChatGPT filled in the gaps by making whatever assumptions it wanted, without even running them by me! How rude.
One thing I finally tried really helped: adding a requirements list.
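My actual requirements list isn't reproduced here, but as an invented illustration, a requirements list for a task like this might look something like:

```
Requirements:
1. Read participants from participants.json.
2. Create teams of 4-5 people each.
3. Balance experience levels across teams as evenly as possible.
4. Output the final teams as a CSV file with columns: team, name, role.
```

Numbered, unambiguous, and testable: each line gives you something concrete to check the generated code against.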
So devs: if you are looking to use ChatGPT to help write the code, you need to put on your product owner hats and clearly articulate the desired outcomes (not saying I did that perfectly, but it helped nonetheless).
If you are having trouble clearly articulating your desired outcomes, figure out why. Is there some missing business context? Is there some ambiguity still floating around in the requirements? Is there some complexity somewhere that needs more unpacking?
When you start the process strong by providing an extremely clear initial prompt, your refinement process will shorten.
Speaking of the refinement process...
Switch to Feedback When You're Close
Depending on the complexity of the desired code, the ChatGPT code interpreter is probably not going to give you what you need on the first attempt. But if you gave it clear instructions (see my first tip), it might come close.
If it does in fact come close to a workable solution, it's time to start a feedback loop instead of starting over with a better requirements prompt. Propose a change, ChatGPT takes another crack at it, and you reassess and repeat as needed.
This stage of the process feels a bit like pair programming with a junior developer. They made some mistakes but they are super eager to try again and get it right.
I found through some early trial and error that the refinement process seems to go more smoothly if you propose one change or fix at a time and see how ChatGPT responds to it.
Don't Turn On Autopilot
My last tip when it comes to AI wrangling is this: don't turn your brain off.
Initially, because AI was writing the code, there were moments where I inadvertently stopped thinking critically about what it was giving me. It would give me a snippet, I would try running it, it wouldn't work, and I would just start over without incorporating any new direction or feedback that was meaningfully better.
A strange sense of laziness was taking over. AI can figure out what I need without me actively participating, right?
Wrong. Because I was not taking the time to comprehend the code and refine my own understanding, I was not giving valuable feedback and we were not getting anywhere. This lasted until the novelty of my new partnership had worn off, and then I realized I needed to start reading, debugging, and actively participating.
I think the larger lesson here is that the output of an AI system is only ever going to be as good as the input it receives. So if we as developers are not steering these tools well, we should not expect the tool to make up the difference.
That makes me a little more hopeful that I'll still have a job in the coming years. 😬
Parting Thoughts
As advertised, this is just a 101 course. If you want to get better at asking AI to do your work for you, you'll need to continue experimenting. The developers of tomorrow are going to have to be both wordsmiths and cyberpunks... not just the latter. Hopefully these insights will help as a starting point for the former.