The siren song of ChatGPT and the junior devs who listen to it

When ChatGPT launched, many fresh developers used it to finish assignments faster or even to tackle more serious tasks at their jobs – some more than others. Those developers quickly found themselves using GPT for most of their work. Let me tell you how I almost fell into the same trap.

Bruno Licht

Bruno is a software engineering intern at Infobip.

The honeymoon phase

The end of spring was drawing near, and with it came the deadlines for multiple projects I needed to finish to pass another year at college. I was building a Spring Boot application with a React frontend. Having always hated doing front-end work, I slowly chipped away at building the website UI for the app.

I looked through a couple of tutorials to find a solution to my problem. Still, not even the famous Stack Overflow contained the knowledge or the hints needed to push me in the right direction. I had heard of ChatGPT many months earlier, along with plenty of stories about the good solutions it provided.

As a skeptic at the time, I had condemned others for using it as an easy way out of their problems. As I closed yet another tutorial, I glanced at the side of the page, and there it was: an ad for ChatGPT. I thought to myself, to hell with it. If this thing doesn't help me, I'm choosing another topic for my project.

I launched the website for the first time, signed in with my Google account, clicked through all the consent options without even looking (which almost came at a price later) and typed in my requirements for the problem. And there it was, typing out the solution before my eyes. I copied it, shoved it into the proper place in my code, and fired it up. My first reaction was, “How the hell does this work?!”. I was amazed.

Another problem, another GPT solution. In the moment, it felt like magic. I had only seen such things in movies, and yet there it was, a reality.

Diminishing returns

A week had passed, and I had made great progress on my project. With most of the work done, I felt I had time to deal with some of the other assignments waiting for me. I started typing the base code needed to support the solution and realized what a hassle it would be to write all of it by hand.

I switched over to my now-favorite website and asked it to give me a generic solution. But this time, it didn't give me a ready-made solution. The output was just a set of textual instructions on how the solution should be written. A bit frustrated at first, I thought, alright, I will just ask it about each part of the instructions (even though I knew how to complete them myself).

Sometimes I got a perfectly good solution; other times, it produced code that was either too generic or completely wrong. This pattern continued for a day or so until I finally realized:

Developed dependency

I had become too lazy to write my own code. I felt bad for relying on it this much. I took the rest of the day off, hoping I would feel better by the next day. But as I started typing again, my mind wandered off to GPT whenever I hit an issue I couldn't solve in under two minutes.

At that point, I knew I had gone too far, and I decided against reaching for GPT, staring at the screen once again for half an hour until my wheels started turning on their own. It felt good to tackle new issues by myself again. Of course, I still consulted Stack Overflow for possible solutions, but when I came across wrong ones, I analyzed why they wouldn't work in my case or, rather, why they were probably too complex for my needs.

The old pattern of wasting time thinking of different solutions was back, and I was glad.

Put trust in yourself, not LLMs

It's quite easy to be fooled by ChatGPT's power and speed in delivering new ideas and solutions, but more than a few times it generated wrong code and claimed that it worked. At that point, you must patiently guide it until it arrives at a correct solution.

And please think twice before pasting a block of code into it. By default, anything you ask GPT is stored for later use in training. Not to mention, sensitive information should be left out of your questions to avoid leaking personal data or, even worse, enabling unauthorized access to your company's network.

Luckily, there is an option to disable this behavior, but most new users aren't even aware that GPT is learning from their input.

Use with caution

This may read as if I'm against using ChatGPT for coding, but on the contrary: take it as a warning. It can help you analyze your code, give you helpful tips on improving it, and point out potential bugs. In that regard, it's like a mentor that's available 24/7.

You can also ask it how something specific works if you don't have the time or energy to go through documentation pages. But as I advised before, you should double-check the information it gives you if you are doing serious work that won't be used only by you.

Ultimately, it all boils down to using GPT responsibly and challenging yourself first before relying on the power of AI.

Feb 5th, 2024
4 min read