Wednesday, January 28, 2026

Are AI Code Assistants Getting Better or Worse?

A friend of mine sent me a link to an opinion piece in IEEE Spectrum: "AI Coding Assistants Are Getting Worse," subtitled "Newer models are more prone to silent but deadly failure modes."

Are AI code generators getting worse? The tl;dr of the article is "yes," because companies are letting poor programmers train the AI. You should read the article, though.

It’s not deliberate, of course. It’s just the way the internet works. AI software is not checking to see whether the information it is getting is good in absolute terms. It is just checking to see if the user is happy. If the user is happy because they don’t realize that what they have is bad, how is the AI to know?

The term GIGO (Garbage In, Garbage Out) may not be repeated as often as it used to be, but it is still true! We have to be careful about who trains artificial intelligence and how. Do an internet search for “chatbot goes bad” sometime and you’ll find a large number of cases where AI chatbots have been trained badly. Sometimes trained maliciously. Sometimes just trained on poor data sets.

To me this trend points out a couple of things that we need to teach beginners. In the words of Ronald Reagan, “Trust but verify.” Students need to test their code. Students need to be able to read and understand code. Programmers have to be able to determine if AI is taking shortcuts like leaving out error handling, data validation, and other errors of omission.
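To illustrate the kind of omission students should learn to spot, here is a minimal hypothetical sketch (the function names are my own, not from the article): the first version is the sort of happy-path code an assistant might suggest, and the second is what it looks like after a careful human review.

```python
# A hypothetical AI-suggested function: it "works" on the happy path,
# but silently omits validation and error handling.
def average_unsafe(values):
    return sum(values) / len(values)  # raises ZeroDivisionError on an empty list


# The same function after review: the omissions are made explicit,
# and bad inputs fail with clear, intentional errors.
def average(values):
    if not values:
        raise ValueError("cannot average an empty list")
    if not all(isinstance(v, (int, float)) for v in values):
        raise TypeError("all values must be numbers")
    return sum(values) / len(values)
```

Both versions pass a quick happy-path test, which is exactly why reading the code matters: only testing the edge cases, or reading carefully, reveals what the first version leaves out.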

We also need to prepare students to think about how AIs are trained so that they learn how to train AIs well themselves. Even if coding is dead, as one of my former students claims, people will still have to train AI, ask AI good questions, and be able to understand whether they are getting the value from AI that they want, need, and think they are getting.
