For most of my career I have been hearing that some day computers will write all the code and human programmers will no longer be needed. Or at least, not as needed as they are today. Are we getting close to that time – finally? And if we are, what does it mean for teaching computer science?
Recently, the CS education world has been discussing GitHub Copilot.
GitHub Copilot uses the OpenAI Codex to suggest code and entire functions in real-time, right from your editor.
While some of the discussion has been about the lawsuit against Copilot (GitHub Copilot litigation), much of it has centered on what the tool means for cheating by students. More recently, the use of ChatGPT to write answers to programming questions has been getting attention.
For example, this year’s Advent of Code seems to have been “invaded” by ChatGPT, which climbed the leaderboard by answering the problems in seconds. (Adventures With ChatGPT: Advent of Code Edition | Tabs, Not Spaces) Of perhaps even more concern to teachers, ChatGPT seems to do a somewhat satisfactory job on Advanced Placement Computer Science A questions. (ChatGPT passes the 2022 APCSA free response section)
I’d be very surprised if students are not already using these tools. This brings up several questions. One is: how do teachers keep this from happening? We probably can’t. So how do we detect it when it does happen? Do we use these tools ourselves to see what sort of code is generated for our assignments? That seems like yet more work for people who don’t have enough time as it is.
Another question, which students are sure to ask, is: what is the point of students writing code that artificial intelligence can write more easily and faster? If you read the article above about putting the APCS A questions through ChatGPT, you’ll see that the results are not perfect. So for the time being it looks like good programmers can still write better code than the AI. How long that will last is anyone’s guess. If history is any guide, it will not be long.
I remember when optimizing compilers started generating more efficient code than the world’s best assembly language programmers could write. It was painful for some and a real boon for others. It didn’t completely do away with the need for assembly language programmers, but it did reduce the need.
What do we tell students who ask “what’s the point of learning to code?” My thought is that we talk about the need for human oversight of AI-generated code. We need to verify that the code works as we want it to work, and that means we need to understand code. We’re also going to need to fine-tune generated code for some time to come. Understanding how code works also helps in writing good instructions for the AI that generates it.
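To make the oversight point concrete, here is a small hypothetical example (mine, written by hand, not actual output from Copilot or ChatGPT) of the kind of plausible-looking code an AI assistant might suggest. It runs fine on typical input; only a reader who understands the code notices the edge case.

    # Hypothetical AI-style suggestion for "return the average of a list
    # of scores". It looks right and works on normal input, but it
    # crashes on an empty list.
    def average(scores):
        return sum(scores) / len(scores)  # ZeroDivisionError when scores == []

    # A human reviewer who understands the code catches the edge case
    # and decides how it should be handled for the assignment at hand.
    def average_reviewed(scores):
        if not scores:
            raise ValueError("average requires at least one score")
        return sum(scores) / len(scores)

Spotting and fixing that kind of gap is exactly the verification work that requires understanding code.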
Of course, there is a lot more to computer science than just writing code, but programming languages are the language of that study. Learning assembly language still helps people understand how computers and computing work. So will learning higher level languages.
The AIs will get better. Our conversations with students will get harder. Cheating is always going to be a challenge. We live in interesting times.
3 comments:
Students writing their own code has been something of an enigma in CS at best. How many times have we gone on record as wanting to promote collaboration while, at the same time, expecting everyone to learn everything? AI could become the great collaborator!
My way of trying to address this is to sit next to a student when looking at their program and ask them what a particular section of code actually does. If they're able to explain it, it seems to me that the goal has been met. You can quickly determine who is a copier/paster with no idea of what's happening.
My concern about AI writing code is in the bigger picture. Great programmers have always done the innovative: writing things that have never existed before. Can AI do that? And can AI debug its own code?
There's no doubt, in my mind, that this is going to happen. We'll live through growing pains but hopefully the world comes out better as a result.
Thanks for raising the issue, Alfred.
I am a tech consultant (not a developer). I use Copilot to develop the scripts I write and it works really, really well most of the time. I determine the logic, Copilot writes the code, and I tweak, review, and test. I'd vote for modeling the real world. It kinda reminds me of banning calculators from tests. In the real world you always have a calculator.
I taught CS for a decade. I think if I was still teaching, I'd (continue to) focus on the design/logic part of it.
It may write code, but can it also prove something about the code it writes?