Fair warning: this is a post in two parts. First, a project idea; second, musings on the tools I used to create it.
I really do like to write code for fun. Nothing complicated (been there, done that, got the T-shirt - literally) but just little things to "scratch an itch" as they say.
Lately, as I played Wordle, I wondered which letters appeared most often in each position of the five-letter words in my word list. A couple of nights ago, I wrote some code to find out. I had my code output a comma-delimited file so I could use Excel to look at the results. That’s what the image to the side shows.
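The idea is simple enough to sketch. My project wasn't written in Python, but here's roughly what the counting and CSV output look like (the file names and the tiny sample word list are placeholders, not my actual data):

```python
from collections import Counter
import csv

def position_counts(words):
    """Tally how often each letter appears in each of the five positions."""
    counts = [Counter() for _ in range(5)]
    for word in words:
        for i, letter in enumerate(word.upper()):
            counts[i][letter] += 1
    return counts

def write_csv(counts, path):
    """Write one row per letter, one column per position, for Excel."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["Letter", "Pos1", "Pos2", "Pos3", "Pos4", "Pos5"])
        for code in range(ord("A"), ord("Z") + 1):
            letter = chr(code)
            writer.writerow([letter] + [c[letter] for c in counts])

# In practice you'd load your own word list from a file;
# a tiny sample stands in here.
sample = ["SLATE", "CRANE", "STARE"]
write_csv(position_counts(sample), "letter_counts.csv")
```

Open the resulting CSV in Excel and a quick conditional-format or sort per column shows the positional favorites at a glance.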
Now this sort of thing is highly dependent on the word list, of course. But for my list, S is the most common letter in the first and fifth positions. Not surprising, as S is used to make plurals. Wordle doesn’t use plurals, so I note that the second most common fifth letter is E, with Y a close third.
The letter A is the most common second and third letter. The letter E is the most common fourth letter.
If I were ambitious, I could probably use this information to make a smarter Wordle solver. I’m not quite that ambitious. I am toying with gathering some other statistics, though.
I develop using Visual Studio – the full-blown version. That means Copilot jumps in to help. That’s not something I anticipated when I started, but I confess I found it surprisingly helpful. I did specifically ask Copilot to write one method – generate a string array of two-character combinations – but it also jumped in on its own with a couple of small bits of code. I was surprised at how well it anticipated what I wanted.
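For reference, the method I asked Copilot for is a few lines of nested iteration. A Python sketch of the same idea (my project wasn't in Python, and this is my version rather than Copilot's output):

```python
from string import ascii_uppercase

def two_letter_combinations():
    """Return every two-letter string from AA through ZZ."""
    return [a + b for a in ascii_uppercase for b in ascii_uppercase]

pairs = two_letter_combinations()
# 26 * 26 = 676 combinations, ordered "AA", "AB", ..., "ZZ"
```

It's exactly the kind of small, well-specified helper an AI assistant handles easily – which is part of what got me thinking about the teaching implications below.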
The implications for teaching programming are something to think about. On one hand, it’s scary that AI tools can so easily write coding solutions to simple programming assignments. That turns our process of evaluating learning on its head a bit. At the same time, I am not ready to blindly trust AI-generated code, and I do not want students to blindly trust it either. So asking students to test generated code seems like a reasonable thing to assign. Yes, I suppose some students will ask AI to generate test cases, but if we can’t trust AI to write the code in the first place, we can’t trust the generated test cases either.
We could ask students to explain the tests and write related ones of their own. It could be quite a recursive rat hole.
We can also ask students to explain the generated code. We should probably ask them to do that either verbally or by writing it out by hand in class, so they can’t ask AI to do it for them.
What I keep coming back to in my own thinking is a focus on abstraction and top-down design. Can we ask students to break the problem down into component parts and have them prompt the AI to implement the various methods and pieces of code? A focus on design rather than on writing code. We could have students submit the design document and the various prompts they used. Add to that some serious examination of testing and verification.
Students are going to have to work with artificial intelligence. They can’t let it do all the work for them because AI is not I enough yet. I don’t think it ever will be either.