Friday, January 30, 2026

CS Teacher Improvement Through Observation

I remember the first time I was observed by a principal. He was a brilliant man with two master's degrees and an ABD PhD. He told me that he didn't understand much of what I was teaching but the students seemed to be getting it and the class ran smoothly. Not much in there to help me improve.

I believe that teaching CS is different from teaching most subjects. But each subject probably has its own nuances. That's why I think that teachers need specific training in teaching their particular subject. I know that there are MS degree programs in teaching reading and, I think, math. Probably more than those as well.

There is limited training in how to teach CS though. There are some degree and certificate programs in teaching CS. As states increasingly require certification to teach computer science there will be more I am sure. Most CS teachers have to figure it out on their own though.

I think we have a lot to learn about how to teach CS well. There are a few people doing research in CS education. A lot of it gets disseminated at SIGCSE which can be hard for K-12 CS teachers to attend. That is both because of cost and because it happens during the school year. A lot of teachers have very limited options for missing school days. If nothing else it is a lot of work to create good sub plans!

Many teachers are resistant to sessions that are research based. That is often because they have sat through too many professional development sessions that, year after year, replace the previous research-based method without giving any one method a fair chance. Or worse, after it has already failed.

It would be nice if teachers had more opportunity to observe experienced CS teachers teach. (Both Mark Guzdial and Mike Zamansky have blogged about that recently – blog post links below.) BTW, if you ever get a chance to hear Mark Guzdial present, I recommend that you do. Especially if the topic is how to teach.

In an ideal world, CS teachers would get to observe teachers in the building where they teach. For a variety of reasons, not the least of which is that many K-12 CS teachers are the only CS teacher in the building, that is often not possible.

CS conferences are a mixed bag. Yes, there are some great presenters. Many of them do try to model good teaching practice. There are not a lot of talks on how to teach though. I gave one at CSTA Online six years ago. (How is it that long ago?) It was well received but we could use a lot more sessions that talk about and model how to teach CS.

I think we could also make more use of the conference “hallway track,” that informal, unscheduled time when teachers find themselves sharing ideas with like-minded people.

At the heart of the issue is that teachers have to be about constant improvement. There is a difference between five years of experience and one year of experience five times.

Anyway, please read the posts linked below. Smarter people than me.

Wednesday, January 28, 2026

Are AI Code Assistants Getting Better or Worse

A friend of mine sent me a link to an opinion piece in IEEE Spectrum: AI Coding Assistants Are Getting Worse, with the subtitle “Newer models are more prone to silent but deadly failure modes.”

Are AI code generators getting worse? The tl;dr  in this article is “Yes” because companies are letting poor programmers train the AI. You should read the article though.

It’s not deliberate of course. It’s just the way the internet works. AI software is not checking to see if the information it is getting is good in absolute terms. It is just checking to see if the user is happy. If the user is happy because they don’t realize that what they have is bad, how is the AI to know?

The term GIGO (Garbage In, Garbage Out) may not be repeated as often as it used to be but it is still true! We have to be careful about who trains artificial intelligence and how it is trained. Do an internet search for “Chatbot goes bad” sometime and you’ll find a large number of cases where AI chatbots have been trained badly. Sometimes trained maliciously. Sometimes just trained on poor data sets.

To me this trend points out a couple of things that we need to teach beginners. In the words of Ronald Reagan, “Trust but verify.” Students need to test their code. Students need to be able to read and understand code. Programmers have to be able to determine if AI is taking shortcuts like leaving out error handling, data validation, and other errors of omission.

We also need to prepare students to think about how AIs are being trained so that they learn how to train AIs well themselves. Even if coding is dead, as one of my former students claims, people will still have to train AI, ask AI good questions, and be able to understand if they are getting the value from AI that they want, need, and think they are getting.

Monday, January 26, 2026

RotWords–String Manipulation Project

BlueSky is the microblogging site for me these days. That is where I am getting ideas and information about teaching computer science among other things. I recently saw the following message.

It’s an obvious possible coding project in my eyes.

  1. Read a word from a wordlist
  2. Remove the last letter and place it at the front of the word
  3. Determine if the new string matches an actual word
  4. Display both the old and new words, if a match is found
  5. Repeat

It could probably be coded easily by an AI, of course, though I suspect students might come up with interesting implementations on their own as well.
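
For what it’s worth, here is a minimal sketch of those steps in C#. The file name words.txt is a hypothetical word list with one lowercase word per line; the details are my choices, not anything from the original BlueSky post.

    // Minimal sketch of the RotWords idea: move the last letter of each word
    // to the front and check whether the result is also a word in the list.
    // Assumes a hypothetical words.txt file with one lowercase word per line.
    using System;
    using System.Collections.Generic;
    using System.IO;

    class RotWords
    {
        static void Main()
        {
            var words = new HashSet<string>(File.ReadAllLines("words.txt"));

            foreach (var word in words)
            {
                if (word.Length < 2) continue;

                // "cars" becomes "scar", "emit" becomes "time", and so on.
                var rotated = word[word.Length - 1] + word.Substring(0, word.Length - 1);

                if (rotated != word && words.Contains(rotated))
                {
                    Console.WriteLine($"{word} -> {rotated}");
                }
            }
        }
    }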

As was pointed out in replies on BlueSky, things get more interesting if they lead to a discussion about the nature of words. For example, a lot of words that end in “S” are plurals of other words. Is there a way to strip plurals from a data set programmatically? (I’ve been thinking about that for my Wordle solver program as Wordle doesn’t use plurals.)
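
One crude heuristic, just as a starting point: drop any word ending in “s” whose stem (the word without the “s”) is also in the list. It will make mistakes – it would drop “news” because “new” is a word – which is itself a nice illustration of the data questions that follow.

    // Crude plural-stripping heuristic: remove a word ending in "s" when the
    // word without the "s" is also in the list. Deliberately imperfect.
    using System.Collections.Generic;

    static class WordListCleanup
    {
        public static HashSet<string> StripLikelyPlurals(HashSet<string> words)
        {
            var result = new HashSet<string>(words);
            foreach (var word in words)
            {
                if (word.Length > 1 &&
                    word.EndsWith("s") &&
                    words.Contains(word.Substring(0, word.Length - 1)))
                {
                    result.Remove(word);
                }
            }
            return result;
        }
    }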

And what is the usefulness of word lists if they have words that are not really words? Or that are not in common use?

We don’t tend to talk about data integrity, data validity/validation, normalization of data, or any kind of data checking at all in K-12 CS classes. We probably should discuss it though. A project like this might be useful in getting that conversation going. Just a thought.

Saturday, January 24, 2026

Dice As a Design Problem

The other day I ran into an interesting programming exercise on BlueSky.

The project description is at 2D Dice Grid Scoring Algorithm - 101 Computing. It’s a cool project. I decided to code up a solution myself. There is sample starter code in Python at that link. I do my fun programming in C# so I started from scratch.

The first thing I had to do was to think about a Die class. I’ve written classes for dice projects many times before. It was a favorite item for me to use when teaching students about designing classes. Just about everyone is familiar with dice. I also brought in some samples to use as visual aids. I had some binary dice with only ones and zeros and some role playing dice in a variety of shapes and numbers of sides.

Students generally come up with the idea that they need to have a face value for the die. They generally also easily come up with the need to display that value and methods to change it to a random value. What they don’t always remember right away is that not all dice have six sides. Some dice have many more than six sides. Eventually they come up with two-sided dice, which we sometimes call coins.

I had a couple of example Die classes from other projects but I decided I wanted to be a bit more visual. So I created an object with the ability to display images. For this particular project I also added an extra method. I added a method to return whether the face value is even – a Boolean value – true for even, false for odd. You know, just to make things interesting. Right now it is a method but I want to change it to a property to avoid unneeded parentheses. I am not a fan of parentheses.

I did cheat a little. I had Copilot create some of the initial work on the code. Copilot, like my students, assumed a six-sided die with values from one to six. I didn’t specify much so that’s understandable. It’s not really satisfying for me though so I will be putting some extra work into things to make the class more flexible. I will add constructors that let a program use different images and numbers of images. After all, just as not all dice have six sides, not all dice have numbers or pips on them.
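
For the curious, here is roughly what the class looks like, minus the image handling, which depends on the UI framework. The names are my own choices rather than the exact code from my project, and IsEven is shown as the property I want it to become rather than the method it is today.

    // A sketch of a flexible Die class. Names are illustrative, and the
    // image display is left out since it depends on the UI framework.
    using System;

    public class Die
    {
        private static readonly Random rng = new Random();

        public int Sides { get; }
        public int FaceValue { get; private set; }

        // Default to the familiar six-sided die but allow any number of sides.
        public Die(int sides = 6)
        {
            if (sides < 2)
                throw new ArgumentException("A die needs at least two sides.");
            Sides = sides;
            Roll();
        }

        // Give the die a new random face value from 1 to Sides.
        public void Roll()
        {
            FaceValue = rng.Next(1, Sides + 1);
        }

        // A property rather than a method so callers can write die.IsEven
        // without the extra parentheses.
        public bool IsEven => FaceValue % 2 == 0;
    }

A two-sided die, new Die(2), is of course a coin.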

What would/do you add to die objects to make them more interesting or useful?

My project looks like this BTW.

Monday, January 19, 2026

Funding for CS Educational Tools

Mark Guzdial posted a link to an interview with Jens Mönig. Jens is the main person behind Snap!, which developed out of Scratch (which Jens also worked on). It’s a great interview and I recommend it. The story of Snap! is an interesting one. I think it is great that SAP is funding the team behind it. This blog post, which sort of rambles a bit (sorry), was inspired by that interview.

There are basically two and a half ways that software for teaching programming and computer science gets funded. One is research funding, usually by universities but sometimes by research groups that are part of major companies. The latter is the half I refer to. The other is commercial products, that is, products that actually make money for companies.

The problem with commercial products is that they are really designed for professional software developers. That means a number of things that are great for professionals but harder for beginners. Complexity is one of those issues. Visual Studio, which I use for my own development and used for years in the classroom, uses a number of different files for every project, for example. That’s just the beginning. Development on professional tools adds features for professionals but often subtracts features that are helpful for beginners. I first ran into this when Visual Basic became Visual Basic .NET and arrays of controls went from intuitive to complex with extra code necessary.

Commercial software often has free versions, which are generally the only way schools can afford to use it. Simple versions that work on a school’s limited resources tend to go away over time though. They don’t pay for themselves.

I have seen other cool tools from commercial companies, or tools commercial companies provided for free, disappear over the years. Corporate research projects generally last while the principal investigator remains interested and can keep getting funding. If the research doesn’t wind up in a commercial product, that doesn’t help with funding.

App Inventor is an exception. Originally developed at Google, App Inventor had an academic sponsor (it resides at MIT these days) and Google provided some seed money to get the open source version started. It transitioned easily from corporate research to university research.

MakeCode (largely a Microsoft Research project) is still going strong. It appears that industry/academic cooperation is helping keep it alive. That combination seems to be key in keeping some projects going.

University research projects tend to last longer than corporate research projects. As long as someone can get grants, usually tied to graduate students coming up with good research topics involving the tool, they keep going. I wonder how well some of these will continue when the principal academics lose interest, retire, or pass away. Some projects have depth of involvement, which is helpful.

Alice, out of Carnegie Mellon, has been going strong for 30 years even though its originator, the great Randy Pausch, passed away in 2008. External funding, required for most academic tools, has stayed strong for Alice. That takes a lot of work to maintain of course.

Most of the long lasting tools have some level of corporate sponsorship. Oracle helping with Greenfoot and BlueJ is another example. There used to be a lot of NSF (US National Science Foundation) money around. Somehow I suspect there is a lot less of it these days. It’s risky to depend on it as well given the rapidly shifting state of US Federal funding.

And then there is Artificial Intelligence to think about. That’s sort of the elephant in the living room these days. If funding agencies (government, non-profit, industry) decide that coding is dead because of AI what happens to funding for the tools educators are using today?

I don’t believe that coding is dead but I know that some people have decided that it either is or soon will be. Computer science education is going through a change caused by the winds of AI. Industry seems to think that it doesn’t need inexperienced software developers. Development of developers has to start somewhere though. One can’t go from zero to experienced expert without starting somewhere.

I believe we need good teaching software. I hope we can keep seeing good things supported and developed in the future. We live in interesting times.

Note that Mike Zamansky wrote a riff on this post. Recommended at Funding for CS Educational tools - C’est la Z

Friday, January 09, 2026

Binary Math–Subtracting by Adding

Some of my readers who have been teaching Advanced Placement Computer Science (APCS) will remember the BigInt case study. It was a case study involving mathematics using large (very large) integers. As released by the College Board it supported adding, subtracting, and multiplying large integers. You will notice that division was not included. In fact, asking students to implement division was part of the exam.

BigInt introduced the idea that multiplication is actually repeated addition. By extension, students were to figure out that division is repeated subtraction.
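
Here is the idea sketched in C# with ordinary ints rather than the arbitrarily large integers BigInt dealt with, and assuming positive operands.

    // Division as repeated subtraction: keep subtracting the divisor and
    // count how many times you can do it. Assumes positive operands.
    Console.WriteLine(Divide(17, 5));   // prints 3

    static int Divide(int dividend, int divisor)
    {
        int quotient = 0;
        while (dividend >= divisor)
        {
            dividend -= divisor;   // subtract the divisor...
            quotient++;            // ...and count how many times we could
        }
        return quotient;           // whatever is left in dividend is the remainder
    }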

Computer science really requires understanding how mathematics works at a deep level. It becomes obvious (one would hope) when trying to understand how Binary, Octal, and Hexadecimal work. We don’t often spend much if any time trying to understand subtraction though.

Recently, on BlueSky, I came across a message by Andrew Virnuls linking to a blog post titled Two's Complement and Negative Binary Numbers that explains subtracting by adding negative numbers.

Let me draw the two previous notes in this post together with some history of mine. Back in my university days I worked on a course connecting some test hardware to a computer. The computer was a Digital Equipment PDP-8. Now the 8 was an interesting machine. It didn’t have a hard drive and it was programmed in assembly language entered in Binary. Whereas most computers we use today use hexadecimal representation (base 16), the PDP-8 used Octal (base 8). The word size was 12 bits. Not 64, 32, or even 16 – 12.

This word size placed some limits, and one of those limits was the number of machine language/assembly language instructions. There was no multiply, divide, or even subtract instruction. We had to write code to do those things, similar to how code was written in BigInt for those operations. We also had to write code to do subtraction. There was an instruction to create the two’s complement of a number though. That was handy. So we wrote code to find and use the two’s complement of a number in order to do subtraction.
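
The same trick can be sketched in a high-level language. Here is what it looks like in C#: flip the bits, add one to get the two’s complement, then add.

    // Subtraction by adding the two's complement. ~b flips the bits of b and
    // adding 1 completes the two's complement, so a + (~b + 1) equals a - b.
    Console.WriteLine(Subtract(10, 3));   // prints 7
    Console.WriteLine(Subtract(3, 10));   // prints -7

    static int Subtract(int a, int b)
    {
        int negatedB = ~b + 1;   // two's complement of b
        return a + negatedB;
    }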

We used the subtraction routine to implement division. Though to be honest, we tried to avoid having to do multiplication or division in our project to keep performance reasonable.

I think we’re all glad that today’s computers have a lot more layers of abstraction than the PDP-8 had! Of course, and a lot of students do not realize this, most powerful assembly language instructions are actually the result of what is called microcode that works transparently behind the scenes.

We keep moving up the path of abstraction. Hal Berenson addressed this recently in a post called 98% of Developers can’t program a computer, which is actually a bit of a success story, including how artificial intelligence is helping with higher levels of abstraction.