Monday, March 30, 2026

Talking Artificial Intelligence With Richard Crane from MILL5

I'm trying something new to me. I've been really interested in AI recently, and when my friend Rich approached me about sharing his thoughts on AI, I jumped at the chance. Richard Crane and I chatted for a while about artificial intelligence. What follows is a transcript of that conversation.

Introduction

Alfred Thompson:

Welcome, everyone. Today I’m joined by Richard Crane, Founder, CTO, and Chief AI Officer of MILL5. Richard and I go way back—we’ve known each other since 2003 when we were colleagues at Microsoft.

While Richard has been deeply embedded in the AI space for over a decade, the rest of the world is just now catching up to the shift he has been helping to lead—moving AI from a simple developer tool to a complete reimagining of how we build and solve problems.

I’ve invited him here to discuss the new reality for developers, the changing landscape of computer science education, and how experienced engineers are evolving into AI orchestrators.

Richard, thanks for joining me.

Background and Relationship

Richard Crane: Alfred, no problem. I’ve been looking forward to this. You and I go way back—from our time at Microsoft. And later on, you taught computer science in high school, and it just so happens you taught both my kids. So this is an amazing opportunity for us. Thank you.

Alfred Thompson: You’re very welcome.

Balancing Roles: Building and Documenting AI

Alfred Thompson: You are the Founder, CTO, and Chief AI Officer of MILL5—and also the host of the Inventing Fire with AI podcast. How do these roles—building the technology and documenting its evolution—inform each other?

Richard Crane: That’s a great question. Let me start by saying we formed MILL5 about ten years ago. At the time, I actually held every title—CEO, CFO, CTO. There was no “Chief AI Officer” title back then, but we were already doing AI.

My business partner, Sri Bhupathi, has been doing incredible work in AI for years. And AI encompasses far more than what people see today with tools like ChatGPT, Claude, or Grok. It includes natural language processing, computer vision, machine learning, and many other technologies—and we’ve been working across all of them for a long time.

To your question—how do building and documenting inform each other? I’ve always considered myself a doer. I learn best by doing. Talking about what I’ve built helps me articulate and reinforce what I’ve learned.

AI is evolving so rapidly—there’s something new every day. Right now, I’m deeply focused on knowledge distillation and trying to reach a very high level of expertise in that area.

So the process goes hand in hand:

- Building helps us serve our customers at MILL5

- Documenting helps share that knowledge with the broader community

Early AI Adoption and the “Light Bulb” Moment

Alfred Thompson: You’ve been working with AI for over 10 years—long before the current hype cycle. What was the moment when you realized AI wasn’t just another tool, but a fundamental shift in how we build?

Richard Crane: A few things come to mind. First, we’ve been doing AI since day one at MILL5. One thing about our team is that we constantly push ourselves to the cutting edge. We always aim to understand what’s coming before everyone else.

If you go back 10 years, people might say, “You weren’t doing AI back then.” But we absolutely were. In fact, one of our AI solutions for Olympus was featured in a Microsoft Build keynote in 2019.

I remember sitting in the audience while it was being presented, and suddenly my phone started blowing up—friends and colleagues asking, “Is that you guys?” And I said, “Yes, that’s us.”

As for the “light bulb” moment—it wasn’t a single instant. It was a series of realizations.

One key moment was in January 2019 during our global company meeting. At that time, we had grown from two people in a room to a company operating in seven countries.

I pulled up our company-wide slide deck and made a very clear statement:

“Everyone in the company needs to know AI. Period.”

Since then, AI has been a core part of every company discussion, multiple times a year.

So in 2019 I knew it, but there have been many more light bulb moments. I'll give you one more. About two and a half or three years ago, people were saying, "I can use ChatGPT for this," or, "Maybe I could use it for coding." And there were some aspects of coding where it could generate code. But I can honestly say a major light bulb moment for me came in January of this year. I'm a seasoned software engineer. I try to be humble, but I feel like I'm in the top 1% on the planet; that's what people keep telling me. And in January of this year, I found I was able to build full-fledged systems entirely by myself.

And I want to enable that for every single developer I have on my team. And so that's a light bulb moment as well where things have shifted big time and they keep shifting. I expect another shift sometime over the summer and another shift sometime in October, November time frame. So we're going to see a lot more light bulb moments.

AI Creativity and Human Advantage

Richard Crane: One interesting observation from my current work: I’m building different applications—like a financial operations cost analyzer for the cloud—and while they are very different, they often end up looking surprisingly similar in style and structure.

When I want creativity, I’ll tell the AI: “Go wild. Be creative.” But often, it doesn’t vary much.

So how do I get variation? I switch models. I might move from Claude to ChatGPT, or to Gemini, or another tool entirely.

That’s how I introduce diversity into the output.

At the end of the day, though, I still believe humans have the edge in creativity—by a wide margin.

Alfred Thompson: So you are incredibly productive using AI. How much of that is due to your prior experience in software development? Can someone without a solid technical foundation ever truly close the gap, or does technical debt eventually catch up with them?


Richard Crane: This one is so hard for me to even talk about. I get calls from parents whose kids are in computer science programs now, or just graduated, asking, "Is my child going to have a job?"

About two years ago, I would have said, as long as they learn AI, they're going to be fine.

These days I don't know if that's the case. In fact, I don't think it is. I hired two interns, not this past January but the January before, and I set them loose. I had a laundry list of projects that I always wanted to do and never got done, and I set them on one of those. I mentored them, told them certain things, educated them. They were programmers from a local university; I won't say which one.

I set them loose to develop it, and I gave them access to every AI tool I had, and I have access to all of them. Three weeks later, they sort of had something done. It was good but not great, and it didn't work exactly the way I wanted. I would probably use the word janky; that's a fun term I hear sometimes in software development. I couldn't ship it. In fact, I don't think I could really use it at all. And I had let them do their thing.

But then I took those same requirements and went at them with AI. I got it done in two hours.

So: two interns, three weeks, and it didn't meet the need.

In two hours, because I am somebody who knows what they're doing, with thirty-plus years of software experience in startups, at Microsoft, and at my own companies, I can direct AI the way I want and get it done. I'm working on projects right now that would have taken two years, and I'm getting them done in seven days or less.

So the answer is that prior experience really matters. The senior principal architects and developers matter. In fact, I just saw an episode with Scott Hanselman. You remember Scott, right?

Alfred Thompson: Oh, yeah.

Richard Crane: So Scott just said that he's tired. Why? He has that energy to go develop and do things, and AI gives him fuel to do even more, but he's not the same young guy who was trying to take on the world with software development. He's a fit guy, don't get me wrong, but he's still not that same young guy.

It's a weird time, and I do agree that prior experience is a big thing. It's an important thing.

Alfred Thompson: So if the traditional entry-level programming job is drying up, as it seems to be, how does a student today prove their value? Should they still be grinding LeetCode problems? Should they be focused on building autonomous AI agents?


Richard Crane: You've got to put yourself out there, right?


Alfred Thompson: What should they be doing?


Richard Crane: I think there's value in learning anything and everything about software development. Imagine you had an agentic system. Right now I have Openclaw with seven agents running on it. I have a scout that's looking for ideas. I have an assistant that's helping me coordinate those ideas. I have a developer agent and an architect agent, and those two are responsible for building the plan, building the spec, and then implementing it. There are a couple of others, but those things are not going to be any good if you, the person, aren't able to express what you want.
In fact, I had this conversation last night at a local restaurant. A guy told me, "Oh, you don't need to know anything anymore." And I said, that's not true.

Yes, you can express something and you will get something out of it. But will it be exactly what you want? Will it perform? Will it scale? Will it function correctly? Will there be little things wrong here and there? Anybody I know can vibe code an app these days and get the look and feel down. But when I look at it, I'm like, oh, what did the AI just do? There's something wrong right there. So I ask the AI, "What's going on there? There's a problem there." And it says, "Oh, I'm doing this." Usually my response is, why the heck are you doing that? And then here's the follow-on: why are you not doing this instead?

And the thing I love is that the AI says, "You're absolutely right. I should be doing that." I sit back and think, yeah, I know I'm right. But that's the problem: people don't know what they don't know. And that's a big gap.
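The role pipeline Richard describes (scout, assistant, architect, developer) can be sketched in a few lines. To be clear, the function names and the toy "ideas" below are purely hypothetical illustrations, not his actual Openclaw configuration; a real system would call an LLM at each step, while here each role is a plain function so the hand-off structure is visible.

```python
# Hypothetical sketch of a role-based agent pipeline: a scout proposes
# ideas, an assistant coordinates and picks one, an architect turns it
# into a spec, and a developer "implements" the spec. Each role consumes
# the previous role's output.

def scout(_):
    # Propose candidate project ideas (stand-in for an idea-hunting agent).
    return ["cost analyzer", "log summarizer", "status dashboard"]

def assistant(ideas):
    # Coordinate: choose one idea to pursue.
    return sorted(ideas)[0]

def architect(idea):
    # Turn the chosen idea into a minimal build spec.
    return {"name": idea, "steps": ["design schema", "build UI", "wire API"]}

def developer(spec):
    # "Implement" the spec by reporting each completed step.
    return [f"{spec['name']}: {step} done" for step in spec["steps"]]

def run_pipeline():
    task = None
    for role in (scout, assistant, architect, developer):
        task = role(task)   # each agent hands its output to the next
    return task

result = run_pipeline()
```

The point of the structure, as Richard notes, is that the pipeline is only as good as the intent the human expresses into it.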

AI and CS Education

Alfred Thompson: OK, so my focus has been on CS education. What's the "new math" of computer science? Do we still need to teach manual memory management and data structures if an AI can handle them in seconds?

Richard Crane: I think it's needed. It's not that AI can't handle them in seconds. Consider one data structure versus another, say an array versus a linked list. Every time you have to add a slot to a fixed-size array, you have to destroy the array, create a new one, and copy everything from the original into the new one. From a memory management perspective and a performance perspective, that's a nightmare. What you don't want is the AI saying, "I think I should use an array for everything," when some other data structure, a hash table or a linked list or a dictionary, would be better. AI knows about all of those things. But quite frankly, I feel like AI is sometimes a very capable junior engineer: it knows everything, but it doesn't know exactly what it should do.
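The array-versus-linked-list trade-off Richard mentions can be made concrete with a small sketch (mine, not his; the class names are illustrative). Growing a fixed-size array by one slot means allocating a new block and copying every existing element, while a linked list inserts a node in constant time with no copying.

```python
# Illustrative comparison: a naive fixed-size array that must reallocate
# and copy on every append, versus a singly linked list whose head
# insertion never copies anything.

class FixedArray:
    """Array that is reallocated and fully copied to grow by one slot."""
    def __init__(self):
        self.data = []       # stands in for a fixed block of memory
        self.copies = 0      # total elements copied during reallocations

    def append(self, value):
        new_data = [None] * (len(self.data) + 1)   # allocate a bigger block
        for i, v in enumerate(self.data):          # copy the old contents
            new_data[i] = v
            self.copies += 1
        new_data[-1] = value
        self.data = new_data


class LinkedList:
    """Singly linked list: inserting at the head is O(1), no copying."""
    class Node:
        def __init__(self, value, nxt=None):
            self.value, self.nxt = value, nxt

    def __init__(self):
        self.head = None

    def prepend(self, value):
        self.head = self.Node(value, self.head)


arr, lst = FixedArray(), LinkedList()
for n in range(100):
    arr.append(n)
    lst.prepend(n)

# After 100 appends the naive array has copied 0 + 1 + ... + 99 = 4950
# elements; the linked list copied none.
```

Real array implementations soften this by over-allocating (growing capacity geometrically), which is exactly the kind of judgment an engineer, human or AI, has to bring to the choice.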

Alfred Thompson: At this point, are we teaching students how to be pilots of these AIs, or should we still be teaching them how to build the engine? Which skill is more valuable in 2026?

Richard Crane: Well, in 2026, if you know how to build the engine, you're going to do extremely well. The challenge is that piloting AI systems is where things are going in the future; it just won't be the main focus this year or next.

Unfortunately, building the engine is going to become lost knowledge. People are not going to know how to do those things, and that happens in a lot of industries: there are things we did 100 years ago that we simply don't do today, and writing code is going to be one of them. In 100 years we will absolutely not be writing code. With AI moving so fast, and with the evolution and innovation around it, it's just a matter of time. The question is how long.

Yeah, well, I know your passion for education. You've been doing technology and education for so long, right?

Alfred Thompson: Yeah, I have, and that's where my passion is right now. One of the things I've always pushed is creativity. I've always told students that I want them to be creative. A teacher I really respect once said that if you get 23 student projects that all look alike, you gave them a recipe, not a project, and you didn't get what you really wanted out of it. You really want to see 23 projects that all look different.


Richard Crane: That's true. In fact, this is a great thing to think about with AI. Give the same task to a bunch of your employees, say "go build this," and you're going to get all different answers. Why? Because there are hundreds of ways to do the same task in software engineering. Will the AI come up with 500 ways?

A Look Into The Future


Alfred Thompson: All right, let's look a little into the future. We are moving past chatting with the box. In five years, will the title "software engineer" even exist, or will we all be product architects?


Richard Crane: I think the software engineer will exist. It will be there for those engineers who treat learning as an art, as craftsmanship, who like to learn everything there is. But they'll be doing so much more. We used to see developers and engineers focus on one thing. In fact, we were talking about this on a team I'm working on right now. We're doing agentic development across the board, in a production product no less, a crazy production product. One of the guys has a specialty in UI. Guess what? When I brought him onto the project, I said, you're no longer just the UI guy. You're also the DevOps guy, and the database guy, and the mobile app guy, and the front-end web guy, and the API guy.

Of course you're probably going to specialize in certain areas, but you're doing everything. So yes, the title software engineer is going to stay around, but those software engineers who remain are going to be doing everything. Now, product architects? Absolutely. In fact, at that same restaurant last night, I was sitting at the bar with one of my employees, one of my top guys. I had my laptop with me, and I opened it up. We were talking about an accelerator we were building for some of our customers, which we were, of course, vibe coding.

It's a pretty extensive accelerator tool. It's not going to be a product of ours; it's more to help our customers. My engineer looked at me and asked, "How many lines of code did you write on this?" I said zero. We were talking about a feature my business partner wanted in this accelerator, and I said, here's the product spec I created for that feature. I pasted the spec in. I pasted in an image, a sketch I had drawn, and said, "I want it to look like this." It got it done in about eight minutes.

Then my engineer was like, whoa. He said, "Well, I don't like this. What if we had this?" And there was a graph, a dynamic graph. You probably know this: developing graphs is not easy, and it can be miserable, but the AI just generated it. It was very cool from the first moment we saw it, and then we made it cooler. We didn't write a line of code. All we expressed was, "Can you do this? We would like this. We think this would be cool." And you know how it responds. It's very, what's the term, sycophantic. Very agreeable. It says, "Well, that's a great idea. I'll get right on it."

And literally two minutes later, maybe even 30 seconds, we were at the bar talking so I can't say exactly, but definitely two minutes or less, it did it. We were taken aback, because it wasn't a simple ask. I was just like, wow. So there are going to be a lot of product architects and a lot fewer software engineers, but far more capable software engineers. The knowledge a software engineer will have in this next wave is going to be so much broader. Even with AI and agentic coding and vibe coding and things like that, there are topics I know now that I didn't know five years ago.

Alfred Thompson: OK, one last question to wrap it up. As models become more capable, what is the one human skill that you believe will remain AI-proof?

  
Richard Crane: Well, let me preface this. Anybody that's creative, art-centric. Anybody that's trade-oriented. Take that same developer sitting next to me, one of the best developers I know. He will get down to assembly code and look at ones and zeros to figure out bugs; he had to on one of the projects he's on. And he asked me, "Should I just become a plumber?"

And I said, I don't know, do you want to? He does a lot of really great home repairs and do-it-yourself projects. The plumbers are making a lot of money right now. So are the electricians and the construction people. They're all building AI data centers.

But at the same time, look at the people developing art, painting, music. Google recently released something to generate music with AI, and it's OK. I don't think it's great, and you can see the clear difference between AI making that and an actual human being making it. So I think AI is going to let us be more creative, to focus on our art and those kinds of capabilities. In fact, somebody said a while back, "I don't want AI to generate an image for me. I want it to do my dishes."

I still think, to this day, there is a distinct difference between AI doing that type of work and a human doing it.

So yeah, there's a lot to come, and software engineering isn't going away, that's for sure. It's definitely changing, though, and one thing I find interesting is: how do you not lose that knowledge? There are going to be scenarios where people aren't teaching those things anymore, and because they don't get taught, we'll eventually lose them. So how do you keep that knowledge?
One other thing: people worry, "Am I going to lose my job?" In fact, I had that conversation not even an hour ago. It was about a role that wasn't technical, and the person said, "Hey, I might lose my job," and I said, why?

Somebody, I think it was Matthew Berman, one of the AI podcast influencers out there, automated one of his employees' tasks. He didn't intend to; he was just trying to automate his business, and in doing so he took away about 90% of the work that person was doing. But that freed the person up to do so many more things, because that work was trivial, menial, tedious minutiae. AI could do it faster and better than the human, and now the human can work on creative things, big things. In the process, he's doing ten times more work than before, because of AI.

So we're going to see a productivity revolution; it's already happening. And it's funny, I was thinking about my employees, and I was thinking about me.
 
I no longer have a company that is just a bunch of software engineers. Every person is their own software development team. So I have a company of hundreds of software development teams, where each person who stood before as a software engineer is now an entire team unto themselves. That's the way I look at it.

Closing Remarks

Alfred Thompson: Thank you. I appreciate your time. There's a lot to digest.

Richard Crane: Alfred, thank you so much. This has been a pleasure.

Tuesday, March 03, 2026

Computer Programming or Software Development

My friend Pat Yongpradit has a post on LinkedIn that got me thinking. It starts with a key statement: "Computer programming (coding) is not equal to software development." Now I tend to think of those as similar if not identical, but Pat points out that "Computer programmers and software developers are codified differently in the BLS data." (BLS is the US Bureau of Labor Statistics, BTW.)

Interesting. So what is the difference? Computer programmers write code. The BLS describes computer programmers:

Computer programmers write, modify, and test code and scripts that allow computer software and applications to function properly.

Software Developers do more. The BLS describes software developers as follows:

Research, design, and develop computer and network software or specialized utility programs. Analyze user needs and develop software solutions, applying principles and techniques of computer science, engineering, and mathematical analysis. Update software or enhance existing software capabilities. May work with computer hardware engineers to integrate hardware and software systems, and develop specifications and performance requirements. May maintain databases within an application area, working individually or coordinating database development as part of a team.

A lot more words in that second job description. The BLS projects growth in the need for software developers and a decline in the need for computer programmers. I'm not so optimistic. Many of the recent layoffs at tech companies appear to me to reflect declining numbers of software developers. I could be wrong; maybe there are, or were, a lot more people just doing computer programming than I think. The industry keeps changing.

In my very first software jobs, back in the late 1970s, I would characterize my work under the software development description. While I did do some programming from specifications and design documents written by others (computer programming) I rapidly moved into meeting with users, analyzing needs, and designing and developing software and utility programs. Job titles may have been different but that was the work.

What may happen is that software development involves less coding than it has in the past because of AI. At least coding by humans. So BLS is probably right about a decline in the need for computer programmers. At the same time, if software developers spend less time doing actual coding they may have more time for higher level (if that is the right term) thinking and involvement in design. Unless AI starts doing more of that. So maybe we will not need more of them. Or perhaps AI will make it possible for more people to be software developers who wouldn’t be that now. We’ll see I guess.

My undergraduate degree is in Systems. One of the goals of the program was to train people to interface between most people and computer systems. In other words, to understand the needs that people and businesses have and translate them into what computer programmers need to know to write software. For a long time, that sort of work involved two sides and sometimes three. That is to say, sometimes there was a user/client, an analyst, and a programmer. Sometimes the latter two roles were one person.

Knowing how to write code was always essential because code is the language of computer science. Not knowing how to code was seriously limiting for someone trying to design software. I think that is always going to be the case at some level. So I think software developers, even those who prompt AIs, will always need to know some coding. More than just coding though, I think that students, anyone who is going to interact with computers, and that includes, of course, software developers, need to have a background in computer science.

Computer science is not just coding but an understanding of how computers work. What is computer logic? What is computational thinking anyway? AIs have a lot to learn, and people with a computer science understanding are who AI is going to learn from. We need to think of K-12 computer science as computer science, foundational ideas and concepts, and not just a class in how to write code. We need to prepare people to be software developers, not computer programmers.

Mike Zamansky has a couple of recent posts on why CS still matters in schools that I think are worth a read:

Interested in seeing what the BLS thinks of employment changes because of AI? Check out Incorporating AI impacts in BLS employment projections: occupational case studies

Sunday, March 01, 2026

Selling AI Before It’s Time

Artificial Intelligence has been big in the news the last few days. A lot of the talk has been about the Trump administration designating Anthropic a supply chain risk. The US Department of Defense (its official legal name) was unable to agree to contract terms with Anthropic. You can read Anthropic's statement here: Statement on the comments from Secretary of War Pete Hegseth.

There are apparently two sticking points, both involving the use of Anthropic's AI model, Claude, for:

  • the mass domestic surveillance of Americans
  • fully autonomous weapons

The first Anthropic opposes on general principle. The second because Anthropic does not believe that AI is ready to handle fully autonomous weapons. I'm surprised (OK, not really) that the first is an issue, because the DoD says that using it for mass domestic surveillance would be illegal (probably true) and that they would not do it. Well, some of us remember the CIA snarfing up data on Americans by getting it from overseas, so I can see why Anthropic might want more assurance than "trust me."

The fully autonomous weapons question is potentially even more concerning. Anthropic doesn't believe their AI is ready for that. I wonder if it ever will be. There are reports that OpenAI's tools took part in mission planning for the recent strikes against Iran. There are also credible reports that those attacks hit a school and killed over 80 schoolchildren. Did AI pick the targets alone? Was there human oversight? I have no idea, but clearly things were missed. At least I hope they were missed; I'd hate to think that event was intentional. Dare we let AI make these decisions?

There have been some studies of AI used in war games. These studies have resulted in headlines like "AI simulations constantly opting for nuclear strikes, terrifying study shows." AI models do not have human sensibilities or share human ideas of going too far. Apparently, these AI tools have not been trained to follow Asimov's Three Laws of Robotics. I wonder if the people developing AI today are even aware of them. I doubt that many government officials are, nor do they really understand the risks of AI controlling weapons. No one really does, but if the developers behind a tool say it isn't ready, perhaps we should believe them!

I was reminded of the old Paul Masson advertisements in which Orson Welles would dramatically declare, "We will sell no wine before its time." The point was not to rush things, and to let the process complete until the wine was completely ready. It appears that some people are pushing AI into places where AI is not ready to perform adequately. That is very unlikely to give a good result.

Monday, February 23, 2026

Who Is Driving Changes to Computer Science Education

There are a lot of changes happening at code.org. The Slashdot article linked there lists several of them. The changes include a number of personnel moves, President Cameron Wilson stepping aside, Chief Academic Officer Pat Yongpradit leaving to join Microsoft, and some staff layoffs, but the change in direction, toward AI, may be the most concerning. From Hour of Code to Hour of AI? Some interesting comments follow that post.

The questions top of my mind are "who is driving the direction of CS education" and "is CS education moving in the right direction?" A lot of people believe that industry is pushing CS education in the direction of being vocational. The new focus on Artificial Intelligence often feels like a vocational direction.

My involvement with computer science education predates code.org and even CSTA, so I have seen a lot of changes. In my first teaching days, computer science teachers were pretty isolated. There was SIGCSE, which accepted K-12 teachers, though "welcomed" sometimes felt aspirational rather than actual. ACM, of which SIGCSE was and still is a part, was doing some support for CS education. Cameron Wilson was a huge part of that and worked on policy.

CSTA was developed by some wonderful people in and around ACM. This started the real movement toward expanding K-12 CS education. CSTA helped train and organize teachers to push for more CS education. Code.org came a bit later and brought something new to the effort.

Code.org brought money and industrial production values: from the first set of videos that went viral to some very good curriculum resources, as well as connections to industry and political leaders. Getting policymakers to push for CS education stepped up.

We’ve come a long way.

Coming back to my earlier questions. Is industry driving the directions that CS education is moving? A lot of people think they are. Industry has money and it has funded a lot of the work by code.org and CSTA. The modern Golden Rule is that the people with the gold make the rules after all.

Industry has some motivation here. I spent a few years working at Microsoft myself, where my job was to promote the use of Microsoft tools for teaching. I didn't get much direction about what to teach. I always felt that teachers should decide what to teach, and I just wanted to help teachers find ways to use tools to teach those concepts. Teaching computer science as vocation was always there, though. Senior managers often told me that industry needed more people to know CS because there were jobs that needed to be filled.

CS as vocation has always been a selling point for CS education, of course. It's what helped sell school boards and other elected officials. Among teachers, that was usually a secondary motivation. For a lot of teachers, including me over time, CS education became more about understanding how the world works. We don't teach physics because we want to make more physicists. We teach it so that students understand the world around them.

People who are not working for tech companies often have to use computers and make decisions about computing. From spreadsheets to databases to internet searches. And now AI. People in all walks of life use computers. Understanding computer science can make those people more efficient. Computers are an important part of our world.

It seems like all the big tech companies are betting huge sums of money on AI. There is a lot of pressure to move the direction of CS education toward AI. Is the industry push vocational in intent? Is it all about helping these companies make money? CSTA and code.org are both pushing AI these days. Is this because of industry (the gold making the rules?) or would it be happening independently?

That leads to the second question – are we moving in the right direction? I think that question may be different for K12 and for university. Personally, I still think CS education in K12 should be about understanding, not vocation. Someone else can address higher education, but K12 should be about preparation for life and not for vocation, at least in comprehensive schools.

So is AI the right direction? I think it is indisputable that AI is important to learn. Students should learn prompting, and they should learn what AI can and cannot do. They should also learn how to think about what AI should not do. They need to know something about how AI works, and that is core computer science.

I think that computer science, in the old analogy, is the dog and AI is the tail. The tail should not wag the dog. Making AI the focus at the expense of basic computer science would be a huge mistake. We have to teach the basics that make AI possible. Students need to understand where AI comes from and where it might go, and understanding code is an essential part of that. There is always going to be more to CS than just AI. We didn’t stop teaching arithmetic when calculators were invented. We should not assume that AI code writers mean we can stop teaching basic computer science.

CS in K12 should not be just vocational. Is industry driving CS education? I fear they may be. Are we moving in the wrong direction? Maybe. If so, it will be up to educators to provide some course correction. 

Saturday, February 07, 2026

AI Tutors and the Human Connection

I recently shared a quote on Facebook:

Unless our students know that we care, they will not learn from us.

I commented that I wondered if an AI teacher could get students to think it cares about them. I really believe that a connection between student and educator is important for a good educational experience. Several people on Facebook indicated that they think an AI tutor will be able to convince students that it cares. Is a major concern I have about AI tutors misplaced?

Thinking about this, I recalled variations of the saying:

The secret of success is sincerity. Once you can fake that you’ve got it made.

Can Artificial Intelligence tutors fake caring about students? I wonder.

Initially, I thought no, not going to happen. Now I am not so sure. I have been thinking of my own interactions with Alexa from Amazon via their smart devices. Attempts to be personal with the AI, for example saying “thank you,” elicit what feel a lot like personal responses: Alexa wishing me a “good night” or suggesting I “keep warm out there.”

I recently had a conversation of sorts with Copilot about books I am interested in reading. The conversation felt a lot like talking to a real person.

Also, a friend of mine (Richard Seltzer) recently shared a book he was working on titled “How to Partner with AI: A New Kind of Relationship” (A pre-publication pdf of the entire book is available here for free.) The book reads a lot like a conversation between two real people rather than a person and a computer program. In fact it feels a lot like a conversation among friends.

So maybe AI tutors will get students thinking they care. Whether the program is faking that it cares or really cares is more of a philosophical question than a practical one. It’s a question well worth talking about, of course, just like asking whether computers really think or whether they can be truly creative. Practically speaking, though, does that mean AI tutors can replace human teachers? I think it is more complicated than that.

There is also the matter of what to teach. I read someone recently saying that human teachers teach what they want but that students are not interested in learning and that AI tutors will teach things that students are actually interested in learning. That may be true but is that what we really want? Would that meet the needs of a real education?

What I see often is autodidacts attempting to promote learning that works for them as being the way that everyone should learn. That is decidedly not the case. Many, perhaps most, students need some external motivation and some direction.

I love the idea of students learning more about the things they are interested in knowing. There are things that students need to know, though, and students are not always interested in learning them all. We have required courses for a reason! Learning all about football at the cost of not learning any mathematics is probably not a good thing. Students are masters of distraction – both of becoming distracted and of distracting others. “Others” includes instructors!

Perhaps that will work out. Perhaps an AI tutor will work mathematics into the football lesson. It could happen but will it?

There is also the question of who is teaching the AI. Will the AI tutors have a good bias or a bad one? Will it be trained to better society or to make it more compliant? Will the students wind up retraining the AI in unhealthy directions? We have seen AI chatbots turn very ugly with help from the internet. Who will monitor these AI tutors? Parents? Not likely.

We’ve also seen AIs get a lot of things wrong. They are not very good at validating sources of information. Human educators are a lot better at that.

I can imagine AI tutors working out very well. I can also imagine them turning out very badly. What I am strongly concerned about is AI tutors for the poor with human educators for the rich. Perhaps the human teacher supplemented with an AI tutor, or an AI tutor supplemented with a human supervising instructor. But it is clear to me that many of the rich are more interested in using AI to save money by replacing people than in making things work better.

Relegating the masses to AI tutors is a high-risk proposition with the potential of holding the masses back. Autodidacts with high self-motivation and a good AI tutor may go far. I am not sure that is the way to bet for most students though.

Sunday, February 01, 2026

Reminiscing - When Computers Had Lights

Back before the personal computer age, computers had lights and toggle switches. One could use the switches to program the computer and read answers in the lights, all in binary of course. We also used these tools for debugging: one could enter a memory address using the switches and see what was in that location, data or instruction, in the lights.

If a computer program was hung in a loop, one could halt the computer and see what address and instruction were part of the loop. It was a useful debugging tool. Similarly, if the computer halted for some reason, an error code might be displayed in the lights.

It wasn’t all seriousness though. Many operating systems would display something in the lights when the computer was idle – not doing real work. Usually this was some sort of animation: lights racing through the strips and rows of lights. Digital Equipment Corporation had a computer type called the PDP-11 that supported a number of different operating systems. Each OS had its own idle loop light display. One could walk into a computer lab, typically at night when no one was using the computers, and tell which OS was running on which computer just by watching the lights.

Some mainframe computers had a lot of lights. A company called Burroughs had one large computer that would display the company logo in the lights when it was idle. Now, you never really wanted to see that display if you owned that computer. It was frightfully expensive to buy and operate, so you really wanted it to be doing real work 24/7. One potential buyer wanted their own company logo to display when the computer was idle. Vanity perhaps? Anyway, silly as it was, as I recall the program change was made and the sale went through.

Today, those sorts of lights are an unwanted, and generally unneeded, expense. I do sometimes miss those simpler days though.

Friday, January 30, 2026

CS Teacher Improvement Through Observation

I remember the first time I was observed by a principal. A brilliant man with two master’s degrees and an ABD PhD. He told me that he didn’t understand much of what I was teaching, but the students seemed to be getting it and the class ran smoothly. Not much in there to help me improve.

I believe that teaching CS is different from teaching most subjects. But each subject probably has its own nuances. That’s why I think that teachers need specific training in teaching their particular subject. I know that there are MS degree programs in teaching reading and, I think, math. Probably more than those as well.

There is limited training in how to teach CS though. There are some degree and certificate programs in teaching CS. As states increasingly require certification to teach computer science there will be more I am sure. Most CS teachers have to figure it out on their own though.

I think we have a lot to learn about how to teach CS well. There are a few people doing research in CS education. A lot of it gets disseminated at SIGCSE which can be hard for K-12 CS teachers to attend. That is both because of cost and because it happens during the school year. A lot of teachers have very limited options for missing school days. If nothing else it is a lot of work to create good sub plans!

Many teachers are resistant to sessions that are research based. That is often because they have had too many professional development sessions that, year after year, replace the previous research-based method without giving any one method a fair chance – or, worse, after it has already failed.

It would be nice if teachers had more opportunities to observe experienced CS teachers teach. (Both Mark Guzdial and Mike Zamansky have blogged about that recently – blog post links below.) BTW, if you ever get a chance to hear Mark Guzdial present, I recommend that you do. Especially if the topic is how to teach.

In an ideal world, CS teachers would get to observe teachers in the building where they teach. For a variety of reasons, not the least of which is that many K-12 CS teachers are the only CS teacher in the building, that is often not possible.

CS conferences are a mixed bag. Yes, there are some great presenters, and many of them do try to model good teaching practice. There are not a lot of talks on how to teach, though. I gave one at CSTA Online six years ago. (How is it that long ago?) It was well received, but we could use a lot more sessions that talk about and model how to teach CS.

I think we could use more of the talk that happens on the conference “hallway track,” that informal, unscheduled time when teachers find themselves sharing ideas with like-minded people.

At the heart of the issue is that teachers have to be about constant improvement. There is a difference between five years of experience and one year of experience five times.

Anyway, please read the posts linked below. Smarter people than me.