Someone posted a question asking which sort of programmer one was, based on a list of ways to add one to a variable:
X++
++X
X+=1
X = X + 1
Actually, the initial question didn’t include ++X, but it soon showed up in the replies. With all these ways to do what appears to be the same thing, it is no wonder students get confused.
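As standalone statements, all four forms add one to the variable. The difference between X++ and ++X only shows up when the expression's value is used for something else. A small sketch in Java (the class name and values are mine, just for illustration):

```java
public class IncrementDemo {
    public static void main(String[] args) {
        int x = 5;
        int a = x++;  // post-increment: a gets the OLD value (5), then x becomes 6
        int b = ++x;  // pre-increment: x becomes 7 first, then b gets 7
        System.out.println(a + " " + b + " " + x);  // prints "5 7 7"
    }
}
```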
Daniel Moix replied to my Facebook post with X = X++. This doesn't work (at least not in C# or Java). While one might expect the value in X to be increased by one after the statement executes, it is in fact unchanged. X = ++X does work as you would expect, though. It’s not surprising that students, clever people that they are, come up with variations that should work in their eyes but do not.
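Here is the trap in runnable Java form (again, my own demo class): X++ evaluates to the old value of X, the increment happens as a side effect, and then the assignment overwrites X with that saved old value.

```java
public class PostIncrementTrap {
    public static void main(String[] args) {
        int x = 5;
        x = x++;  // x++ yields 5, x briefly becomes 6,
                  // then the assignment puts the saved 5 back into x
        System.out.println(x);  // prints 5 -- unchanged

        int y = 5;
        y = ++y;  // ++y increments first and yields the new value, 6
        System.out.println(y);  // prints 6
    }
}
```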
Why do we have so many ways to add one? I can’t speak for the language designers (I assume most of this started with C or some earlier language), but lots of us like shortcuts. And it seems like every programmer has his or her own idea of how things should be done. Few programming languages that have been around for any length of time have only one way to do anything.
All of this is great for experienced programmers but can be a nightmare for novices. I used to debate in my head whether I should even show all of the ways above. X=X+1 and X++ cover most cases for beginners. Why confuse them?
I usually did briefly talk about X+=1 because a) students are likely to see it in other code and b) it is useful for adding (and for other operations) when the change is by something other than one.
This all adds some cognitive load. I think that teaching all the various ways at one time can be a bit much. It may be better to add things as they are needed. For example, maybe waiting until teaching loops to introduce X++ and X+=1. That context and specific use may be helpful. I didn’t do that before but I wonder if I should have. Opinions are welcome.
4 comments:
It's funny, I'll use x++ but ++x is probably easier to understand and is less error prone for a beginner.
I'll usually stay away from all but x=x+1 for as long as I can and when they come up, which they invariably do, I'll introduce them.
Since I teach several languages and have the memory of a squirrel I always start with x=x+1. I know the syntax is correct for every language I know.
I ended up here as a result of the discussion on Facebook. There, nobody chose x=x+1. Before I added my thought there, I read this post and I think Garth's comment above nailed it. Particularly in K-12, it's nice to have a clear way for students to be successful. The others are functional but why mess with something that works? It's a topic to introduce when students are reading someone else's code perhaps.
X += 1 and x = x + 1 are basically the same in terms of implementation. I use x = x + 1 (apologizing to my algebra-loving students) until x is some horrendously long variable using array indices and/or dot notation; that's when I show the shorter way to write it.
x++, on the other hand, is a different beast. It always means incrementing by one, and in my computer systems course I have students write low-level code for each. That's when they figure out that using x++ can save several machine cycles. I see no need to introduce x++ until they have the background to understand it.