One of the hardest things about teaching programming is getting students to properly test their code. Testing is something I explain regularly and go over from the very beginning of a course. Every course. And yet, time and again, I run student programs myself and find they are not giving the correct answers.
Anyone who has ever taught students programming has looked at the results of a student program, asked them “is that the right answer?”, and had them reply “I don’t know. I think so.” Students seem to assume that if the program compiles and runs and gives an answer, then it must be the right answer. It’s as if they expect magic from the computer sometimes.
One of the very first projects many students see is temperature conversion from Fahrenheit to Celsius. At least it is common in the US, where we still use Fahrenheit while our neighbors use Celsius. Students will plug in random numbers, get an answer, and declare “it works.” Now I test that sort of program with temperatures I know the answer to: the boiling and freezing points of water and, of course, –40, which is the same on both scales. This seldom seems to occur to students, though. I wish I knew why, but at least it is the basis for a good conversation about test data.
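In Python, that kind of check is only a few lines (a minimal sketch; the function name is my own, but the test values are the ones above):

    def fahrenheit_to_celsius(f):
        # Standard conversion formula: subtract 32, then scale by 5/9.
        return (f - 32) * 5 / 9

    # Check against temperatures with known answers.
    assert fahrenheit_to_celsius(212) == 100   # water boils
    assert fahrenheit_to_celsius(32) == 0      # water freezes
    assert fahrenheit_to_celsius(-40) == -40   # where the scales meet
    print("All conversion checks passed.")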
Should I provide test data with expected results for projects? That’s pretty easy for some projects and not as easy for others. I also want students to figure out how to create good test data, though, and handing out too much of it gets in the way of that. There is also the problem of students hardcoding results into a project. That hasn’t happened in my classes in a long time, but I know it does happen.
We’ll be having yet another conversation about testing in my class this week. Perhaps I just need to keep hammering it in. Anyone have any advice for me? I’m open to suggestions.
Caveat: I'm a complete novice, in my first year of teaching, but...
What you describe reminds me of the difference found by the Leeds Working Group, reported by Raymond Lister in his 2008 ACE keynote. Essentially it boils down to the students apparently not having the same mental model for this as experts do. Showing worked examples would seem to be the way to start: give them sample programs along with test data that proves each one works or doesn't. Then perhaps you could set them some challenges: give them code (or maybe their peers' code) and ask them to prove that it isn't correct. If they already know about testing from the worked examples, they should make the connection that the way to show a piece of code is wrong is to produce test data that exposes the fault. The programs they test could get progressively harder: start with code that always produces the wrong output, then code that mostly produces the wrong output, then code that usually produces the right output but sometimes does not.
After this process they would hopefully get the idea that code that compiles can still be wrong, and that the way to satisfy yourself that your code is not wrong is to test it with test data.
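For instance, a challenge program might look like this (a hypothetical sketch in Python; the bug, a reversed order of operations in the temperature formula, is one I made up for illustration):

    # A plausible student bug: the subtraction happens after the multiplication.
    def fahrenheit_to_celsius(f):
        return f * 5 / 9 - 32

    # One well-chosen input is enough to prove the code wrong:
    # 32 F should be exactly 0 C.
    print(fahrenheit_to_celsius(32))   # prints -14.22..., so the code is incorrect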
When you assign the problem, ask up front: what would be good test values for this program? Why have test values at all? Can they write a routine in the code that will automatically execute those test values?
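Something like this, perhaps (a minimal Python sketch of what such a routine might look like; the names and the table of test values are my own):

    def fahrenheit_to_celsius(f):
        return (f - 32) * 5 / 9

    def run_tests():
        # Each pair is (input, expected output).
        cases = [(212, 100), (32, 0), (-40, -40)]
        for f, expected in cases:
            actual = fahrenheit_to_celsius(f)
            status = "PASS" if actual == expected else "FAIL"
            print(f"{status}: {f} F -> {actual} C (expected {expected})")

    run_tests()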
Anthony’s explanation that students don’t have the same mental model as experts seems correct (of course, I imagine that’s the case not just with testing but with programming in general). Even my advanced students, encountering C and malloc and free for the first time, say things like “My code basically works except for one bug, where it segfaults.” The idea that you could know that your code “basically works” even though you can’t possibly have tested it on any real inputs (because it crashes) is ridiculous to an experienced programmer. But not to the kids, who think that since their code makes sense to them, and compiles, the only obstacle now is getting rid of that segfault.
Unfortunately, I don’t think providing sample tests necessarily helps: rather, running the teacher's tests just becomes one more automatic indicator of success or failure. (It compiles, it runs, the test script prints “ALL OK”, it works!) It saves the teacher time, since you get to grade a version the student has already made sure passes the tests. But students may come to view testing as not their responsibility, or even beyond their capabilities; it becomes something the teacher does when a problem is assigned. Probably the only way to really get them to understand testing is to devote significant class time and homework assignments to activities designed to give students experience finding bugs, writing tests, imagining failure cases, and so on.
Provided that you've already exposed them to some of your own testing code, I think asking for validation tests as part of the assignment seems reasonable, unless the language requires so much boilerplate for such tests that writing them overwhelms the main assignment. It is important that kids develop the sense that they, not you, are responsible for the correctness of the things they write, as Alex suggested.
Time for a discussion along the lines of "How would you expect a car manufacturer to test the software in their products?"?!
This is a "Computer Science" ability as opposed to a "Programming" ability. I do a horrible job at teaching this, too, even though I know what to do.
Test-Driven Development (TDD) is what we should do to help with this. A website I like to use is Cyber-Dojo.
TDD is where you write a test first and then write code to make it pass. The idea is to start with the first, simplest test and then the simplest code to make it pass. This helps with breaking the larger programming problem into smaller pieces. Most of us already do this (in our own heads) as we code.
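A minimal sketch of that rhythm in Python, reusing the temperature conversion example from the post (in practice you would run the test, watch it fail, and only then write the code):

    # Step 1: write the first, simplest test before any conversion code exists.
    def test_freezing_point():
        assert fahrenheit_to_celsius(32) == 0

    # Step 2: write just enough code to make that test pass. (Strict TDD might
    # start with "return 0" and let later tests force the real formula.)
    def fahrenheit_to_celsius(f):
        return (f - 32) * 5 / 9

    # Step 3: run the test; once it passes, add the next test and repeat.
    test_freezing_point()
    print("test_freezing_point passed")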