Ironically, one of the problems teachers can have with teaching about optimizing programs is that computers are a) so fast now and b) getting faster all the time. Students often do not see the need to create more efficient algorithms because they assume that what they have is fast enough, and that if it isn't, the next computer they buy will “fix” the problem by being faster. And truth be told, with most of the toy programs we are forced to use in a classroom situation (there is not enough time for really large, complicated projects), things do work out that way. But real life is more complicated than that. So we really owe it to students to discuss code optimization (and refactoring, which is closely related).
I’ve had a couple of conversations with students over the years that basically took the form of “yeah, it is slow, but it works and I’ll never need to run it again.” There is some logic to that, of course. I once had a program that I ran maybe once a week. I had thrown it together in a hurry to meet an immediate need. Once I realized I would need it more often, I also realized that it was very inefficient. I saw several ways that it could be made a lot faster. But I did some math. Each run took about a minute longer than it needed to (it could probably have run in seconds), and it would take me at least an hour to rewrite it. Was I going to run it 60 more times? Probably not, so where was the payback for my time? Shortly thereafter I got a new computer fast enough to cut the run time in half, which pushed the break-even point out to 120 runs; fixing the program would now cost me more time than it would ever save. That sort of math takes place more often than many would think, by the way. But sometimes it comes out very differently.
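That back-of-the-envelope payback calculation is easy to make concrete for students. Here is a minimal sketch using the numbers from my story (the variable names and program structure are my own invention for illustration):

```java
// Break-even analysis: is an optimization worth the time it takes to write?
public class PaybackCalc {
    public static void main(String[] args) {
        double rewriteCostMinutes = 60.0;   // at least an hour to rewrite the program
        double savingsPerRunMinutes = 1.0;  // each run took about a minute too long

        // Number of runs before the rewrite pays for itself
        System.out.printf("Break-even: %.0f runs%n",
                rewriteCostMinutes / savingsPerRunMinutes);

        // A faster computer halved the run time, which halved the savings too
        double newSavingsPerRun = savingsPerRunMinutes / 2.0;
        System.out.printf("After the new computer: %.0f runs%n",
                rewriteCostMinutes / newSavingsPerRun);
    }
}
```

Running this prints 60 runs before the new computer and 120 after it, which is exactly why the rewrite never happened.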
Sometimes the issue is applications that really need to be fast. Other times it is hardware with limitations that have to be taken into account because changing the hardware is not an option. Cy Khormaee recently talked to Paul Oliver of Legendary Studios to come up with a list of optimizations that should be taken into account when creating games for the Zune device. The Zune was designed as a music player, not a game device. Since XNA Game Studio 3.0 (now available in preview) lets programmers create games for the Zune, this creates an interesting learning opportunity. Specifically, the hardware limitations have to be taken into account if one wants to create a game that performs well enough for people to really enjoy it. This is an opportunity for a real “teachable moment.” The list Paul and Cy have makes for a good read and the start of some interesting discussions.
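I won’t reproduce their list here, but one optimization that shows up on nearly every list for memory-constrained devices is avoiding allocations during gameplay so the garbage collector has less work to do. Here is a hedged sketch of a simple object pool; the Bullet class and the pool itself are my generic example of the technique, not necessarily an item from Paul and Cy’s list:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Allocate game objects once up front and reuse them, so the garbage
// collector has nothing to collect in the middle of gameplay.
class Bullet {
    float x, y, dx, dy;
    boolean active;
}

class BulletPool {
    private final Deque<Bullet> free = new ArrayDeque<>();

    BulletPool(int capacity) {
        for (int i = 0; i < capacity; i++) {
            free.push(new Bullet());   // all allocation happens at load time
        }
    }

    // Hand out an existing object instead of calling "new" mid-game
    Bullet acquire() {
        Bullet b = free.isEmpty() ? new Bullet() : free.pop();
        b.active = true;
        return b;
    }

    // Return the object so it can be reused on a later frame
    void release(Bullet b) {
        b.active = false;
        free.push(b);
    }
}
```

On a small device where a garbage collection pass can cause a visible hitch, this kind of pooling trades a little memory for predictable frame times, which is a nice concrete trade-off to discuss with students.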
Also on performance, Dare Obasanjo, who deals with some very large, data-intensive social networking applications, recently took a look on his blog at some of Twitter’s scaling problems. He examines how basic design choices can make the difference between an application that really works and one that collapses under the weight of its input/output needs. This discussion is worth reading as students consider that many of the most important applications revolve around how data is saved and retrieved.
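One classic design choice in a Twitter-style system is whether to build a user’s timeline at read time (querying every person they follow) or at write time (pushing each new message into every follower’s pre-built inbox). The toy sketch below shows the trade-off; all the names and data structures here are mine, meant to illustrate the idea rather than Twitter’s actual code:

```java
import java.util.*;

// Two ways to build a Twitter-style timeline.
class Timelines {
    // who follows whom: user -> set of users they follow
    static Map<String, Set<String>> follows = new HashMap<>();
    // every message a user has posted
    static Map<String, List<String>> posts = new HashMap<>();
    // pre-built inbox per user (used by the fan-out-on-write approach)
    static Map<String, List<String>> inbox = new HashMap<>();

    // Fan-out on READ: writes are cheap, but every timeline request
    // touches every followed user's posts -- heavy I/O at read time.
    static List<String> timelineOnRead(String user) {
        List<String> result = new ArrayList<>();
        for (String followed : follows.getOrDefault(user, Set.of())) {
            result.addAll(posts.getOrDefault(followed, List.of()));
        }
        return result;
    }

    // Fan-out on WRITE: each post is copied to every follower's inbox,
    // so reads become a single lookup -- but one post by a user with
    // a million followers turns into a million writes.
    static void post(String author, String message) {
        posts.computeIfAbsent(author, k -> new ArrayList<>()).add(message);
        for (Map.Entry<String, Set<String>> e : follows.entrySet()) {
            if (e.getValue().contains(author)) {
                inbox.computeIfAbsent(e.getKey(), k -> new ArrayList<>()).add(message);
            }
        }
    }

    static List<String> timelineOnWrite(String user) {
        return inbox.getOrDefault(user, List.of());
    }
}
```

Neither approach is “right”; which one collapses first depends entirely on the read/write mix, which is exactly the kind of data-driven design thinking Dare’s post encourages.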