Someone posted an interesting question on Facebook.
Is using ++i (pre-increment) better than i++ (post-increment)?
My reply was to ask for a definition of “better.” The Facebook thread has a lot of answers from people who appear never to have written a compiler themselves and who are making a lot of assumptions. But the point of this post is not to discuss which option is better for some nebulous definition of “better.” Rather, it is to think about what we actually mean when we ask “better?”
There are a lot of what I call religious arguments in computing. Tabs or spaces, curly braces aligned in columns or the opening brace at the end of a line, what is the best programming language, and on and on. Most of them stem from differing definitions of “better.”
Sometimes things matter very much at one point in time but not at all later, and yet the bias from the early days remains. There was a time when memory was so tight that a single-character variable name was much preferred to a longer, more descriptive name because of the space used. Most of my readers probably don’t remember those days but I do. Fortunately I recognized when the change happened. Not everyone did so quickly. This is possibly at the heart of some of the tabs vs spaces argument for at least some old timers.
Other issues have become moot, or mostly moot, because compilers and interpreters are so much smarter these days. We have long been at the point where compilers generate better low-level code from high-level code than even the best assembly language programmers. I seriously doubt there is any difference in the code generated for ++i or i++ as the increment expression in a for loop these days. Compilers are too smart for that. Compiler writers think long and hard about exactly these sorts of things.
How two-dimensional arrays are declared and traversed used to make a bigger difference than it does today. I can remember thinking long and hard about how to declare and iterate through two-dimensional arrays. You pretty much had to know how the compiler would lay out memory and what the cache would look like to get optimal performance. I don’t think many people think about that today except in the most performance-critical applications, and those are applications we don’t give to beginners anyway. Compilers do a lot of optimization on this sort of thing so we don’t have to think much about it. Artificial intelligence and machine learning (see also AI-assisted Programming) will probably make compilers even better than they are today.
Today I think “better” in terms of programming should mean “easier for programmers.” Easier to write, easier to understand, easier to modify, and to allow programmers to think about the end result and not the assembly/machine language generated. Let the software do what software is good at and people do what people are good at.
5 comments:
There are so many things in programming I do not do anymore. I started programming on TRS-80s and Apple IIes. Keeping track of RAM and disk space (back when it was a 5.25-inch floppy) was a really big deal. I remember working on a program that would graph orbits given various inputs. It got so demanding that it invaded the OS stack and killed the computer. I tried writing sections of the program in assembler to try to save space and speed up the computations. Now I never worry about RAM or disk space. If my hard drive is getting full I just move stuff to the cloud. Writing "efficient" code never crosses my mind. Smartphones have brought that back a bit but I do not bother with the small apps I build. We discuss it briefly in my programming classes but I always discuss it in the context of why we refactor code even though it works fine. For high school level coding there is almost nothing that will really make an observable speed difference except maybe with turtle graphics.
"Better" code to me means it is easy to read and works logically. The programming assignment my students are finishing now only has about a dozen ways of coding it up. They all work but some are easier on the eyes. I have a tendency to put everything in small functions, the kids not so much but their code is simple and easy to follow therefore it is good to go. Programming should not be a tedious, painful experience full of arbitrary rules that make no difference in the end result. At the high school level it should be fun.
In the context of a for loop, I can confirm that my C compiler (clang) produces identical code for i++ and ++i. But this is definitely situational. j = ++i has a different meaning than j = i++. Better here to me means not writing code where the difference between ++i and i++ matters.
Chris, I'm surprised it generates the same code given that they're defined differently (pre-increment vs post-increment).
I guess the compiler can figure out when it's in the context of an expression vs standalone, but it strikes me as weird that a compiler would optimize one instruction into another.
Both ++i and i++ generate an ADD instruction, whether we're in the loop update field or not. Where they differ is in what register is used in the surrounding context. With ++i, the context can just use the register holding i. With i++, a second register may be needed to hold the unincremented value (though not if that value goes unused). The difference is in the registers, not the instructions themselves.
I think it's easier to accept if you think about what happens to loops. While, for, and any other loops all compile down to a reasonably similar structure.
To confirm, I just compiled down these two snippets of code:
int pre(int a) {
int b = ++a;
return b;
}
int post(int a) {
int b = a++;
return b;
}
The pre-increment version generates this assembly:
mov -4(%rbp), %edi # copy a into register edi
addl $1, %edi # increment edi
The post-increment version generates this assembly:
mov -4(%rbp), %edi # copy a into register edi
movl %edi, %eax # copy edi into eax
addl $1, %eax # increment eax
The post-increment indeed requires one more register than the pre-increment to retain the unincremented value.