I was talking with a friend in the semiconductor business the other night. He told me that processors have hit their speed limit. Over the last 20 years, as silicon has scaled down, it has become faster, and we have benefited from faster and faster processors. But now, as it continues to scale down, the silicon becomes leakier, and higher frequencies draw more power to the point of self-defeat.
Of course, Moore's Law continues and silicon continues to scale down, so we will continue to get more transistors on a chip; it is just that now the chips will get denser without getting faster. I think that we all instinctively know that processors have hit a speed bump. The manufacturers no longer crow about how fast their chips go; instead they talk about hyperthreading and dual core. So what my friend was telling me is that this is not just a speed bump: it is the speed limit. His view is that architectural improvements and throwing more transistors into the pot could give us another factor of two in performance, but that is it.
There are no big architectural breakthroughs on the horizon. The Von Neumann model has been around for almost 60 years. For the last 30 years it has been criticized for serializing program execution, but nothing better has ever been made to work in a convincing way. Moreover, the Von Neumann model of sequential execution is embedded in the way we think about programming, and in a huge investment in programming languages and in all existing programs.
The alternative is to write parallel programs to run on future generations of multi-core processors. The standard tool for writing parallel programs is threads. I have recently been writing threaded code, and I can tell you that it is awful. Absolutely awful. I am going to write more pieces on the problems, but for now I can assure you that the problem of programming with threads is worse than the problem of managing memory allocation by hand, something that many newer programming languages have resolved by providing automatic garbage collection.
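To give a taste of what makes threaded code so awful, here is a minimal sketch of the most basic pitfall, the lost-update race. I have written it in Java purely for illustration (the choice of language, class name, and iteration count are mine): two threads increment a shared counter with no synchronization, and increments silently disappear.

```java
// Lost-update race: "counter++" is really three steps (load, add,
// store), so increments from the two threads can interleave and
// overwrite each other.
public class RaceDemo {
    static int counter = 0;

    public static void main(String[] args) throws InterruptedException {
        Runnable bump = () -> {
            for (int i = 0; i < 1000000; i++) {
                counter++; // not atomic, and not guaranteed visible to the other thread
            }
        };
        Thread a = new Thread(bump);
        Thread b = new Thread(bump);
        a.start();
        b.start();
        a.join();
        b.join();
        // Almost never prints 2000000, and prints a different
        // wrong answer on each run.
        System.out.println("counter = " + counter);
    }
}
```

The fix (a lock, or an atomic integer) is easy here, but only because the bug is visible in fifteen lines. In a real program the two increments live in different files, the failure shows up once a week, and no debugger reproduces it on demand.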