Sunday, July 29, 2007

Will our current programming model survive multi-core hardware?

For years, CPU innovation focused on increasing the speed at which a single sequence of instructions gets executed, mainly by raising the clock frequency. In recent years this trend has shifted from faster sequential execution to parallel execution (multiple CPUs, hyper-threading, multiple cores). If the trend continues, the number of processor cores will no longer be counted as dual or quad, but in K, M or G.
Today's popular programming languages (both object-oriented and procedural) encourage, or force, us to write software as a sequence of statements that get executed in a specific order. This paradigm, the execution thread, maps very nicely onto the traditional CPU model with its single instruction pointer. Executing tasks in parallel is made possible by calling a system API that creates a new execution thread. This is more a platform feature than a language feature, and it requires still more system APIs to do the required locking and synchronization.
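As a rough sketch of what that looks like in practice (plain Java is used here purely as an illustration, not anything prescribed by the post), both creating a thread and protecting shared state are explicit library calls layered on top of an otherwise sequential language:

// Minimal illustration: parallelism is requested through an API call
// (new Thread / start), and correctness depends on the programmer
// adding explicit synchronization around shared mutable state.
public class CounterDemo {
    private static int counter = 0;                   // shared mutable state
    private static final Object lock = new Object();  // manual synchronization object

    public static void main(String[] args) throws InterruptedException {
        Runnable work = new Runnable() {
            public void run() {
                for (int i = 0; i < 1000; i++) {
                    synchronized (lock) {  // without this lock the result is unpredictable
                        counter++;
                    }
                }
            }
        };
        Thread t1 = new Thread(work);  // a platform thread, created via the API
        Thread t2 = new Thread(work);
        t1.start();
        t2.start();
        t1.join();                     // yet more API calls just to coordinate completion
        t2.join();
        System.out.println("counter = " + counter);   // prints 2000, thanks to the lock
    }
}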
Multithreading is generally considered an advanced and error-prone technique. That is why it is typically used only where parallel execution is an explicit requirement: in server processes that handle multiple client requests, or in GUI applications that need background processing while the UI keeps handling user events. In situations where tasks could be executed in parallel but there is no direct need to do so, we usually stick to our traditional sequential programming model. This is a great waste of the hardware's parallel processing ability.
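To make that concrete, here is a hedged sketch (again in Java, chosen only for illustration) of the kind of task meant above: summing an array is trivial to write sequentially, while the version that would actually use the extra cores demands a thread pool, chunking and coordination.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// The elements of this array could be processed independently, yet the
// habitual version is a plain sequential loop. Spreading the work over a
// pool sized to the available cores is possible, but takes deliberate effort.
public class ParallelSum {
    public static void main(String[] args) throws InterruptedException {
        final double[] data = new double[1000000];   // left as zeros here for brevity

        // Habitual sequential version:
        double sum = 0;
        for (double d : data) sum += d;

        // Explicitly parallel version, chunking the array across a thread pool:
        int cores = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(cores);
        final double[] partial = new double[cores];
        int chunk = data.length / cores;
        for (int c = 0; c < cores; c++) {
            final int id = c;
            final int from = c * chunk;
            final int to = (c == cores - 1) ? data.length : from + chunk;
            pool.execute(new Runnable() {
                public void run() {
                    double s = 0;
                    for (int i = from; i < to; i++) s += data[i];
                    partial[id] = s;   // each task writes its own slot, so no lock is needed
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        double parallelSum = 0;
        for (double p : partial) parallelSum += p;
        System.out.println(sum + " == " + parallelSum);
    }
}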
My personal experience with non-sequential languages is pretty much limited to SQL and XSLT. Maybe it is time to broaden the horizon and take a look at some of the modern functional languages like F# and Fortress. Who is with me?
