The web (well, a small part of it) has been abuzz with a new company claiming to provide ‘a near-linear to above-linear increase in performance’.
The solution consists of a chip sitting in front of the graphics cards, which ‘decomposes a complex scene into well-balanced parallel tasks, and then recompose each task into the correct final image with no overhead’ — transparently, of course, to the latest versions of OpenGL and DirectX.
Color me skeptical. First of all, decomposing an OpenGL command stream correctly is not easy, and doing it in a way that provides a linear speedup is even harder. Keeping it compatible and up-to-date with the latest extensions will take quite some resources. Then there is the little problem that the single application thread has to be able to feed the GL pipeline fast enough to keep multiple GPUs busy, and the compositing step has to take things like transparency and antialiasing into account.
I guess we’ll have to wait for the vapor to clear and for real hardware to arrive. I will keep an eye on it, though; this could be useful for Equalizer on visualization clusters.