beefyguy said:

quote:
------------------------------------------------------------
Then instead of seeing these 800 and 900 MHz systems cooled by refrigerators, we would see 8 100-200 MHz chips, all working in perfect sync; the bounds would be limitless! Imagine 64, or 128 processors, each at 100-200 MHz. They could be MUCH more stable, and we could go back to the true idea of RISC! With this type of setup, we could work on creating better designs and architectures for chips, thus increasing our ability to create a more powerful single-chip platform for portable devices.
------------------------------------------------------------

I'm not sure if you're saying this because you actually believe it, or because SMP is Apple's fad du jour. Anyway, the problem with your scenario is that while it's easy to write software that runs very well on a single fast CPU, it's much harder to write software that runs well on many processors. Most problems just aren't solved more efficiently that way, and my 800 MHz uniprocessor system would beat your 64-way 100 MHz multiprocessor system at most common tasks. (There's a back-of-the-envelope Amdahl's law sketch at the end of this post if you want numbers.)

Anyway, these ideas have been around for decades, but no one has been able to come up with enough effective uses for these types of systems to make them popular enough to be mass-produced. (Do a web search for Danny Hillis' Connection Machine, built by Thinking Machines, for example.) The bottom line is that with massive parallelism, instead of "creating better designs, and architectures for chips", we'd be scratching our heads writing software that keeps all 64 of those processors busy at the same time, as well as trying to design a memory bus that would be reasonably efficient under those conditions.

There are worse problems, too, like cache coherency. With a von Neumann architecture, your 64 processors would spend all their time waiting for one another to write to memory and snooping each other's caches... (See the false-sharing sketch at the bottom for one concrete way this bites.)

And this has nothing to do with RISC, BTW; if you want to talk seriously about these ideas, please leave the buzzwords at home.

quote:
------------------------------------------------------------
There was a big discussion at AI about multiple processors able to yield 2.2 times performance
------------------------------------------------------------

LOL! Sounds like the usual AI discussion. Does this mean that the MacOS (which I presume is the 'crap' you're referring to) imposes 17% overhead on every operation? That's the only way the arithmetic works: call the useful work one CPU gets done x out of 100, let a second CPU contribute its full 100 on top, and demand a 2.2x total:

(x + 100) / x = 2.2
x + 100 = 2.2x
1.2x = 100
x = 100 / 1.2 ≈ 83.33

So the first CPU would have to be losing about 17% of its time to overhead that the second CPU somehow escapes. Even given how lame the MacOS is, that's hard to believe. On a CPU-intensive task your app will peg the CPU at 98 or 99%, which puts the best-case multiplier from adding a second processor at only immeasurably more than 2.0.

And of course, even that optimum will never be reached, because of mutex locks, bus contention, cache snooping, disk contention and programming inefficiency. It would take a perfect program to achieve it, and that program doesn't exist.
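Since the "most common tasks" claim deserves a number, here's the back-of-the-envelope Amdahl's law sketch I promised, in C. (Just a sketch; the parallel fractions are assumptions I picked for illustration, not measurements of any real workload.)

code:
------------------------------------------------------------
#include <stdio.h>

/* Amdahl's law: speedup(n) = 1 / (s + p/n), where p is the fraction of
 * the program that parallelizes and s = 1 - p is the serial remainder.
 * The fractions below are illustrative assumptions, not measurements. */
int main(void)
{
    double fractions[] = { 0.50, 0.80, 0.90, 0.99 };
    int n = 64;                         /* 64 CPUs at 100 MHz each */
    int i;

    for (i = 0; i < 4; i++) {
        double p = fractions[i];
        double speedup = 1.0 / ((1.0 - p) + p / n);
        printf("parallel fraction %.2f: 64-way speedup %6.2f -> ~%4.0f effective MHz\n",
               p, speedup, speedup * 100.0);
    }
    printf("one 800 MHz CPU: 800 effective MHz, no locks, no snooping\n");
    return 0;
}
------------------------------------------------------------

Unless roughly 90% of your program parallelizes perfectly (and under the MacOS it won't), the single 800 MHz box wins outright, and that's before bus contention starts eating into the p/n term.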
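And for the cache-snooping point, here's a tiny sketch of "false sharing", one concrete way those 64 processors end up stalling on each other's caches. (POSIX threads assumed; the counter layout and thread count are made up for illustration.)

code:
------------------------------------------------------------
#include <pthread.h>
#include <stdio.h>

#define NTHREADS 4
#define ITERS    10000000L

/* The four counters are adjacent, so they land on the same cache line.
 * Every increment on one CPU invalidates that line in the other CPUs'
 * caches, and the coherency protocol bounces it across the bus, even
 * though no thread ever touches another thread's counter. */
static long counters[NTHREADS];

static void *worker(void *arg)
{
    long idx = (long)arg;
    long i;

    for (i = 0; i < ITERS; i++)
        counters[idx]++;        /* every write hits the shared line */
    return NULL;
}

int main(void)
{
    pthread_t tid[NTHREADS];
    long t;

    for (t = 0; t < NTHREADS; t++)
        pthread_create(&tid[t], NULL, worker, (void *)t);
    for (t = 0; t < NTHREADS; t++)
        pthread_join(tid[t], NULL);
    for (t = 0; t < NTHREADS; t++)
        printf("counter[%ld] = %ld\n", t, counters[t]);
    return 0;
}
------------------------------------------------------------

Pad each counter out to its own cache line (say, a struct padded to 64 bytes) and the identical loops run several times faster. Now imagine keeping 64 processors out of each other's way like that, all day, in every shared data structure.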