I have arrived


kraquen

Ars Scholae Palatinae
748
dajjal-resteeves:
Many more people use NT than use SMP, and no one who uses SMP runs Win9x. That's not the issue, though. The issue is mr. beef calling SMP "new tech". This isn't new tech; it has been available on the Windows platform for quite some time. That it's not prevalent says something about the effectiveness of two x MHz processors vs. one 2x MHz processor.

kraquen
 

beefyguy

Wise, Aged Ars Veteran
184
I'm assuming that everyone is talking about the multiprocessor-Mac-at-MWSF article.

There was a big discussion at AI about multiple processors being able to yield 2.2 times the performance. I forget exactly how it all went, but in the end someone was able to show how, with one processor handling the 'crap' and the other concentrating on processing, they could see a 2.2 times performance gain over a single-processor system of the same MHz.

It is true that it is hard to write good multithreaded apps. But that was almost the basis of the article: the possibility exists for multiprocessing to truly take hold if an easier way to write these types of apps were to appear. Imagine if you could recompile a program to use multiple processors without changing a single line of code! Then instead of seeing these 800 and 900 MHz systems cooled by refrigerators, we would see 8 chips at 100-200 MHz, all working in perfect sync; the bounds would be limitless! Imagine 64 or 128 processors, each at 100-200 MHz. They could be MUCH more stable, and we could go back to the true idea of RISC! With this type of setup we could work on creating better designs and architectures for chips, increasing our ability to create a more powerful single-chip platform for portable devices.

Think ahead.
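(A minimal sketch, not from the article, of why the recompile-and-go scenario is the hard part: even a trivially parallel job like summing an array has to be restructured by hand for multiple CPUs. All names below are illustrative; this assumes POSIX threads, one common way to write SMP code at the time.)

/* Sketch: the serial sum is one loop; the parallel version must
 * split the range, carry per-thread state, and join explicitly.
 * None of that falls out of a recompile. Build with -pthread. */
#include <pthread.h>
#include <stdio.h>

#define N 1000000
#define NTHREADS 4

static double data[N];

struct chunk { int lo, hi; double sum; };  /* per-thread work unit */

static void *partial_sum(void *arg)
{
    struct chunk *c = arg;
    c->sum = 0.0;
    for (int i = c->lo; i < c->hi; i++)
        c->sum += data[i];
    return NULL;
}

int main(void)
{
    for (int i = 0; i < N; i++)
        data[i] = 1.0;

    /* Serial: what the compiler already understands. */
    double serial = 0.0;
    for (int i = 0; i < N; i++)
        serial += data[i];

    /* Parallel: the same sum, rewritten by hand for NTHREADS CPUs. */
    pthread_t tid[NTHREADS];
    struct chunk c[NTHREADS];
    for (int t = 0; t < NTHREADS; t++) {
        c[t].lo = t * (N / NTHREADS);
        c[t].hi = (t + 1) * (N / NTHREADS);
        pthread_create(&tid[t], NULL, partial_sum, &c[t]);
    }
    double parallel = 0.0;
    for (int t = 0; t < NTHREADS; t++) {
        pthread_join(tid[t], NULL);
        parallel += c[t].sum;   /* explicit reduction step */
    }

    printf("serial=%f parallel=%f\n", serial, parallel);
    return 0;
}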
 

Ophidian

Ars Scholae Palatinae
826
Not to mention that there has to be some communication between the different CPUs, and that it won't be across a full-speed bus.

1.7x-1.9x is all I would call even theoretically possible for a dual-CPU config.

edit: CPUs with a dual core might hit 2x, but only with hand-tuned, optimized code running workloads that lend themselves readily to MP.
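(For reference, that 1.7x-1.9x range is about what Amdahl's law predicts. The 90% parallel fraction below is an illustrative assumption, not a figure from this thread: if a fraction p of the work can be split across n CPUs and the rest stays serial, then

\[
S(n) = \frac{1}{(1-p) + p/n}, \qquad
S(2)\Big|_{p=0.9} = \frac{1}{0.1 + 0.45} \approx 1.82
\]

so even a 90%-parallel workload lands inside Ophidian's range on two CPUs.)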
 

IMarshal

Ars Tribunus Militum
1,956
beefyguy said:

"Then instead of seeing these 800 and 900 MHz systems cooled by refrigerators, we would see 8 chips at 100-200 MHz, all working in perfect sync; the bounds would be limitless! Imagine 64 or 128 processors, each at 100-200 MHz. They could be MUCH more stable, and we could go back to the true idea of RISC! With this type of setup we could work on creating better designs and architectures for chips, increasing our ability to create a more powerful single-chip platform for portable devices."

I'm not sure if you're saying this because you actually believe it, or because SMP is Apple's latest fad du jour. Anyway, the problem with your scenario is that while it's easy to write software that runs very well on a single fast CPU, it's a lot harder to write software that runs well on many processors. Most problems just aren't solved more efficiently that way, and my 800 MHz uniprocessor system would beat your 64-way 100 MHz multiprocessor system at most common tasks.

Anyway, these ideas have been around for decades, but no one has been able to come up with enough effective uses for these types of systems to make them popular enough to be mass produced. (Do a web search for Danny Hillis' 'Thinking Machine', for example.) The bottom line is that with massive parallelism, instead of "creating better designs and architectures for chips", we'd be scratching our heads writing software that keeps all 64 of those processors busy at the same time, as well as trying to design a memory bus that would be reasonably efficient under those conditions.

There are worse problems, too, like cache coherency. With a von Neumann architecture, your 64 processors would spend all their time waiting for one another to write to memory and snooping each other's caches...

And this has nothing to do with RISC, BTW; if you want to talk seriously about these ideas, please leave the buzzwords at home.

beefyguy said:

"There was a big discussion at AI about multiple processors being able to yield 2.2 times the performance"

LOL! Sounds like the usual AI discussion. Does this mean that the MacOS (which I presume is the 'crap' you're referring to) imposes 17% overhead on every operation?

(x + 100) / x = 2.2
x + 100 = 2.2x
1.2x = 100
x = 100/1.2 = 83.33

Even given how lame the MacOS is, that is hard to believe. Performing a CPU-intensive task, your app will peg the CPU at 98 or 99%, so under optimal conditions the performance multiplier from a second CPU is only imperceptibly above 2.0.

And of course, optimal conditions will never be reached, because of mutex locks, bus contention, cache snooping, disk contention and programming inefficiency. It would take a perfect program to achieve this, and that program doesn't exist.
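(Worked through with IMarshal's own numbers: if an app already gets x = 99% of a single CPU, then even perfectly offloading the remaining overhead to a second chip bounds the gain at

\[
\frac{x + 100}{x}\Big|_{x = 99} = \frac{199}{99} \approx 2.01
\]

hence "imperceptibly above 2.0" before any of the real-world losses he lists.)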
 

Evil_Merlin

Ars Legatus Legionis
23,724
Subscriptor
Reading on as IMarshal puts the smack down on the little Apple boys...

Um, Resteves, the point is you CAN get both the SMP systems NOW for x86 and the OSes (BeOS, Windows NT, Windows 2000, Linux, and BSD) to run on said systems...

On Apple? Well, you have RUMORS of SMP being released at MacWorld, but that will be subject to the release of MacOS 9.0.1, which has just been delayed again (http://www.appleinsider.com).
 

beefyguy

Wise, Aged Ars Veteran
184
I said I wasn't sure of the mechanics of it; I just popped it up for discussion. The 'crap' was the mundane tasks of the CPU. One chip is like the 'controller chip', if I'm remembering this correctly: it does all the reading of the cache while the rest just grab and process, doing the grunt work instead of digging the trenches.

RISC is not a buzzword. In its true form it is a better architecture; however, the G3/G4 aren't true RISC anymore. MOT keeps adding shit, making it closer and closer to full-blown CISC. With a REAL RISC everything is minimal, reducing the 'crap' to process and allowing more 'real' processing, the rendering and stuff like that.
 
Just to clarify things a bit. As I remember, the "by over 2X" reference was made in regard to the efficiency of the processors in being able to share cache directly along a dedicated CPU bus. It was thought to be an exceptional case, but one that nevertheless demonstrated the efficiency of the fully MERSI-compliant G4 design.
Whether systems actually perform up to these claims will only be seen as they actually appear and are tested by real people in real-world usage.
 
IBM (who generally know their shit) claim that their biggest, baddest mainframe-type machines (RS/6000s or whatever) gain some 97% extra performance when adding the first parallel processor, with each additional processor losing around 3-4% (so the first adds 97%, the next 93%, the next 89%, and so on). And that doesn't continue indefinitely.

Anyone who claims to add 120% by adding a second processor is talking out of their puckered ringpieces.

RISC doesn't go any way towards reducing the OS overhead on a processor.

Why does SMP make a system more solid? I've not seen this demonstrated. I've seen so-called 'reasoning' for this assertion: that it somehow makes things easier for the computer by lessening the workload on each processor. Frankly, I don't buy that; none of the machines I've used for any prolonged period of time have had any problems whatsoever with high processor loads.
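(Taking the quoted IBM figures at face value, a quick sketch of where that scaling goes; the 4-point decrement follows the post's 97/93/89 sequence, and the 8-extra-CPU cutoff is an assumption for illustration:)

/* Sketch: cumulative speedup if extra processor k contributes
 * (97 - 4*(k-1))% of one CPU, per the figures quoted above. */
#include <stdio.h>

int main(void)
{
    double speedup = 1.0;          /* one CPU = 1.0x */
    for (int k = 1; k <= 8; k++) { /* add up to 8 extra CPUs */
        double gain = 0.97 - 0.04 * (k - 1);
        if (gain < 0.0)
            gain = 0.0;            /* "doesn't continue indefinitely" */
        speedup += gain;
        printf("%d CPUs: %.2fx\n", k + 1, speedup);
    }
    return 0;
}

(Under those figures, two CPUs give 1.97x and even nine CPUs land around 7.64x, nowhere near linear, and a second processor adds 97%, not 120%.)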
 