> Honest question: what are the people who need the Ultras doing with them?

High-end video, VFX, and modeling.
> Honest question: what are the people who need the Ultras doing with them?

We're using Pro Tools and the Dolby Atmos Renderer to mix albums and TV shows on one Mac, where it would have taken two or three Macs in the past to do the same job. Logistically easier. Mac Studio Ultras are amazingly cheap on the price/CPU vector.
> This is entirely from memory, but doesn't the M3 lack the AV1 encode functionality that M4 has? This seems most likely the reason for segmentation, here, to try and force studios into full-fat Pros with coprocessors/GPUs instead of Studios.

I can do pretty much any encoding that I need on an M3 at a quite zippy pace. Maybe for a production studio there would be an advantage, but as a single user, the M3 is better than anything I've used in the past.
> This is entirely from memory, but doesn't the M3 lack the AV1 encode functionality that M4 has? This seems most likely the reason for segmentation, here, to try and force studios into full-fat Pros with coprocessors/GPUs instead of Studios.

Neither the M3 nor the M4 has AV1 encode, only decode.
> It's odd to me that they would use the M3 moniker for a binned version in the iPad Air and then specify an old chipset for this Ultra. It's just a name, why not call it something else? M3.5? M4SE?

M3 and M4 have different CPU architectures and manufacturing nodes. Calling this M3 means it is using the previous ones.
> There's another possibility... (Having seen this in my own career.)
>
> You have two sets of teams working silicon, the "odd" and "even" teams... so you end up with M1/M3/M5 (eventually), and that 'odd' team works the challenges that result in 'Ultra' silicon.
>
> You have the 'even' team that does the M2/M4, etc.
>
> Both teams are highly capable, both working the same overall product, but with slightly different focuses, especially due to the timelines involved.

Both M1 and M2 had Ultra chips.
> This is entirely from memory, but doesn't the M3 lack the AV1 encode functionality that M4 has? This seems most likely the reason for segmentation, here, to try and force studios into full-fat Pros with coprocessors/GPUs instead of Studios.

The Pros can't use discrete GPUs; Apple doesn't support third-party GPUs anymore.
> Wow, it's a $1500 price bump to add 4 performance cores to the Ultra. I'll consider the base Ultra, but there's no way I'll pay $1500 for another 4 cores.

Very diminishing returns, except that the $1,500 offsets against time saved over the 2-3 years the Mac will be in use. For creative pros, saving a minute here and there over those years adds up.
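The "it adds up" argument is easy to put rough numbers on. A back-of-the-envelope sketch, where the minutes saved, working days, and billable rate are all assumed figures for illustration:

```python
# Hypothetical break-even check for the $1,500 top-spec upgrade.
# Every number below is an illustrative assumption, not Apple's data.

upgrade_cost = 1500.0          # USD, base Ultra -> full Ultra
minutes_saved_per_day = 3.0    # assumed render/export time saved daily
working_days_per_year = 230
years_in_service = 3
billable_rate = 120.0          # USD/hour, assumed creative-pro rate

hours_saved = minutes_saved_per_day * working_days_per_year * years_in_service / 60
value_of_time = hours_saved * billable_rate

print(f"hours saved over {years_in_service} years: {hours_saved:.1f}")
print(f"value of that time: ${value_of_time:,.0f} vs ${upgrade_cost:,.0f} upgrade")
```

Under those assumptions the time is worth a few thousand dollars, so the upgrade pencils out; with a minute a day saved it would not.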
> Honest question: what are the people who need the Ultras doing with them?

Software development, thanks to the higher core count; LLMs, thanks to unified memory and the higher RAM limit allowing you to use larger models than even the highest-end nVidia H200 cards (albeit with slower processing due to a slower GPU/NPU).
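The RAM-limit point can be sanity-checked with rough arithmetic. A sketch assuming ~1 byte per weight for an 8-bit quantized model plus ~10% overhead (real usage also depends on context length and KV cache); 141 GB is the H200's published memory capacity:

```python
# Rough "does this model fit in memory?" estimate: parameter count
# times bytes per weight, plus ~10% overhead. Illustrative only.

def model_fits(params_billion: float, bytes_per_weight: float, ram_gb: float) -> bool:
    needed_gb = params_billion * bytes_per_weight * 1.10  # +10% overhead
    return needed_gb <= ram_gb

# A 405B-parameter model quantized to 8-bit (~1 byte/weight):
print(model_fits(405, 1.0, 512))  # fits in 512 GB of unified memory
print(model_fits(405, 1.0, 141))  # does not fit on a single 141 GB H200
```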
Honest question: what are the people who need the Ultras doing with them?
> Honest question: what are the people who need the Ultras doing with them?

My (hopefully) intelligent guess is stuff like software dev & compilation, simulation, visualization, "high end" film & audio, scientific computations, etc., where time saved = a lot more money saved.
> Mostly if you want to run everything from a RAM drive.

...or medium-size LLMs.
> Software development thanks to the higher core count, LLMs thanks to unified memory and the higher RAM limit allowing you to use larger models than even the highest-end nVidia H200 cards (albeit slower processing due to a slower GPU/NPU).

Strix Halo (and Digits too, from what I've seen) have the memory bandwidth of an M4 Pro. They don't compete with an M4 Max or M3 Ultra. Though of course Digits at least benefits from almost the entire AI software environment targeting nVidia GPUs.
The new AMD Strix Halo (and desktop variants thereof) may give it a run for the money, though, and of course the nVidia Digits.
My 2022 $5000 M1 Ultra 128GB Mac Studio is obsoleted by the M4 Mac Mini, however, so these machines have a short relevance span, even if they are usable for more than 3 years.
> Honest question: what are the people who need the Ultras doing with them?

Can't speak for everyone, but for me I have a home-grown astronomy app. The telescope shoots at 24MP, in mono, with 7 (automated) filter wheels, for R, G, B, L, S2, H-a, and O3 images. These are all combined to provide a multi-spectral image, but to provide temporal averaging, I'll typically shoot a few hundred frames over a session, with the mount tracking what I'm shooting.
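The temporal-averaging step described above boils down to stacking many aligned frames per filter and combining them to suppress noise. A minimal sketch with tiny synthetic frames and assumed filter names; a real pipeline would also calibrate, align, and reject outliers:

```python
# Median-combine a stack of mono frames per filter (temporal averaging).
# Frame sizes here are tiny stand-ins for the ~24 MP frames described above.
import numpy as np

def stack_frames(frames: np.ndarray) -> np.ndarray:
    """Combine a (n_frames, height, width) stack into one denoised frame."""
    return np.median(frames, axis=0)

rng = np.random.default_rng(0)
filters = ["R", "G", "B", "L", "S2", "Ha", "O3"]
stacks = {}
for f in filters:
    # stand-in for a few hundred frames shot through this filter
    frames = rng.normal(100.0, 5.0, size=(50, 64, 64))
    stacks[f] = stack_frames(frames)

print({f: img.shape for f, img in stacks.items()})
```

The median is one common combine choice because it also rejects transient artifacts (satellite trails, cosmic-ray hits) that a plain mean would smear in.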
> An M4 Max has about the same power as an M1 Ultra.

I think that depends on your use-case. See my reply just above, but I got about 1.5x the performance out of the M4 Max over the M1 Ultra.
The Ultra had the advantage that no new design was needed - just an interconnect, doubling cost and power. I could imagine they could design a Max-like chip with 16 or 20 instead of 12 performance cores. I can't quite see why you wouldn't base it on an M4 Max with extra cores instead of an M3 with extra cores.
> Wow, it's a $1500 price bump to add 4 performance cores to the Ultra. I'll consider the base Ultra, but there's no way I'll pay $1500 for another 4 cores.

Not cheap, yes, but $1,500 also includes 20 more GPU cores.
> 512GB of RAM on your desktop is amazing.

Not unusual in this price range for desktop workstations - but yeah, it's cool here as it means it can benefit from that insane 819.2 GB/s bandwidth.
> My 2022 $5000 M1 Ultra 128GB Mac Studio is obsoleted by the M4 Mac Mini, however, so these machines have a short relevance span, even if they are usable for more than 3 years.

The fact that someone would consider a machine obsolete the moment a better one appears just baffles me.
> Honest question: what are the people who need the Ultras doing with them?

Currently I've got an M2 Ultra with 128GB RAM. I work on film/streaming VFX. For me it's software development and evaluating Houdini sims, Maya animation and rigging, Nuke compositing, rendering, etc. Mainly I need the RAM, but the CPU/GPU performance is great too.
> What I don't understand is, since the "Ultra" chips are just two Max chips fused together, shouldn't it be (relatively) trivial for them to make an M4 Ultra given the M4 Max already exists?

When the M4 first came out there were reports that the architecture of the M4 was quite different from the M1 through M3 architectures. This led to two different speculations in the industry.
> The existence of the M3 Ultra puts to rest some lightly sourced speculation from last year, suggesting that the M3 Max was shipping without the silicon used to fuse two Max chips together into a single Ultra chip.

The M3 Max die (TMQG40) did not have the D2D interface. This is a different die. If we eventually get the die markings from the M3 Ultra, I reckon it will bear this out. The original source for this is TechanLye, which is a completely reputable reverse-engineering firm.
> Not cheap, yes, but $1,500 also includes 20 more GPU cores.

Thanks, that's a good point. I'm personally focused on the CPU, but obviously others may benefit from more GPU cores.
> Honest question: what are the people who need the Ultras doing with them?

In my case I guess you could say "data science" (although I'm old, so I'd call it "statistics").
> Fwiw it's nice of Apple to admit they are using M3, but it's super confusing to the consumer that M3 Ultra is better than M4 Max.

At $4,000 before add-ons, I don't think a consumer will be considering the M3 Ultra at all. The pro who knows what they need will lap these up, but the Ultra is not targeted at the consumer.
> @Andew Cunningham it looks like you've got a copy paste error in your table there—not every Ultra chip has 819.2 GB/s memory bandwidth.

Exactly - it is clearly a revised die for the combined M3 Max, considering they have Thunderbolt 5 and can each reach 256 GB (= 512 GB total).

Also:

> The M3 Max die (TMQG40) did not have the D2D interface. This is a different die. If we eventually get the die markings from the M3 Ultra, I reckon it will bear this out. The original source for this is Hiroharu Shimizu of TechanLye, which is a completely reputable reverse-engineering firm.