> Wow, it's a $1500 price bump to add 4 performance cores to the Ultra. I'll consider the base Ultra, but there's no way I'll pay $1500 for another 4 cores.

Those 4 performance cores allow you to bump the RAM from 256GB to 512GB (for an additional $4,000). So for some server uses it's an overall $5,500 additional cost to get 512GB of RAM.
> Knowing nothing about the details, but making a completely uneducated guess: you have to have two perfect M3 Max chips and a perfect interposer, and you've got to join them up perfectly. I'm thinking this is an issue with yields. If doing it were easy, we wouldn't have seen delays in the M3 Ultra rollout, and the M4 Ultra would be available on day one. (Do the chip designs have additional P and E cores to improve yields?)

It's not the yield per se, but the two dies need to have matched performance curves. They're probably also from a lower leakage bin.
> Omitting FCP is fine, but you'll still want a display, keyboard, trackpad and AppleCare+, which brings the total to... $17,064 USD. And that's with just a single Studio Display, not even a Pro Display XDR!
>
> edit: With 8 Pro Display XDRs you can get the total up to $74,592, but that's probably a little excessive.

It was a joke. The people who need that machine have jobs that make it worth it. So it doesn't bother me. Although, 490+ GB of VRAM on a bus that is nearly a KB is amazing.
> High end video, VFX, and modeling.

Genuine question: what kind of high end video? Like finishing, color, or ad work? Most post companies I know that do scripted TV like The Bear use basic stations and 1080p proxy workflows with Lucid.
> Honest question: what are the people who need the Ultras doing with them?

Not the Ultra, but this question made me remember my old workplace over 15 years ago.
Honest question: what are the people who need the Ultras doing with them?
> Apple didn't make a decision, rather, something technical left them no choice but they will never admit to that.
>
> The only decision they made was what spin they were going to use to explain it.
>
> -kp

Since Apple is not going to explain this, do tell us what exactly is the technical problem here. I mean, you sound way too confident not to have some inside information. C'mon, don't be coy.
> since they keep cancelling and delaying a theoretical quad tile Extreme chip

Can a theoretical chip be canceled or delayed?
> Can a theoretical chip be canceled or delayed?

Well, they've repeatedly cancelled and delayed the Apple Television set.
> It was a joke. The people who need that machine have jobs that make it worth it. So it doesn't bother me. Although, 490+ GB of VRAM on a bus that is nearly a KB is amazing.
>
> I don't think such a person is buying a Pro Display XDR every time either.

I got the joke, and was also kidding, for the most part.
> Can't speak for everyone, but for me I have a home-grown astronomy app. The telescope shoots at 24MP, in mono, with 7 (automated) filter wheels, for R, G, B, L, S2, H-A, O3 images. These are all combined to provide a multi-spectral image, but to provide temporal averaging, I'll typically shoot a few hundred frames over a session, with the mount tracking what I'm shooting.
>
> I'll frequently end up with 50,000 MP of image data to reconcile, which is a mathematically intensive process, even when I do it on the GPU. You need to register all the frames with each other before combining them, to get the best image; you need to watch out for Elon [redacted] Musk's satellites all over the place, as well as the occasional plane going overhead, etc. I use an FFT auto-correlation on the GPU to self-register the images, but there are other methods.
>
> Once all that is done, there's a massive "combine them into one image" step to do... It ends up looking gorgeous, though. Overall processing can take hours with everything maxed out.
>
> I used to have an M1 Ultra, bought it as soon as it came out. I upgraded to an M4 Max MBP when that came out, and now (even if it's "only" the M3) I'm looking at 2x the CPU/GPU/memory of the M4 Max if I got the M3 Ultra... I still love the portability of the MBP, so I'm a little torn right now, but I am considering it...

I would love to see the output of that. One of those gorgeous images, as I can only imagine they are indeed.
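For readers curious what that registration step looks like in principle, here is a minimal CPU-side sketch of FFT-based cross-correlation (phase correlation) in NumPy. This is not the commenter's code: the helper names and the integer-pixel `np.roll` alignment are illustrative assumptions, and the real app presumably does the equivalent (plus sub-pixel refinement) in Metal on the GPU.

```python
import numpy as np

def register_offset(ref, frame):
    """Return the (dy, dx) shift that, applied to `frame`, best aligns it
    with `ref`, estimated via FFT phase correlation."""
    cross = np.fft.fft2(ref) * np.conj(np.fft.fft2(frame))
    cross /= np.abs(cross) + 1e-12            # keep phase only
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # unwrap indices into signed shifts
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return dy, dx

def stack_frames(frames):
    """Register every frame against the first one and average the stack
    (temporal averaging to beat down noise)."""
    ref = frames[0].astype(np.float64)
    acc = ref.copy()
    for frame in frames[1:]:
        dy, dx = register_offset(ref, frame)
        # integer-pixel, wrap-around shift; a real pipeline would resample
        acc += np.roll(frame.astype(np.float64), shift=(dy, dx), axis=(0, 1))
    return acc / len(frames)
```

The O(N log N) FFTs are what make registering hundreds of 24MP frames tractable at all, and they map well onto a GPU, which is presumably why this workload chews through the Ultra's GPU cores and unified memory.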
> One observation that stuck with me was: while they doubled the memory size they're using in their chips over the M2 Ultra, the memory bandwidth is still the same as it was in the M1 Ultra.
>
> Blazing fast, but it's interesting that that component hasn't really changed since the M1.

For the Ultras released thus far, this is true.
> Well, they've repeatedly cancelled and delayed the Apple Television set.

Gene? Is that you?
> The shared memory is high enough bandwidth you can run large language models locally at decent speed.

The first consumer PC with 512GB of it, I think, as of right now.
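For a sense of why the bandwidth figure matters for local LLMs, here is a rough back-of-envelope sketch (the model sizes are my own illustrative numbers, not from the thread): a memory-bandwidth-bound decoder has to stream roughly the whole active weight set per generated token, so bandwidth divided by model size gives an upper bound on tokens per second.

```python
# Back-of-envelope decode speed for a memory-bandwidth-bound LLM.
# Assumption: every generated token streams ~all active weights from unified memory.
bandwidth_gb_s = 819.2   # M3 Ultra's quoted unified-memory bandwidth
model_size_gb = 400.0    # illustrative: a very large model quantised to ~4 bits/weight

upper_bound_tok_s = bandwidth_gb_s / model_size_gb
print(f"~{upper_bound_tok_s:.1f} tokens/s upper bound")   # ~2.0 tokens/s

# A 70B model at 4-bit (~35-40 GB of weights) comes out north of 20 tokens/s by the
# same estimate; real throughput is lower once compute, KV-cache traffic and
# framework overhead are included.
```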
> Not cheap, yes, but $1,500 also includes 20 more GPU cores.

Looking at what a middle-of-the-road 5070 costs, not a bad deal.
> It's not literally as simple as gluing them together obviously - they use die-to-die interconnects, which requires its own design considerations. But those considerations would have already been made when the Max chip was designed and shipped. Otherwise, yes, M1 and M2 Ultra were very literally two M1/M2 Max dies joined at the interconnect interface.

The M3 manufacturing is quite mature at this point. They may have never intended to create M3 Max silicon with the interconnects on them - and interconnects add to the cost. They are likely doing a separate production run of the M3 Ultras with the interconnects added on that specific run of CPUs. There's nothing preventing Apple from changing their silicon manufacturing strategies on their M series chips based on yields and experience. I expect this is how Apple is going to operate for every generation - we're never going to nail down exactly every cycle of CPU introductions and what the features will be. Every generation is going to be slightly different based on whether engineering, manufacturing, and pricing expectations are met.
> Nowhere does Apple seem to mention M3 Ultra using UltraFusion like the previous generations.
>
> EDIT: Well, that's wrong. The M3 Ultra press release does say it uses UltraFusion with two M3 Max dies. So hell if I know what's going on.

Given that M3 Max had no interposer, I'm wondering if Apple is moving away from UltraFusion altogether and M3 Ultra is actually just its own unique die.
> Honest question: what are the people who need the Ultras doing with them?

Working on LLMs, based on what I hear from our ML teams. They want to do as much as possible locally.
> What I don't understand is, since the "Ultra" chips are just two Max chips fused together, shouldn't it be (relatively) trivial for them to make an M4 Ultra given the M4 Max already exists?

yes
> This is entirely from memory, but doesn't the M3 lack the AV1 encode functionality that M4 has? This seems most likely the reason for segmentation here, to try and force studios into full-fat Pros with coprocessors/GPUs instead of Studios.

Would a production studio be using hardware compression anyway? At least for H.264 and H.265, Apple's hardware compression was worse than FFmpeg's software encoders, unless you needed soft-realtime compression or were particularly concerned about power.
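If anyone wants to check that claim on their own footage, here is a rough way to compare the two paths with a stock ffmpeg build (the `hevc_videotoolbox` and `libx265` encoder names are the standard ffmpeg ones on macOS; the source filename and bitrate are made up for illustration):

```python
import subprocess

SRC = "master.mov"  # hypothetical source clip

# Same target bitrate for Apple's hardware HEVC encoder (VideoToolbox) and the
# software x265 encoder, so quality per bit can be compared afterwards
# (file size, visual inspection, VMAF, etc.).
encodes = {
    "hw_videotoolbox.mp4": ["-c:v", "hevc_videotoolbox", "-b:v", "10M"],
    "sw_x265.mp4": ["-c:v", "libx265", "-b:v", "10M", "-preset", "slow"],
}

for out_name, codec_args in encodes.items():
    subprocess.run(["ffmpeg", "-y", "-i", SRC, *codec_args, "-an", out_name], check=True)
```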
> Not the Ultra, but this question made me remember my old workplace over 15 years ago.
>
> Half of the employees were developers working on a Java stack that took around 20 minutes to restart if you wanted to test your changes. This pushed you to do a lot of changes at a time, or you would spend all day waiting.
>
> Unrelated, somebody really wanted an SSD in their new workstation and it was accepted as a "test" (due to the extremely high cost). Within days of the workstation's arrival, SSDs were ordered for every developer's computer that allowed swapping the drive. It dropped the time to restart the services by over 90%, and while the drive was expensive as a computer component, it was not expensive when gaining hours of work time per week.
>
> The only clear advantage the Ultra has is the ability to have more RAM. If you need it, then either it will make a huge difference or make something new possible. But for some work, just doing it 40% faster can make the computer seem cheap.

I do FPGA development, which has 1-8 hour build times. Cutting those down can really improve the number of test cycles you can do in a day. FPGAs are notoriously opaque, so sometimes when debugging a problem I'd use the time during the first build to try 4 different modifications to try to smoke out the bug, set them all building, and then test them once each one finished building. That kind of debugging is really quite mind bending...
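That "kick off several candidate builds and test whichever finishes first" workflow is easy to script; a small sketch follows (the variant names and the `make bitstream` invocation are hypothetical stand-ins for whatever vendor toolchain you actually run):

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor, as_completed

# Hypothetical: each variant is a branch/define combination that might fix the bug.
VARIANTS = ["fix_reset_sync", "fix_cdc_fifo", "widen_timing_margin", "disable_retiming"]

def run_build(variant: str) -> str:
    # Placeholder for the real FPGA toolchain invocation (Vivado, Quartus, ...).
    subprocess.run(["make", "bitstream", f"VARIANT={variant}"], check=True)
    return variant

# Launch all candidate builds in parallel and test each one as it completes,
# instead of waiting on them one at a time for hours.
with ThreadPoolExecutor(max_workers=len(VARIANTS)) as pool:
    futures = {pool.submit(run_build, v): v for v in VARIANTS}
    for done in as_completed(futures):
        print(f"{done.result()} finished building; try it on hardware now")
```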
> Honest question: what are the people who need the Ultras doing with them?

GPU 3D rendering. The math is quite simple in that case: an M3 Ultra will roughly render twice as fast as an M3 Max. A 1-hour render on an M3 Max will finish in 30 minutes on an Ultra. At the end of the day you'll have saved quite some time, which can justify the price.
> Honest question: what are the people who need the Ultras doing with them?

Late game Civ VI
> The best news about new Apple chips is bargains for older Macs, as the great hermit crab new shell shuffle paradigm eventually puts a computer that has more power than I need at a price I'm willing to pay.
>
> Yours,
>
> that typical user you all talk about!

Typically, refurbished Apple devices are 15% below the price of brand new, as long as brand new is still on sale. If a product stops being sold because a better replacement arrives at the same price, they typically subtract 15% because there is a better product and another 15% for refurbished.
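Worked out, that pricing heuristic looks like this (a sketch of the rule of thumb described above; the $3,999 starting price is just an example):

```python
# Rule of thumb from the comment above: 15% off for refurbished,
# plus another 15% once a better replacement ships at the same price.
def expected_refurb_price(new_price: float, superseded: bool) -> float:
    price = new_price * 0.85              # refurbished discount
    if superseded:
        price *= 0.85                     # a better product now exists
    return round(price, 2)

print(expected_refurb_price(3999, superseded=False))  # 3399.15
print(expected_refurb_price(3999, superseded=True))   # 2889.28
```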
> The M3 manufacturing is quite mature at this point. They may have never intended to create M3 Max silicon with the interconnects on them - and interconnects add to the cost. They are likely doing a separate production run of the M3 Ultras with the interconnects added on that specific run of CPUs. There's nothing preventing Apple from changing their silicon manufacturing strategies on their M series chips based on yields and experience. I expect this is how Apple is going to operate for every generation - we're never going to nail down exactly every cycle of CPU introductions and what the features will be. Every generation is going to be slightly different based on whether engineering, manufacturing, and pricing expectations are met.

And if they build too many with the interconnect they can still sell them as M3 Max.
> Since Apple is not going to explain this, do tell us what exactly is the technical problem here. I mean, you sound way too confident not to have some inside information. C'mon, don't be coy.

Apple doesn't sell that many high-end chips. And if you look at how much faster the M4 Max is than the M1 Max, many people needed an M1 Ultra, but half of those will be happy with an M4 Max.
> Without taking contention etc. into account:
>
> M4 Max: 16 M4 cores => 1.25 × 16 M3 cores => 20 M3 cores
>
> M3 Ultra: 32 M3 cores / 20 M3 cores => 1.6× the speed of the M4 Max
>
> This doesn't take into account the GPU performance increase - as I said above, a huge amount of the stuff I use these beasts for now is using Metal and GPU compute. 80 GPU cores, all with access to 512GB of RAM, is ... breathtaking. I chew up RAM, CPU and GPU like there's no tomorrow with all the image-registration stuff.
>
> I've also been thinking about how I can leverage the "Neural Engine", which is effectively a lot of matrix multipliers. I do a lot of matrix multiplication... The Ultra has 2x the NE core count as well.

In terms of numbers you may be right.
> FWIW it's nice of Apple to admit they are using M3, but it's super confusing to the consumer that M3 Ultra is better than M4 Max.

The average consumer is not the target market for these expensive machines. The M3 Ultra will be better than the M4 Max for those who need a lot of GPU power or fast RAM. Video and audio processing and LLMs are some uses that come to mind. Those tasks are less dependent on the raw speed of the CPUs and more on GPU and RAM.
I have an M4 Pro Mac mini with the following specs, still under the 14-day return policy. Should I return it and get the base model M4 Max Mac Studio, or just keep the M4 Pro Mac mini? (For value, work, and stability purposes.)
I always keep a Mac for 3-4 years. I just do simple Photoshop work here and there and very light video editing, maybe once or twice a month, but I use many applications at once across 3 displays. Some apps run 24/7, and even some Chrome tabs run 24/7 (transport GPS tracking in a Chrome tab), plus many other tabs stay open 24/7 for other purposes. Heavy use of Canva. I also always keep a CRM / help desk tool and other business software open for a few brands, transfer large amounts of files to and from a Synology NAS, and look after 7-10 VPSes and dedicated servers plus 16-20 PHP SaaS platforms (all requiring heavy browser usage), along with a terminal (like Tabby). I also want to try self-hosted AI on the Mac, like Ollama and DeepSeek, run some other networking software, and run some Docker instances as well.
Currently I have:
M4 Pro Mac mini
14‑core CPU, 20‑core GPU,
64GB unified memory
1TB SSD storage
10 Gigabit Ethernet
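Since local models like Ollama-served DeepSeek variants are on the wish list, a rough memory check is worth doing before choosing between 64GB and a bigger Studio. The overhead factor and model sizes below are illustrative guesses, not benchmarks:

```python
# Rough check: will a quantised model fit comfortably in unified memory?
def approx_memory_gb(params_billion: float, bits_per_weight: float, overhead: float = 1.2) -> float:
    """Weights * quantisation width, padded ~20% for KV cache and runtime overhead."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9 * overhead

print(approx_memory_gb(14, 4))   # ~8 GB   -> easy on a 64 GB Mac mini
print(approx_memory_gb(70, 4))   # ~42 GB  -> tight but workable alongside other apps
print(approx_memory_gb(671, 4))  # ~403 GB -> needs the 512 GB Studio tier
```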
> It depends on what you do, but for developers there's often a solid business case for faster machines. The biggest cost for many businesses is salaries. If buying a new machine can make your devs x% more productive, then you can earn back the cost of the machine in a relatively short length of time.
>
> Even if you have a long refresh cycle, even making somebody $1 an hour more productive adds up over the working year.

Perfectly stated. It reminds me of the charity I used to be a web & software developer at, which wouldn't spend £200 or so to buy me a 2nd monitor. Quite a breathtaking false economy on their part, thinking back. That was early in my career; I'd be more forthright and demanding now!
> Not unusual in this price range for desktop workstations - but yeah, it's cool here as it means it can benefit from that insane 819.2GB/s bandwidth.

It is so funny how many buy into the bandwidth marketing bullshit argument. It is not the bandwidth that does the work, it is the cores, and if you don't have enough cores and/or they are not clocked high enough, then all the bandwidth is useless.
> It is so funny how many buy into the bandwidth marketing bullshit argument. It is not the bandwidth that does the work, it is the cores, and if you don't have enough cores and/or they are not clocked high enough, then all the bandwidth is useless.
>
> It is not hard to calculate how much memory bandwidth your system needs to not create a bottleneck there; every PC is designed like that, with fast enough RAM so the cores don't get slowed down.
>
> You do realize that if you make a drainage pipe thicker, the water that drains through it is not going to grow because of that. Once there is no bottleneck anymore with performance, thicker pipes do absolutely nothing and can never be used fully.

You buy a Mac Studio with a 32-core CPU to make use of these 32 cores. Otherwise you bought the wrong Mac.
> It is so funny how many buy into the bandwidth marketing bullshit argument. It is not the bandwidth that does the work, it is the cores, and if you don't have enough cores and/or they are not clocked high enough, then all the bandwidth is useless.
>
> It is not hard to calculate how much memory bandwidth your system needs to not create a bottleneck there; every PC is designed like that, with fast enough RAM so the cores don't get slowed down.
>
> You do realize that if you make a drainage pipe thicker, the water that drains through it is not going to grow because of that. Once there is no bottleneck anymore with performance, thicker pipes do absolutely nothing and can never be used fully.

Memory bandwidth has been a major bottleneck in computing since the dawn of time. That's why modern chips have 3+ layers of increasingly faster caches. This video illustrates just how much a modern CPU sits around waiting for memory to return results:
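A quick way to see the effect on your own machine is a STREAM-style measurement. The sketch below just times a big vector add and assumes roughly 3 × 8 bytes of traffic per element (float64, ignoring write-allocate), so treat the number as approximate:

```python
import time
import numpy as np

# STREAM-style check: a simple vector add is limited by how fast RAM can
# feed the core, not by how fast the core can add numbers.
n = 100_000_000                      # three float64 arrays, ~0.8 GB each
a = np.ones(n)
b = np.ones(n)
c = np.empty(n)

t0 = time.perf_counter()
np.add(a, b, out=c)                  # reads a and b, writes c
dt = time.perf_counter() - t0

bytes_moved = 3 * 8 * n              # approximate traffic for the operation
print(f"effective bandwidth ~ {bytes_moved / dt / 1e9:.1f} GB/s")
```

On most machines a single core saturates its share of the memory system long before it runs out of arithmetic throughput, which is exactly the waiting described above.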
> Well, they've repeatedly cancelled and delayed the Apple Television set.

The Macintosh TV came out in 1993? Were you expecting a refresh?