Apple has discontinued the Mac Pro – but it’s just the first of the tower computers to go. The rest will follow soon.
Fruit-sniffers extraordinaire 9to5Mac got the news yesterday, complete with official confirmation from Apple itself. It's now official, but there have been warning signs for months – in November 2025, Bloomberg's Mark Gurman said "The Mac Pro is on the back burner."
The phantom fruit-flingers of Silicon Valley launched the seven-thousand-buck Apple Silicon-based Mac Pro in June 2023, with an M2 Ultra SoC. It sported seven PCIe slots – but the problem was that cash-rich customers couldn't add the sorts of expansion that normally go into a PCIe slot, so much so that Apple publishes a page about PCIe cards you can install in your Mac Pro (2023). Notably, the machine did not support add-on GPUs: only the GPU integrated into the CPU complex, along with the machine's RAM and primary flash storage. The machine also had no RAM expansion whatsoever.
Presumably, this limited its appeal for many traditional buyers, and the machine never saw an M3 or M4 model, let alone the M5 SoC that The Register covered shortly before Bloomberg called the Arm64 cheesegrater’s fate.



There are some memory latency benefits to putting memory on a single chip, but to date, that’s largely been handled by adding cache memory to the CPU, and later adding multiple tiers of it, rather than eliminating discrete memory.
The first personal computer I used had 4kB of main memory.
My current desktop has a CPU with 1MB of L1 cache, 16MB of L2 cache, 128MB of L3 cache, and then the system as a whole has 128GB of discrete main memory.
Most of the time, the cache just does the right thing, and for software that is highly performance-sensitive, one might use a tool like Valgrind's cachegrind to profile the critical bits and optimize them to minimize cache misses.
I could believe that maybe, say, one could provide on-core memory that the OS is more aware of – give it more control over the memory tiers, maybe restructure the present system. But I'm more dubious that we'll say "there's no reason to have a tier of expandable, volatile storage off-CPU at all on desktops".
EDIT: That argument is mostly a technical one; here's another, from a business standpoint. I expect PC builders have a pretty substantial business reason not to move to SoCs. Right now, PC builders can, to some degree, use price discrimination to convert consumer surplus into producer surplus: a consumer will typically pay disproportionately more for a computer with more memory, for example, when they purchase from a given vendor. If the system is instead sized at the CPU vendor, then the CPU vendor will do the same thing, probably more effectively, since there's less competition in the CPU market, and it'll be the PC builder watching that money head over to the CPU vendor as they pay a premium for high-end SoCs.
In Apple's case, that's not a factor, because Apple has vertically-integrated production: they make their own CPUs, so Apple's PC-builder guys aren't concerned about Apple's CPU guys extracting money from them. But Dell or HP or suchlike don't manufacture their own CPUs, and thus have a business incentive to maintain a modular system. The exception would be if the PC market as a whole transitions to a small number of vertically-integrated businesses that look like Apple, with one or two giant PC makers who basically own their supply chain – but I haven't heard about anything like that happening.
My parents bought an Acer Pentium 55 (yeah, the one with the floating point issues) after having the 8088 and 386 custom built. It was such a shitshow that when I headed to college, we considered a DEC Alpha … in the end, I got a P-II 266. 64MB of RAM and the worst reliability I’ve ever seen in a hard drive. My roommate had a K6-2 233 with 32MB of RAM. His computer never crashed. For obvious reasons, I built a K6-2 300 system, and I’d not return to Intel for a decade.