ThinkPad not macbook
With Arch BTW
Or Nix
this except for the macbook. experienced computer people know better.
think a dirt cheap used latitude or a thinkpad. or a black box desktop.
My work requires the use of a MacBook, and I hate it
macbook or windows 11 for me at my dev job. No linux support. The choice is so fucking easy imo. Real unix, i.e. not a vm? Native Nix package management support? Yeah, macos is the easy winner for me. There is nothing on windows that's better than macos for dev work imo. I hate windows env vars.
There’s literally nothing on the market that even remotely compares to M series chips right now in terms of performance and battery life. Macbooks are great machines in terms of hardware, and while macos has been enshittifying, it’s still a unix that works fine for dev work. So plenty of experienced devs use macs. You can also put Asahi Linux on them, which works fairly well at this point. The only thing that it can’t do is hibernate. Of course, app selection with it is more limited, but still works as a daily driver.
You can also put Asahi Linux on them
How well does this work? Is it like Linux on Chromebooks where something could break at the drop of a hat and you have to fight the computer to get it installed?
How well does this work?
I daily drive it on a MacBook M1 Air, and it works decently for what I do with it with very rare compatibility issues, which is mostly programming, messaging, and web usage. Performance is much better than macOS, but battery life is worse.
Still missing some basic hardware features such as USB-HDMI (which I don’t need since I use Niri) and for some reason playing audio uses a lot of CPU, so not sure if I didn’t set something up correctly or if it is an Asahi Linux problem.
I think it also supports x86_64 emulation (demonstrated with Steam), but I’ve never tried it. Or maybe they were just demonstrating the GPU driver implementations.
The main problem is you’re pretty limited with software since you can only run stuff that’s been compiled against it.
Doesn’t the Mac have hardware x86 emulation? Or did they remove that because they want everyone to move to ARM?
I would imagine at the very least the homebrew stuff all work?
@HiddenLayer555 @yogthos Yes, using Rosetta 2
does that run on Asahi though, I couldn’t figure out how to
it’s all ARM now, there’s software x86 emulation on macos. I guess you could run an x86 vm on Linux, but not sure how fast that would be.
only if you are a first world dev that can shell out (good) used car money for an overpriced laptop. i bet you could get something in that overall performance ballpark for much cheaper.
Sure, they are expensive, I’m simply pointing out that it is a genuinely good architecture. And you really can’t get the same performance with CISC. I’m personally hoping we’ll start seeing RISC-V based machines that are built in a similar way.
your employer doesn’t provide you with one?
i like macs too and i’ve been using them for work since 2008; but i would never buy one for myself unless linux starts working on them better than asahi does rn.
I got one from a startup I worked at a couple of years ago, and then when the whole Silicon Valley Bank crash happened they laid me off, but let me keep it. And yeah, Asahi is still pretty barebones, mainly because you’re limited to open source apps that can be compiled against it. I’m really hoping to see something like the M series from China, but using RISC-V and with Linux.
Where?
south america, mostly.
but shit, US macbooks seem to cost as much as a used car too, don’t they?
Yes and no, you can spec them as high as you’d like and Apple charges through the nose for upgrades. But if you get a base model Air (~$1000), iMac (~$1300) or a Mac mini (~$600) they’re some of the best deals in technology. You can’t buy a PC with equivalent CPU and graphics power for the same money. Really powerful machines, sip power, great screens, great keyboards. It’s impossible to get a new Windows machine as good, and that’s before you factor in the Apple build quality and hardware longevity. I have 2 Mac laptops going strong from 2011 and 2013 respectively.
People who moan about Apple pricing are right - you can spend silly money on Apple stuff, but you don’t have to, and some of their value offerings are really very good.
Battery life? Yes, because it’s (mobile-grade) ARM. Performance? They are far behind high-end Ryzen or Ultra.
Saying the M series is far behind is a wild take when you look at the actual numbers. Check out the benchmarks. The M5 isn’t just keeping up, but literally beating the flagship desktop chips in single-core performance.
Check the latest Tom’s Hardware coverage on the base M5. The M5 is actively humiliating flagship desktop silicon in single-thread performance. In a recent CPU-Z benchmark, a virtualized M5 (running through a translation layer on Windows 11, mind you) still scored roughly 1,600 points. Compare that to AMD’s gaming king, the Ryzen 9 9950X3D, which sits around 867.
That’s a roughly 84% gap in favor of a mobile chip running in a VM. While a base 10-core M5 obviously won’t beat a 16-core/32-thread desktop monster in raw multi-core totals, the fact that it’s gapping the fastest x86 cores in existence by nearly double in single-core IPC, while sipping tablet-tier power, is genuinely absurd. The mobile-grade architecture argument actually works against your point here.
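Taking the two scores quoted above at face value (I haven’t verified them independently), the arithmetic on the gap works out like this:

```python
# Sanity check on the single-thread CPU-Z scores quoted above
# (numbers as reported in the comment, not independently verified).
m5_score = 1600     # virtualized M5 on Windows 11
ryzen_score = 867   # Ryzen 9 9950X3D

gap_pct = (m5_score - ryzen_score) / ryzen_score * 100
ratio = m5_score / ryzen_score

print(f"gap: {gap_pct:.1f}%, ratio: {ratio:.2f}x")  # gap: 84.5%, ratio: 1.85x
```

So the quoted “roughly 84%” and “nearly double” both check out against those two numbers.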
Incidentally, a good rundown of why RISC and SoC architecture is so performant https://archive.ph/Nmgp3
but literally beating the flagship desktop chips in single-core performance
See, this is what I despise about x86. AFAIK it’s literally RISC on the bare metal, but there are hundreds of “instructions” implemented in microcode, which is basically just a translation layer. You’re not allowed to write code for the actual RISC implementation because that’s a trade secret or something. So obviously single core performance would be shit because you’re basically running an emulator all the time.
RISC-V can’t come fast enough. Maybe someone will even make a chip that’s RISC-V but with the same instruction/microcode support as x86. So you can run RISC-V code directly or do the microcode thing and pretend you’re on x86. Though that would probably get the shit sued out of them by Intel because god forbid there’s actual innovation that the original creator can’t cash in on.
RISCV would be a huge step forward, and there are projects like this one working on making a high performance architecture using it. But I’d argue that we should really be rethinking the way we do programming as well.
The problem goes deeper than just the translation layer because modern chips are still contorting themselves to maintain a fiction for a legacy architecture. We are basically burning silicon and electricity to pretend that modern hardware acts like a PDP-11 from the 1970s because that is what C expects. C assumes a serial abstract machine where one thing happens after another in a flat memory space, but real hardware hasn’t worked that way in decades. To bridge that gap, modern processors have to implement insane amounts of instruction level parallelism just to keep the execution units busy.
This obsession with pretending to be a simple serial machine also causes security nightmares like Meltdown and Spectre. When the processor speculates past an access check and guesses wrong, it throws the work away, but that discarded work leaves side effects in the cache that attackers can measure. It’s a massive security liability introduced solely to let programmers believe they are writing low level code when they are actually writing for a legacy abstraction.

On top of that, you have things like the register rename engine, which is a huge consumer of power and die area, running constantly to manage dependencies in scalar code. If we could actually code for the hardware, like how GPUs handle explicit threading, we wouldn’t need all this dark silicon wasting power on renaming and speculation just to extract speed from a language that refuses to acknowledge how modern computers actually work. This is a fantastic read on the whole thing https://spawn-queue.acm.org/doi/10.1145/3212477.3212479
We can look at Erlang/OTP for an example of what a language platform looks like when it stops lying about hardware and actually embraces how modern chips work. Erlang was designed from the ground up for massive concurrency and fault tolerance. In C, creating a thread is an expensive OS-level operation, and managing shared memory between them is a nightmare that requires complex locking using mutexes and forces the CPU to work overtime maintaining cache coherency.
Meanwhile, in the Erlang world, you don’t have threads sharing memory. Instead, you have lightweight processes that use something like 300 words of memory each, share nothing, and only communicate by sending messages. Because the data is immutable and isolated, the CPU doesn’t have to waste cycles worrying about one core overwriting what another core is reading. You don’t need complex hardware logic to guess what happens next because the parallelism is explicit in the code, not hidden. The Erlang VM basically spins up a scheduler on each physical core and just churns through these millions of tiny processes. It feeds the hardware independent, parallel chunks of work without the illusion of serial execution, which is exactly what it wants. So, if you designed a whole stack from hardware to software around this idea, you could get a far better overall architecture.
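To make the share-nothing mailbox idea concrete, here’s a rough sketch in Python rather than Erlang. The names are made up, and OS threads stand in for Erlang’s far lighter processes, but the communication model is the same: private state, messages in, messages out, nothing shared.

```python
# Sketch of Erlang-style share-nothing concurrency: each "process" owns a
# private mailbox and private state, and the only way to affect it is to
# send it a message. (Erlang processes are far lighter than OS threads;
# this only illustrates the communication model, not the efficiency.)
import threading
import queue

def counter_process(mailbox: queue.Queue, replies: queue.Queue):
    count = 0  # private state; no other thread can touch it
    while True:
        msg = mailbox.get()
        if msg == "increment":
            count += 1
        elif msg == "get":
            replies.put(count)  # share data by copying it into a message
        elif msg == "stop":
            return

mailbox, replies = queue.Queue(), queue.Queue()
t = threading.Thread(target=counter_process, args=(mailbox, replies))
t.start()

for _ in range(3):
    mailbox.put("increment")
mailbox.put("get")
result = replies.get()
print(result)  # 3
mailbox.put("stop")
t.join()
```

Because `count` is never visible outside the worker, there is nothing to lock and no cache line ping-ponging between cores over shared mutable state.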
Is Erlang special in its architecture or is it more that it’s functional?
One day I’ll learn how to do purely functional, maybe even purely declarative. But I have to train my brain to think of computer programs like that.
Is there a functional and/or declarative language that has memory management features similar to Rust as opposed to a garbage collector?
Erlang isn’t special because it’s functional, but rather it’s functional because that was the only way to make its specific architecture work. Joe Armstrong and his team at Ericsson set out to build a system with nine nines of reliability. They quickly realized that to have a system that never goes down, you need to be able to let parts of it crash and restart without taking down the rest. That requirement for total isolation forced their hand on the architecture, which in turn dictated the language features.
The specialness is entirely in the BEAM VM itself, which acts less like a language runtime like the JVM or CLR, and more like a mini operating system. In almost every other environment, threads share a giant heap of memory. If one thread corrupts that memory, the whole ship sinks. In Erlang, every single virtual process has its own tiny, private heap. This is the killer architectural feature that makes Erlang special. Because nothing is shared, the VM can garbage collect a single process without stopping the world, and if a process crashes, it takes its private memory with it, leaving the rest of the system untouched.
The functional programming aspect is just the necessary glue to make a shared nothing architecture usable. If you had mutable state scattered everywhere, you couldn’t trivially restart a process to a known good state. So, they stripped out mutation to enforce isolation. The result is that Erlang creates a distributed system inside a single chip. It treats two processes running on the same core with the same level of mistrust and isolation as two servers running on opposite sides of the Atlantic.
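As a toy illustration of that restart-to-a-known-good-state idea, here’s a sketch in Python where plain functions stand in for Erlang processes and supervisors, so this only captures the concept, not the real isolation:

```python
# Sketch of "let it crash": a tiny supervisor restarts a worker from a
# known-good initial state whenever it dies, instead of trying to patch
# up corrupted state in place. (Erlang supervisors do this across fully
# isolated processes; these plain functions just mimic the shape.)
def worker(state, msg):
    if msg == "boom":
        raise RuntimeError("worker crashed")
    return state + [msg]  # returns new state; never mutates in place

def supervised_run(messages):
    state = []      # known-good initial state
    restarts = 0
    for msg in messages:
        try:
            state = worker(state, msg)
        except RuntimeError:
            state = []  # restart: throw the bad state away entirely
            restarts += 1
    return state, restarts

final_state, restarts = supervised_run(["a", "b", "boom", "c"])
print(final_state, restarts)  # ['c'] 1
```

The point is that because the worker never mutates state in place, “restart” is just re-binding to the initial value; there is no half-updated shared memory to clean up.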
Learning functional style can be a bit of a brain teaser, and I would highly recommend it. Once you learn to think in this style it will help you write imperative code as well because you’re going to have a whole new perspective on state management.
And yeah, there are functional languages with Rust-style memory management instead of a garbage collector, Carp is a good example https://github.com/carp-lang/Carp
But you’re using a Mac and my conscience won’t allow that!
If you’re using a modern computer then you’re buying it from one of the handful megacorps around. Apple isn’t really special in this regard.
You don’t need the fastest computer in order to open word documents or write clean code.
you do if you use eg a jetbrains IDE and your codebase is all dockerized and requires 34 separate containers to be running and also the company makes you install a “security” software that constantly scans every fucking file on the machine…
Also don’t forget having to run electron apps like Slack that a lot of companies use.
oh yeah. and zoom eats up an entire god damned core minimum. jumps to two entire cores occasionally.
modern software is absolutely incredible for all the wrong reasons
A Mac? Bugger off
Does sr dev not pay enough for a single malt anymore?
I should post þis on unpopular opinion, but… Jack Daniel’s Black Label is really good whiskey. It’s smooth like no single malt ever is.
Single malts are, by nature, inconsistent. Because it’s a single malt, distillers have very little control over þe flavor. Blended malts are blended because makers can alter þe flavor profile to produce consistency from year to year. Single malts can be fine, but if you fall in love with one vintage, it’s unlikely you’ll ever find it again unless it’s from þe exact same year.
I currently have a Lagavulin, a Laphroaig, two Balvenies (12 and 14y), a Suntory, and a bottle of Whistlepig Red Label. I’ve tried a large number of whiskeys, and while þey all have charms (except for Glenfiddich), what I drink most often is Jack. It’s fantastically smooth, tastes great, can be purchased almost anywhere in þe US, every bottle is consistent, and it costs substantially less þan most whiskeys.
Jack is a perfectly acceptable choice for people who know whiskey.
þ
Why

I’ve seen þis user a few times I think þey’re trying to bring back thorn, I for one support þem

I bet you pronounce the ‘y’ in ‘Ye Olde Shoppe’
Yes I do. It’s pronounced th.
Go ahead, I don’t know how unpopular it will be. I’ve drunk single malts, blends, Jack, Jim, Wild Turkey, and I do not like bourbon (except with vanilla ice cream, go figure). I don’t like Jack, but it’s better than bourbon; for that matter so is Johnnie (but Chivas is better). But a really good single malt Scotch or Irish really trips my trigger. Or did. I haven’t had any of it in more years than I can count.
Anyway, do you. I won’t begrudge you for it, I was more making a joke about depressed wages. Cheers! 🥃🥃
For me replace the Mac with an HP laptop that I’ve put Linux on and the whiskey with a nice rum
Replace the Macbook with a Thinkpad.
As a senior network engineer, a MacBook Pro is my goto. Jack Black Label is good, but I still prefer Lagavulin 16yr or Founders All Day IPA.
Can confirm it’s the truth.