• ☆ Yσɠƚԋσʂ ☆@lemmy.mlOP · 16 hours ago

    There’s literally nothing on the market that even remotely compares to the M series chips right now in terms of performance and battery life. MacBooks are great machines in terms of hardware, and while macOS has been enshittifying, it’s still a Unix that works fine for dev work, so plenty of experienced devs use Macs. You can also put Asahi Linux on them, which works fairly well at this point; the only thing it can’t do is hibernate. App selection with it is more limited, of course, but it still works as a daily driver.

    • HiddenLayer555@lemmy.ml · 5 hours ago

      > You can also put Asahi Linux on them

      How well does this work? Is it like Linux on Chromebooks where something could break at the drop of a hat and you have to fight the computer to get it installed?

      • sudoer777@lemmy.ml · edited 36 minutes ago

        > How well does this work?

        I daily drive it on an M1 MacBook Air, and it works decently for what I do with it (mostly programming, messaging, and web use), with very rare compatibility issues. Performance is much better than on macOS, but battery life is worse.

        It’s still missing some basic hardware features, such as USB-C to HDMI output (which I don’t need since I use Niri), and for some reason playing audio uses a lot of CPU, so I’m not sure whether I didn’t set something up correctly or whether it’s an Asahi Linux problem.

        I think it also supports x86_64 emulation (demonstrated with Steam), but I’ve never tried it. Or maybe they were just demonstrating the GPU driver implementations.

    • ☂️-@lemmy.ml · edited 12 hours ago

      only if you’re a first world dev who can shell out (good) used car money for an overpriced laptop. i bet you could get into that overall performance ballpark for much cheaper.

        • eldavi@lemmy.ml · 5 hours ago

          your employer doesn’t provide you with one?

          i like macs too and i’ve been using them for work since 2008; but i would never buy one for myself unless linux started working on them better than asahi does rn.

          • ☆ Yσɠƚԋσʂ ☆@lemmy.mlOP · 4 hours ago

            I got one from a startup I worked at a couple of years ago, and when the whole Silicon Valley Bank crash happened they laid me off but let me keep it. And yeah, Asahi is still pretty barebones, mainly because you can basically only use open source apps that can be compiled for it. I’m really hoping to see something like the M series from China, but using RISC-V and running Linux.

        • ☂️-@lemmy.ml · edited 1 hour ago

          south america, mostly.

          but shit, US macbooks seem to cost as much as a used car too, don’t they?

          • Palacegalleryratio [he/him]@hexbear.net · 46 minutes ago

            Yes and no. You can spec them as high as you’d like, and Apple charges through the nose for upgrades. But if you get a base model Air (~$1000), iMac (~$1300), or Mac mini (~$600), they’re some of the best deals in technology. You can’t buy a PC with equivalent CPU and graphics power for the same money. They’re really powerful machines that sip battery, with great screens and great keyboards. It’s impossible to get a new Windows machine as good, and that’s before you factor in the Apple build quality and hardware longevity. I have two Mac laptops going strong from 2011 and 2013 respectively.

            People who moan about Apple pricing are right - you can spend silly money on Apple stuff, but you don’t have to, and some of their value offerings are really very good.

    • CarrotsHaveEars@lemmy.ml · 9 hours ago

      Battery life? Yes, because it’s (mobile-grade) ARM. Performance? They’re far behind a high-end Ryzen or Core Ultra.

      • ☆ Yσɠƚԋσʂ ☆@lemmy.mlOP · edited 4 hours ago

        Saying the M series is far behind is a wild take when you look at the actual numbers. Check out the benchmarks: the M5 isn’t just keeping up, but literally beating the flagship desktop chips in single-core performance.

        Check the latest Tom’s Hardware coverage on the base M5. The M5 is actively humiliating flagship desktop silicon in single-thread performance. In a recent CPU-Z benchmark, a virtualized M5, running Windows 11 through a translation layer mind you, still scored roughly 1,600 points. Compare that to AMD’s gaming king, the Ryzen 9 9950X3D, which sits around 867.

        That’s a roughly 84% gap in favor of a mobile chip running in a VM (1,600 is about 1.85× 867). While a base 10-core M5 obviously won’t beat a 16-core/32-thread desktop monster in raw multi-core totals, the fact that it’s gapping the fastest x86 cores in existence by nearly double in single-core performance, while sipping tablet-tier power, is genuinely absurd. The mobile-grade architecture argument actually works against your point here.

        https://www.tomshardware.com/pc-components/cpus/virtualized-windows-11-test-shows-apples-m5-destroying-intel-and-amds-best-in-single-core-benchmark-chinese-enthusiast-pits-ryzen-9-9950x3d-and-core-i9-14900ks-against-apples-latest-soc

        Incidentally, here’s a good rundown of why RISC and SoC architectures are so performant: https://archive.ph/Nmgp3

        • HiddenLayer555@lemmy.ml · edited 4 hours ago

          > but literally beating the flagship desktop chips in single-core performance

          See, this is what I despise about x86. AFAIK it’s literally RISC on the bare metal, but there are hundreds of “instructions” implemented in microcode, which is basically just a translation layer. You’re not allowed to write code for the actual RISC implementation because that’s a trade secret or something. So obviously single-core performance would be shit, because you’re basically running an emulator all the time.

          RISC-V can’t come fast enough. Maybe someone will even make a chip that’s RISC-V but with the same instruction/microcode support as x86, so you can run RISC-V code directly or do the microcode thing and pretend you’re on x86. Though that would probably get the shit sued out of them by Intel, because god forbid there’s actual innovation that the original creator can’t cash in on.

          • ☆ Yσɠƚԋσʂ ☆@lemmy.mlOP · 3 hours ago

            RISC-V would be a huge step forward, and there are projects like this one working on making a high performance architecture with it. But I’d argue that we should really be rethinking the way we do programming as well.

            The problem goes deeper than just the translation layer because modern chips are still contorting themselves to maintain a fiction for a legacy architecture. We are basically burning silicon and electricity to pretend that modern hardware acts like a PDP-11 from the 1970s because that is what C expects. C assumes a serial abstract machine where one thing happens after another in a flat memory space, but real hardware hasn’t worked that way in decades. To bridge that gap, modern processors have to implement insane amounts of instruction level parallelism just to keep the execution units busy.

            This obsession with pretending to be a simple serial machine also causes security nightmares like Meltdown and Spectre. When the processor speculates past an access check and guesses wrong, it throws the work away, but that discarded work leaves side effects in the cache that attackers can measure. It’s a massive security liability introduced solely to let programmers believe they are writing low level code when they are actually writing for a legacy abstraction.

            On top of that, you have things like the register rename engine, which is a huge consumer of power and die area, running constantly to manage dependencies in scalar code. If we could actually code for the hardware, the way GPUs handle explicit threading, we wouldn’t need all this dark silicon wasting power on renaming and speculation just to extract speed from a language that refuses to acknowledge how modern computers actually work. This is a fantastic read on the whole thing: https://spawn-queue.acm.org/doi/10.1145/3212477.3212479

            We can look at Erlang/OTP for an example of what a language platform looks like when it stops lying about the hardware and actually embraces how modern chips work. Erlang was designed from the ground up for massive concurrency and fault tolerance. In C, creating a thread is an expensive OS-level operation, and managing shared memory between threads is a nightmare that requires complex locking with mutexes and forces the CPU to work overtime maintaining cache coherency.

            Meanwhile, in the Erlang world, you don’t have threads sharing memory. Instead, you have lightweight processes that use something like 300 words of memory each, share nothing, and only communicate by sending messages. Because the data is immutable and isolated, the CPU doesn’t have to waste cycles worrying about one core overwriting what another core is reading. You don’t need complex hardware logic to guess what happens next, because the parallelism is explicit in the code rather than hidden. The Erlang VM basically spins up a scheduler on each physical core and just churns through these millions of tiny processes. It feeds the hardware independent, parallel chunks of work without the illusion of serial execution, which is exactly what it wants. So if you designed a whole stack, from hardware to software, around this idea, you could get a far better overall architecture.
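
            To make that concrete, here’s a minimal sketch of the model (the module and function names are just mine for illustration): two isolated processes that share no memory and interact purely by message passing.

            ```erlang
            -module(pingpong).
            -export([start/0]).

            %% spawn/1 creates a lightweight BEAM process with its own tiny private heap;
            %% the only way to interact with it is to send it a message.
            start() ->
                Worker = spawn(fun worker/0),
                Worker ! {self(), ping},         % asynchronous send, no shared memory
                receive
                    pong -> ok
                after 1000 ->
                    timeout
                end.

            worker() ->
                receive
                    {From, ping} -> From ! pong  % reply with a message, then the process ends
                end.
            ```

            Each of those processes costs a few hundred words of memory, so you can spin up millions of them and let the BEAM schedulers spread them across the cores.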

            • HiddenLayer555@lemmy.ml · edited 3 hours ago

              Is Erlang special in its architecture or is it more that it’s functional?

              One day I’ll learn how to do purely functional, maybe even purely declarative. But I have to train my brain to think of computer programs like that.

              Is there a functional and/or declarative language that has memory management features similar to Rust as opposed to a garbage collector?

              • ☆ Yσɠƚԋσʂ ☆@lemmy.mlOP · edited 3 hours ago

                Erlang isn’t special because it’s functional, but rather it’s functional because that was the only way to make its specific architecture work. Joe Armstrong and his team at Ericsson set out to build a system with nine nines of reliability. They quickly realized that to have a system that never goes down, you need to be able to let parts of it crash and restart without taking down the rest. That requirement for total isolation forced their hand on the architecture, which in turn dictated the language features.

                The specialness is entirely in the BEAM VM itself, which acts less like a language runtime like the JVM or CLR, and more like a mini operating system. In almost every other environment, threads share a giant heap of memory. If one thread corrupts that memory, the whole ship sinks. In Erlang, every single virtual process has its own tiny, private heap. This is the killer architectural feature that makes Erlang special. Because nothing is shared, the VM can garbage collect a single process without stopping the world, and if a process crashes, it takes its private memory with it, leaving the rest of the system untouched.

                The functional programming aspect is just the necessary glue to make a shared nothing architecture usable. If you had mutable state scattered everywhere, you couldn’t trivially restart a process to a known good state. So, they stripped out mutation to enforce isolation. The result is that Erlang creates a distributed system inside a single chip. It treats two processes running on the same core with the same level of mistrust and isolation as two servers running on opposite sides of the Atlantic.
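
                Here’s a stripped-down sketch of that “let it crash” idea (the names are mine, and a real system would use an OTP supervisor rather than hand-rolling it): a supervising process links to a worker, and when the worker dies it simply spawns a fresh one in a known good state.

                ```erlang
                -module(restart).
                -export([start/0]).

                %% trap_exit turns a linked process's crash into an ordinary message,
                %% so the damage stays contained to the worker's own private heap.
                start() ->
                    process_flag(trap_exit, true),
                    supervise().

                supervise() ->
                    Pid = spawn_link(fun worker/0),
                    register(worker, Pid),            % give the worker a well-known name
                    receive
                        {'EXIT', Pid, Reason} ->
                            io:format("worker died: ~p, restarting~n", [Reason]),
                            supervise()               % replace it with a clean process
                    end.

                worker() ->
                    receive
                        crash -> exit(boom);          % simulate a bug killing the process
                        Msg   -> io:format("worker got ~p~n", [Msg]),
                                 worker()
                    end.
                ```

                Run it with spawn(restart, start, []) in a shell, then send worker ! hello or worker ! crash; the crashed process gets replaced while everything else keeps running.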

                Learning the functional style can be a bit of a brain teaser, but I would highly recommend it. Once you learn to think this way, it will help you write imperative code as well, because you’ll have a whole new perspective on state management.

                And yeah, there are functional languages that don’t rely on a VM or a garbage collector; Carp is a good example: https://github.com/carp-lang/Carp

          • ☆ Yσɠƚԋσʂ ☆@lemmy.mlOP · 3 hours ago

            If you’re using a modern computer then you’re buying it from one of the handful of megacorps around. Apple isn’t really special in this regard.

    • illusionist@lemmy.zip · 12 hours ago

      You don’t need the fastest computer in order to open word documents or write clean code.

      • huf [he/him]@hexbear.net · 9 hours ago

        you do if you use e.g. a jetbrains IDE and your codebase is all dockerized and requires 34 separate containers to be running, and also the company makes you install “security” software that constantly scans every fucking file on the machine…