

I’d like to be able to get touchpads with physical buttons on laptops. Very few manufacturers do them, especially if you want three.
Off-and-on trying out an account over at @tal@oleo.cafe due to scraping bots bogging down lemmy.today to the point of near-unusability.




I once had dinner with a Stanford professor, years back, who said he liked teaching in Python because he spent way less time teaching the language and more time on the higher-level stuff he was actually trying to get across than when he taught in C++. Lower barrier to entry for new users. I’d guess that in the intervening years, a lot of classes have decided to use it for similar reasons. If you want to teach, I dunno, signal processing, and your students maybe don’t have a great handle on the language yet, you want to be spending time on the signal processing, not on language concepts.


My impression from what code I’ve looked at is that little computation is done by the Python code itself (the heavy number-crunching happens in compiled libraries underneath), so there’s little by way of gains to be had by trying to use something higher-performance, which eliminates a lot of the reason one would use some other languages.
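As a toy sketch of what I mean (exact numbers will vary by machine, and this is just an illustration), timing the same sum of squares done in pure Python versus handed off to NumPy’s compiled code:

# Toy comparison: the same sum of squares in pure Python vs. NumPy,
# where NumPy does the arithmetic in compiled code under the hood.
import timeit

import numpy as np

data = list(range(1_000_000))
arr = np.arange(1_000_000, dtype=np.int64)

pure_python = timeit.timeit(lambda: sum(x * x for x in data), number=10)
with_numpy = timeit.timeit(lambda: np.dot(arr, arr), number=10)

print(f"pure Python: {pure_python:.3f}s, NumPy: {with_numpy:.3f}s")

Most ML code looks like the second case: the Python layer is mostly orchestrating calls into NumPy, PyTorch, and the like.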
Python’s cross-platform, albeit with a Unix heritage, so it doesn’t create barriers there. It’s already widely-used, a mature language that isn’t going anywhere and with a lot of people who know it.
It’s got an ecosystem for distributing libraries over the network, and there’s a lot of new code going out and being distributed rapidly.
Python isn’t statically-typed. Static typing can help write more-robust code. If you’re writing, say, the next big webserver, I’d want to have that checking. But for code that may often be running internally in a research project — and this is an area with a lot of people doing research — a failure just isn’t that big a deal. So, again, some of the reasons that one might use another language aren’t there.
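As a made-up example of the kind of thing static checking catches before anything runs, while Python only notices at runtime (the function here is hypothetical):

# Hypothetical example: Python accepts this happily and only fails at runtime,
# when scale_reading() actually gets called with a string. With the annotations
# present, a static checker (e.g. mypy) would flag the call ahead of time.
def scale_reading(value: float) -> float:
    return value * 0.5 + 1.0

reading = "42.7"  # e.g. a field pulled straight out of a CSV, still a string
print(scale_reading(reading))  # TypeError only shows up here, at runtime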
And I imagine that there’s also inertia. Easier to default to use what others would use.
If you have another language in mind, you might mention that, see if there might be more-specific things. I could come up with more meaty plausible guesses if what you were wondering is something like “why isn’t everyone using SmallTalk?” or something.


Based on the screenshot in the article, the OLED model has longer playtime; Valve says that the LCD model has “2-8 hours of gameplay” and the OLED “3-13 hours of gameplay”.
Though they do also say that this is “context-dependent”, and I’m sure that you can come up with pathological cases for each. Like, a game with a nearly all-white screen running at 90 Hz is probably a relative worst case for the OLED in terms of battery life, and a game with a dark screen running at a locked 60 Hz framerate is probably a relative worst case for the LCD.


If they’ve got their heart set on an LCD model, it looks like eBay has a number of secondhand ones.
I don’t own a Steam Deck or intend to — I have more than enough portable electronic devices capable of running games that I lug around already — but if I were going to get one, it looks like the OLED model has a 25% larger battery, which would be interesting to me.
Bind mounts aren’t specific to Docker. You’re asking specifically about bind mounts as used by Docker?


I don’t know what YouTube Rewinds are, but are these them? I seem to be able to view them.
https://www.youtube.com/playlist?list=PLTTASUq6isfvyOXnYzM8Jgc28tET8PMc4


There were still many flat surfaces in the world that did not yet have advertisements displayed on them.
From my /etc/resolv.conf on Debian trixie, which isn’t using openresolv:
# Third party programs should typically not access this file directly, but only
# through the symlink at /etc/resolv.conf. To manage man:resolv.conf(5) in a
# different way, replace this symlink by a static file or a different symlink.
I mean, if you want to just write a static resolv.conf, I don’t think that you normally need to have it flagged immutable. You just put the text file you want in place of the symlink.
Also, when you talk about fsck, what could be good options for this to check the drive?
I’ve never used proxmox, so I can’t advise how to do so via the UI it provides. As a general Linux approach, though, if you’re copying from a source Linux filesystem, it should be possible to unmount it — or boot from a live boot Linux CD, if that filesystem is required to run the system — and then just run fsck /dev/sda1 or whatever the filesystem device is.
I’d suspect that too. Try just reading from the source drive or just writing to the destination drive and see which causes the problems. Could also be a corrupt filesystem; probably not a bad idea to try to fsck it.
IME, on a failing disk, you can get I/O blocking as the system retries, but it usually won’t freeze the system unless your swap partition/file is on that drive. Then, as soon as the kernel goes to pull something from swap on the failing drive, everything blocks. If you have a way to view the kernel log (e.g. you’re looking at a Linux console or have serial access or something else that keeps working), you’ll probably see kernel log messages. Might try swapoff -a before doing the rsync to disable swap.
At first my suspicion was temperature.
I’ve never had it happen, but it is possible for heat to cause issues for hard drives; I’m assuming that OP is checking CPU temperature. If you’ve ever copied the contents of a full disk, the case will tend to get pretty toasty. I don’t know if the firmware will slow down operation to keep temperature sane — rotational drives do normally have temperature sensors, so I’d think that it would. Could try aiming a fan at the things. I doubt that that’s it, though.


GPU prices are coming to earth
https://lemmy.today/post/42588975
Nvidia reportedly no longer supplying VRAM to its GPU board partners in response to memory crunch — rumor claims vendors will only get the die, forced to source memory on their own
If that’s true, I doubt that they’re going to be coming to earth for long.


Prices rarely, if ever, go down to a meaningful degree.
Prices on memory have virtually always gone down, and at a rapid pace.
https://ourworldindata.org/grapher/historical-cost-of-computer-memory-and-storage



If consumers aren’t going to or are much less likely to upgrade, then that affects demand from them, and one would expect manufacturers to follow what consumers demand.


If you have or can create a LoRA trained on images of the character you’re presenting, that may be helpful. Or if you have a checkpoint model trained on that character. Would be like having a character that the base model is trained on.
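For what that might look like in practice, here’s a rough sketch using the Hugging Face diffusers library with a Stable Diffusion checkpoint; the model name, LoRA path, and trigger word are all placeholders:

# Sketch: applying a character LoRA on top of a Stable Diffusion base model
# with Hugging Face diffusers. Names and paths here are placeholders.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # base checkpoint (placeholder)
    torch_dtype=torch.float16,
).to("cuda")

# LoRA weights trained on images of the character
pipe.load_lora_weights("path/to/lora_dir", weight_name="character_lora.safetensors")

image = pipe(
    "mycharacter standing in a forest, full body",  # include the LoRA's trigger word
    num_inference_steps=30,
).images[0]
image.save("character.png")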


I remember when it wasn’t uncommon to buy a prebuilt system and then immediately upgrade its memory with third-party DIMMs to avoid paying the PC manufacturer’s premium on memory. Seeing that price relationship becoming inverted is a little bonkers. (Though IIRC Framework’s memory on prebuilt systems didn’t have much of a premium.)
I also wonder if it will push the market further towards systems with soldered memory or on-core memory.


You can have applications where wall clock time is not all that critical but large model size is valuable, or where a model is very sparse and so does little computation relative to its size, but for the major applications, like today’s generative AI chatbots, I think that that’s correct.


Last I looked, a few days ago on Google Shopping, you could still find some retailers that had stock of DDR5 (I was looking at 2x16GB, and you may want more than that) and hadn’t jacked their prices up, but if you’re going to buy, I would not wait longer, because if they haven’t been cleaned out by now, I expect that they will be soon.


My limited experience is that keeping a character stable across a number of images is a weak point today, and I wouldn’t be confident that genAI is a great way to go about it. If you want to try it, here’s what I’d go with:
If you can get images with consistent outlines via some other route, you can try using ControlNet to generate the rest of the image (there’s a rough sketch of that after this list).
If you just need slight variations on a particular image, you can use inpainting to regenerate the relevant portions (e.g. an image with a series of different expressions).
If you want to work from a prompt, try picking a real-life person or character as a starting point; that may help, as models have been trained on them. Best is if you can pin them to one point in time (e.g. “actor in popular movie”). If you have a character description that you’re slapping into each prompt, only describe elements that are actually visible in a given image.
I’ve found that a consistent theme is something that is much more achievable, in that you can add “by <artist name>” to your prompt terms for any artist that the model has been trained on a number of images from. If you’re using a model that supports prompt term weighting (e.g. Stable Diffusion), you can increase the weight here to increase the strength of the effect. Flux doesn’t support prompt term weighting (though it’s really aimed at photographic images anyway). It’s possible to blend multiple artists or genres as prompt terms.
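For the ControlNet route mentioned above, a rough diffusers sketch; the canny-edge ControlNet and file names are just examples, and you’d feed it an edge/outline image of your character:

# Sketch: conditioning generation on a consistent outline with ControlNet.
# Model names and file paths are examples; the input is an edge/outline image.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

outline = load_image("character_outline.png")  # canny-style edge map of the character
image = pipe(
    "mycharacter, watercolor illustration",
    image=outline,
    num_inference_steps=30,
).images[0]
image.save("controlled.png")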
I’m actually surprised that nobody ever fundamentally reinvented text input for touchscreens in a way that caught on.