Yeah, the image bytes look random because they’re already compressed (unless they’re bitmaps, which is unlikely).
camelCase for non-source-code files. I find camelCase faster to “parse” for some reason (probably just because I’ve spent thousands of hours reading and writing camelCase code). For programming, I usually just use whatever each language’s standard library uses, for consistency. I prefer camelCase though.
OSMC’s Vero V looks interesting. A Pi 4 with OSMC or LibreELEC could also work. I’m probably going to do something like this pretty soon. I just set up an *arr stack last week, and I’m just using my smart TV with the Jellyfin app installed ATM.
My PC running the Jellyfin server can’t transcode some videos though; I’m probably going to put an Arc A310 in it.
I think most projects left SourceForge after they started putting adware into their downloads.
I’ve used this before: https://github.com/wilicc/gpu-burn?tab=readme-ov-file
Yeah, it may be a driver issue; Nvidia/PyTorch handles OOM gracefully on my system.
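By “gracefully” I mean a failed allocation just surfaces as a catchable Python exception instead of crashing the process or wedging the driver. A minimal sketch of what I’d expect (the tensor size is made up, just big enough to force an OOM on most cards):

```python
import torch

try:
    # Deliberately over-allocate VRAM (~64GB of float32) to trigger an OOM.
    huge = torch.empty((1 << 34,), dtype=torch.float32, device="cuda")
except RuntimeError as e:
    # Recent PyTorch raises torch.cuda.OutOfMemoryError, a RuntimeError subclass.
    print(f"Allocation failed cleanly: {e}")
    torch.cuda.empty_cache()  # release cached blocks and carry on
```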
That seems strange. Perhaps you should stress-test your GPU/system to see if it’s a hardware problem.
SD works fine for me with: Driver Version: 525.147.05, CUDA Version: 12.0
I use this docker container: https://github.com/AbdBarho/stable-diffusion-webui-docker
You will also need to install the NVIDIA Container Toolkit if you use Docker containers: https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html
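Once the toolkit and drivers are in place, a quick sanity check from inside the container is just asking PyTorch what it can see (a minimal sketch, assuming PyTorch is installed in the image):

```python
import torch

# Sanity check that the driver / container toolkit setup actually exposes the GPU.
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
    print("CUDA version PyTorch was built against:", torch.version.cuda)
```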
I’ve had unattended upgrades running on a home server for a couple years and haven’t had any issues.
They’re good for media centers, since they support 4K HDR. You can also use Moonlight to stream games from a PC. GPIO is useful, but I guess the Pi is overpowered for most GPIO use cases at this point.
I like the Turris Omnia and (highly configurable) Turris Mox. They come with OpenWrt installed.
Some automotive infotainment systems run on Linux.
Old dual-core Pentium, lol (Haswell era I think, or something from around that time), 16GB RAM, and 5x 16TB SATA hard disks.
IDK, it looks like 48GB cloud pricing would be ~$0.35/hr => ~$255/month. Used 3090s go for about $700. Two 3090s would give you 48GB of VRAM and cost $1400 (I’m assuming you can split the model across the two cards with Llama; I’ve never tried running an LLM, but it should be possible and work well). So the break-even point would be <6 months. Hmm, but if serverless works well, that could be pretty cheap. It would probably take a few minutes to pull and load a ~48GB model on every cold start though?
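Back-of-the-envelope math, using the assumed prices above (none of this is authoritative):

```python
# Break-even estimate: renting a 48GB cloud GPU 24/7 vs. buying two used 3090s.
cloud_rate_per_hour = 0.35           # assumed price for 48GB of cloud VRAM
hours_per_month = 730                # average hours in a month
cloud_per_month = cloud_rate_per_hour * hours_per_month

used_3090_price = 700                # rough used-market price per card
hardware_cost = 2 * used_3090_price  # two cards = 48GB total

break_even_months = hardware_cost / cloud_per_month
print(f"Cloud: ~${cloud_per_month:.0f}/month; break-even after ~{break_even_months:.1f} months")
# -> Cloud: ~$256/month; break-even after ~5.5 months
```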
ZFS on TrueNAS SCALE (enables RAID-like functionality, along with many other features).
Ext4 or NTFS on everything else, simply because it’s the default and I don’t use any advanced features.
This is likely just stock manipulation. The interview was in June, but it was only released the day before TSMC’s earnings report.
Server-side rendering looks like it could be useful. I imagine SSR could be used for graceful degradation, so what would normally be a single-page application could still work without JavaScript. Though I’ve never tried SSR, and nobody seems to care about graceful degradation anymore.
They wouldn’t against Israel, for the same reasons they wouldn’t against the U.S. if the U.S. did the same. I’m pretty sure the U.S. is violating international law ATM with regard to the border it shares with Mexico, and I expect no action will be taken against it.
Ah, a three star programmer.
It’s good at refactoring smaller bits of code. The longer the input, the more likely it is to make errors (and you should prefer starting a new chat over continuing a long one for the same reason). It’s also pretty good at translating code to other languages (e.g. MySQL->PG, Python->C#), reading OpenAPI JSON definitions and creating model classes to match, and stuff like that.
Basically, it’s pretty good when it doesn’t have to generate anything that requires complex logic. If you ask it about tasks, languages, and libraries that it has likely trained a lot on (i.e. the most popular stuff in FOSS and example repos), it doesn’t hallucinate libraries too much. And GPT4 is a lot better than GPT3.5 at coding tasks; GPT3.5 is pretty bad. GPT4 is a bit better than Copilot as well.
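For the OpenAPI case, the output I mean is just boring mapping work. Given a schema fragment, it reliably produces a matching model class along these lines (the schema and names here are hypothetical, not from any real API):

```python
# Hypothetical example of the kind of model class generated from an OpenAPI schema like:
#   "User": {"type": "object",
#            "properties": {"id": {"type": "integer"},
#                           "email": {"type": "string"},
#                           "createdAt": {"type": "string", "format": "date-time"}},
#            "required": ["id", "email"]}
from dataclasses import dataclass
from datetime import datetime
from typing import Optional


@dataclass
class User:
    id: int
    email: str
    created_at: Optional[datetime] = None  # "createdAt" in the API payload
```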
This is more complicated than some corporate infrastructures I’ve worked on, lol.