• 1 Post
  • 748 Comments
Cake day: October 4th, 2023
  • Setting aside the specifics of the case, from a UI standpoint I do think that cars either need to support being left in park without the climate control eventually cutting off, or need to make it so clear that this will happen that it would be very difficult for a user to miss, because this is a legitimate example of a “fail-deadly” feature.

    IIRC from reading comments from people who have slept in their car and very much want the ability to leave the climate control system active, at least some Toyota models do support leaving the climate control active for extended periods of time, but the car needs to be in “Ready” mode. It was not immediately obvious to users that this was the case.








  • I’m not familiar enough with Cloudflare’s error messages (or deployment with Cloudflare) to know exactly what behavior that corresponds to. My guess is that Cloudflare can open a TCP connection to port 443 on what it thinks is your server, but it isn’t getting HTTPS on that port, or your server isn’t configured to serve the right certificate for that hostname, or the web server software running on it is otherwise broken. It might also be some sort of intervening firewall.

    I don’t know where your actual server is; it may not even be accessible to me. But if you have a Linux machine that can talk to it directly (perhaps the server itself), you should be able to see what certificate it’s handing back via:

    $ openssl s_client -showcerts -servername akaris.space -connect IP-address-of-actual-server:443
    

    That’ll try to establish a TLS connection, send the specified server name (so that if you’re using vhosting on the server, it knows which site to return), and then report what certificate the web server used. That would probably be my first diagnostic step if I suspected a problem with the TLS handshake on a machine I was running.
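    If you just want the certificate’s subject and validity window out of that, you can pipe the output through the x509 subcommand. The IP below is a placeholder (203.0.113.10 is a documentation address); substitute your server’s actual IP:

```shell
# Hypothetical example: decode the leaf certificate that the origin
# server returns, showing who it's for and when it expires.
openssl s_client -showcerts -servername akaris.space \
    -connect 203.0.113.10:443 </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -issuer -dates
```

    If the subject’s CN/SAN doesn’t cover akaris.space, or the dates show an expired certificate, that’s your answer right there.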

    That might provide enough information to you to let you resolve the issue yourself.

    Beyond that, it probably isn’t possible to say much more without knowing how your server is set up and what actually is working. You can censor IP addresses if you want to keep them private.
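    One more check worth doing from a machine that can reach the origin directly: ask curl to pin the hostname to the origin IP, which takes Cloudflare out of the picture entirely (again, the IP is a placeholder for your server’s address):

```shell
# Hypothetical example: connect straight to the origin while still
# sending the real hostname in both SNI and the Host header.
curl -v --resolve akaris.space:443:203.0.113.10 https://akaris.space/
```

    If this works but going through Cloudflare doesn’t, the problem is on the Cloudflare side of the configuration; if it fails the same way, the origin itself is misconfigured.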




  • Sure, but I think that the type of game is a pretty big input. Existing generative AI isn’t great at portraying a consistent figure in multiple poses and from multiple angles, which is something that many games are going to want to do.

    On the other hand, I’ve also played text-oriented interactive fiction where there’s a single illustration for each character. For that, it’d be a good match.

    AI-based speech synth isn’t as good as human voice acting, but it’s gotten pretty decent if you don’t need to put a lot of emotion into things. It’s not capable of, say, doing Transistor, which relied heavily on the voice acting. But it could be a very good choice for adding new material for a character in an old game where the actor may not be around or may have had their voice change.

    I’ve been very impressed with AI upscaling. I think that upscaling textures and other assets probably has a lot of potential to take advantage of higher resolution screens. Maybe one might need a bit of human intervention, but a factor of 2 increase is something that I’ve found that the software can do pretty well without much involvement.



  • I’m assuming that it’s some sort of component from the air conditioner, but damned if I know what it is. It looks like it has power plugs on it, and someone else mentioned “caps”, so maybe a capacitor, though I wasn’t aware that there was some kind of plug standard for large removable capacitors.

    kagis

    Yeah, this capacitor looks similar.

    EDIT: Apparently air conditioners can use large capacitors:

    https://www.amazon.com/Capacitor-Conditioner-Multi-Purpose-Capacitor-5-Warranty/dp/B092ZQ3Y3N

    Capacitor for Air Conditioner 5 uf MFD 370 or 440 Volt VAC, Multi-Purpose Round Capacitor for AC Motor Run or Fan Motor Start or Condenser Straight

    EDIT2: Oh, I bet I know what it’s for, given the “Fan Motor Start” and what I assume is a misspelled “Condenser Start” text on the Amazon listing. Some hardware draws a lot of juice when starting up; laser printers are prone to this, for example. The references above are to mechanical, moving components, which may need extra power to overcome static friction and get the parts moving initially; once moving, they face (lesser) kinetic friction. One option is to just draw a ton of power from the line, but that increases the peak power demands of the device. Another option, gentler on whatever circuit or external power source is providing the power, is to charge a capacitor for a bit; that lets the device deliver a big surge of power for a moment without placing higher peak demands on the external power source. It adds to device cost, but limits peak draw.
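    For a rough sense of scale (my arithmetic, assuming the listed 5 µF charged to the rated 440 V):

```latex
E = \tfrac{1}{2} C V^{2}
  = \tfrac{1}{2}\,(5\times10^{-6}\,\mathrm{F})\,(440\,\mathrm{V})^{2}
  \approx 0.48\,\mathrm{J}
```

    Half a joule isn’t much energy, but dumped over a few milliseconds it’s a burst on the order of tens to hundreds of watts, which is the point.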


  • I’m sorry, you are correct. The syntax and interface mirror Docker’s, and one can run ollama in Docker, so I’d thought that it was a thin wrapper around Docker, but I just went to check, and you are right — it’s not running in Docker by default. Sorry, folks! Guess now I’ve got one more thing to look into getting inside a container myself.



  • tal@lemmy.today to Selfhosted@lemmy.world · “I’ve just created c/Ollama!”

    While I don’t think that llama.cpp is specifically a special risk, I think that running generative AI software in a container is probably a good idea. It’s a rapidly-moving field with a lot of people contributing a lot of code that very quickly gets run on a lot of systems by a lot of people. There’s been malware that’s shown up in extensions for (for example) ComfyUI. And the software really doesn’t need to poke around at outside data.

    Also, because the software has to touch the GPU, it needs a certain amount of outside access. Containerizing that takes some extra effort.

    https://old.reddit.com/r/comfyui/comments/1hjnf8s/psa_please_secure_your_comfyui_instance/

    ComfyUI users have been hit time and time again with malware from custom nodes or their dependencies. If you’re just using the vanilla nodes, or nodes you’ve personally developed or vet yourself every update, then you’re fine. But you’re probably using custom nodes. They’re the great thing about ComfyUI, but also its great security weakness.

    Half a year ago the LLMVISION node was found to contain an info stealer. Just this month the ultralytics library, used in custom nodes like the Impact nodes, was compromised, and a cryptominer was shipped to thousands of users.

    Granted, the developers have been doing their best to try to help all involved by spreading awareness of the malware and by setting up an automated scanner to inform users if they’ve been affected, but what’s better than knowing how to get rid of the malware is not getting the malware at all.

    Why Containerization is a solution

    So what can you do to secure ComfyUI, which has a main selling point of being able to use nodes with arbitrary code in them? I propose a band-aid solution that, I think, isn’t horribly difficult to implement that significantly reduces your attack surface for malicious nodes or their dependencies: containerization.

    Ollama means sticking llama.cpp in a Docker container, and that is, I think, a positive thing.

    If there were a close analog to ollama, like some software package that could take a given LLM model and run it in podman or Docker or something, I think that’d be great. But I think that putting the software in a container is probably a good move relative to running it uncontainerized.
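    As a sketch of what that can look like (the image name, model path, and CDI device string are all assumptions; they depend on your setup and on the NVIDIA Container Toolkit having generated a CDI spec for your GPU):

```shell
# Hypothetical example: run llama.cpp's HTTP server in a rootless podman
# container, passing through the GPU via CDI and mounting models read-only.
podman run --rm \
  --device nvidia.com/gpu=all \
  --security-opt label=disable \
  -v "$HOME/models:/models:ro" \
  -p 8080:8080 \
  ghcr.io/ggml-org/llama.cpp:server-cuda \
  -m /models/model.gguf --host 0.0.0.0 --port 8080
```

    The container gets the GPU and the model directory and nothing else, which is most of what you want out of containerizing this kind of software.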



  • I like self checkout. I struggle with talking to people and it can really drain on me so it’s a godsend to have if I only need to run in for a few things.

    Valid take.

    That being said, I’d probably prefer human checkout unless we can get a more-automated form of self checkout. Self checkouts have gotten a lot better since the early days, but human checkers are still faster than I am at the self-checkout and if a human is doing the checkout, I can dick around on my phone or whatever.

    Cost savings are nice, but cost savings on my groceries just aren’t a massive concern for me. There just isn’t that much human time being expended on checking my groceries out. I don’t have strong feelings about the human interaction one way or another.

    Maybe one day, we can get some sort of robotic arm setup that can do checkouts as well as a human checker, and then I’d quite happily be in the “machine” camp.


  • If you had the wedding photos in question professionally taken, the photographer, if they’re still around, might have copies. I don’t know whether they retain them, but I suppose asking can’t hurt.

    This place says up to a year:

    https://www.wanderlustportraits.com/how-long-photographers-keep-photos/

    Photographers typically keep photos of their clients for a minimum of 90 days and up to a full year as part of standard practice; however, if this is important to you, review the contract and ask your professional.

    This guy says forever:

    https://old.reddit.com/r/WeddingPhotography/comments/96ckow/how_long_do_you_hold_on_past_wedding_photos/

    I keep ALL files on two 16 TB drives. Those drives never get wiped, and I will always keep two copies even when they fill up: one internal on SATA for reference and one off-site. When I first started shooting, I was cheap and deleted RAWs and just kept high-res JPEGs. I have clients coming back for albums, and I am stuck re-editing the JPEGs to match in the albums. Lesson learned. If you do want to consolidate, then keep the RAWs of the edited JPEGs and delete the unused ones. But that’s more hassle than the cost of storing unused RAWs. You can also rely on cloud storage, but you never know if you’ll switch cloud providers or move on to another business and want to stop paying cloud fees. For high-volume photographers, it becomes wise to invest in tape drives. HDDs have lives of about 10 years, so eventually all those old drives will need to be transferred to newer drives. Budget this into your bottom line.


  • I was consolidating data from multiple old drives before a major move—drives I had to discard due to space and relocation constraints. The plan was simple: upload to OneDrive, then transfer to a new drive later.

    I’m assuming that the reason he didn’t just do the transfer to a new drive instead of to OneDrive (which seems like it’d be more straightforward) is that the new drive was also going to be a system disk, not just hold his data.

    I think it would have been a good idea to get a second new drive and do that transfer as well, just so that there’s a backup. It doesn’t really sound like the user was planning to wind up with a backup of his data, or for that matter that he had a backup to start with.

    Maybe OneDrive locking the account was unexpected, but drives can fail or be inadvertently erased or whatever. If you’ve got thirty years of irreplaceable data that you really badly want to keep, you want more than one copy of it. The cost of a drive to store it is small compared to the cost involved in producing that data.
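    A minimal sketch of the two-copies idea, with checksum verification before anything gets deleted (the paths and filenames are hypothetical):

```shell
# Hypothetical example: make two independent copies of a directory and
# verify both against a checksum manifest before trusting them.
mkdir -p /tmp/demo/source /tmp/demo/copy1 /tmp/demo/copy2
echo "irreplaceable data" > /tmp/demo/source/photo001.raw

# Build a manifest from the originals.
( cd /tmp/demo/source && sha256sum photo001.raw > /tmp/demo/manifest.sha256 )

# Two copies, ideally on separate physical drives.
cp -a /tmp/demo/source/. /tmp/demo/copy1/
cp -a /tmp/demo/source/. /tmp/demo/copy2/

# Verify each copy; only discard the originals after both report OK.
( cd /tmp/demo/copy1 && sha256sum -c /tmp/demo/manifest.sha256 )
( cd /tmp/demo/copy2 && sha256sum -c /tmp/demo/manifest.sha256 )
```

    The same pattern scales up with rsync and real mount points; the important part is verifying the copies against checksums before the source drives go away.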