I know Ghost has a container deployment and uses ActivityPub.
I do wonder how many within the man/woman responses are trans, too.
Idk if that survey was mainly advertised on Lemmy, but I know that at least one instance that did a survey had maybe 2% women respondents, but more than two thirds of those were transfem.
Either way, a little disconcerting. I’m not sure what to make of that or what (if anything) to do about it
Lots of good suggestions here
I’m a bit surprised by your budget. For something just running Plex and Nextcloud, you shouldn’t need a $6k or even $3k system. I run my server on found parts adding up to just $600–$700, including (used) SAS drives. It runs probably a dozen Docker containers, a DNS server, and Home Assistant. I don’t even remember what CPU I have because it was such a small consideration when I was finding parts.
I’d recommend keeping your Synology as a simple NAS (maybe Nextcloud too, depending on how you’re using it) and then getting a second box with whatever you need for Plex. Unless you’re transcoding multiple 4K videos at once, your CPU/GPU really don’t need much power. I don’t even have a dedicated GPU in mine, though that means I’m basically unable to do live 4K transcodes (this is fine for me)
If Amazon started charging for smart-home solutions, they’d essentially be making the case for FOSS solutions like Home Assistant.
Granted, there will always be a contingent of people who are unwilling to learn how to self-manage that tech, but there are certainly enough people who are willing that they should think twice about heading down that path.
Yup, I ended up frankensteining a NAS from various Craigslist parts (I actually found a low-power business-class server motherboard that has worked out well for the purpose). I had to get a SAS HBA card and a couple of SFF-8087 cables to do the job right, and I grabbed an old gaming case from the 2010s to hold it all, but it was relatively seamless. I had one of the drives go out already, but luckily I had them in a RAID configuration with parity, so it was just a matter of swapping out the drive and rebuilding.
It’s been fun and rewarding, for sure! I’m glad I didn’t sell them like these other dweebs told me to lol
"…Are we the baddies? "
Yea, I’m with ya. Some people interpreted this as marketing hype, and while I agree that mysticism around AI is driven by this kind of reporting, I think there’s very much legitimacy to the uncertainty of the field at present.
If everyone understood it as experimental I think it would be a lot more bearable.
Knowing what you’ve made is different to understanding what it does.
Agree, but also - understanding what it does is different to understanding how it does it.
It is not a misrepresentation to say ‘we have no way of observing why this particular arrangement of ML nodes responds to a specific input differently to another arrangement’ - the best we can do is probe the network, like we do with neuron clusters, and see what each part does under different stimuli. That uncertainty is meaningful, because without a way to understand how small changes to the structure result in apparently very large differences in output, we’re basically just groping around in the dark. We can observe differences in the outputs of two different models, but we can’t meaningfully see the node activity in any way that makes sense or is helpful. The things we don’t know about LLMs are some of the same things we don’t know about neurobiology, and they’re just as significant to remedying the dysfunctions and limits of both.
The fear is that even if we believe what we’ve made thus far is an inert but elaborate Rube Goldberg machine (one that’s prone to abuse and outright fabrication) that merely looks like ‘intelligence’, we still don’t know if:
It’s frustrating that this field is getting so much more attention and resources than I think it warrants, and the reason it’s getting so much attention in a capitalist system is honestly enraging. But it doesn’t make the field any less intriguing, and I wish all discussions of it didn’t immediately get dismissed as overhyped techbro garbage.
Maybe a less challenging way of looking at it would be:
We are surprised at how much of subjective human intuition can be replicated using simple predictive algorithms
instead of
We don’t know how this model learned to code
Either way, the technique is yielding much better results than what could have been reasonably expected at the outset.
an acknowledgement of how relatively uncomplicated their structure is compared to the complexity of its output.
The interesting part is that this dated tech is producing such good results now that we throw our modern hardware at it.
That’s exactly what I mean.
Look, I get that we all are very skeptical and cynical about the usefulness and ethics of AI, but can we stop with the reactive headlines?
Saying we know how AI works because it’s ‘just predicting the next word’ is like saying I know how nuclear energy works because it’s ‘just a hot stick of metal in a boiler’
Researchers who work on transformer models understand how the algorithm works, but they don’t yet know how their simple programs can generalize as much as they do. That’s not marketing hype, that’s just an acknowledgement of how relatively uncomplicated their structure is compared to the complexity of its output.
I hate that we can’t just be mildly curious about AI, rather than either extremely excited or extremely cynical.
Yup, I was only pointing out that I was having trouble doing the same thing in my Docker Compose file (using the WEBUI_PORT env variable did not avoid port collisions at deployment)
I haven’t tried this particular compose outline though. It could also be the pirate_network they’re creating or the depends_on settings they’re using; I just haven’t played around with it yet.
Question: how are you deploying your arr apps? Do you do that in a separate compose file?
AFAIK the thing that complicates this is trying to run it behind gluetun
Docker makes it really easy to specify a unique port on deployment, but when one container shares another container’s network stack (as in the case of gluetun), the networking settings are controlled there instead, so you can’t use the normal port declarations on the app container. It’s apparently not impossible to do with gluetun, but it seems it’s not as straightforward.
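For what it’s worth, here’s a minimal sketch of the pattern I mean (service and image names are illustrative, and the VPN provider value is just a placeholder):

```yaml
services:
  gluetun:
    image: qmcgaw/gluetun
    cap_add:
      - NET_ADMIN
    environment:
      - VPN_SERVICE_PROVIDER=mullvad  # placeholder; use your own provider
    ports:
      # port mappings have to live on the gluetun service,
      # not on the container that borrows its network
      - 8080:8080
  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent
    network_mode: "service:gluetun"  # route all traffic through gluetun
    environment:
      - WEBUI_PORT=8080
    # no `ports:` section here -- it would conflict with network_mode
```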
lmao. I’m starting to really wonder what the WEBUI_PORT variable does if not exactly what you’re changing in the GUI… Someone else mentioned they got multiple instances to deploy from the same compose file by placing the gluetun service at the end of the file. I wonder if the order in which the containers are deployed is what makes this work. I’ll test more when I have the time
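If that ordering trick holds up, I’d guess the compose would look something like this (a sketch, not tested; names and ports are made up):

```yaml
services:
  qbittorrent-1:
    image: lscr.io/linuxserver/qbittorrent
    network_mode: "service:gluetun"
    environment:
      - WEBUI_PORT=8080   # each instance gets a distinct web UI port
  qbittorrent-2:
    image: lscr.io/linuxserver/qbittorrent
    network_mode: "service:gluetun"
    environment:
      - WEBUI_PORT=8081
  gluetun:                # listed last, as that commenter suggested
    image: qmcgaw/gluetun
    cap_add:
      - NET_ADMIN
    ports:
      - 8080:8080         # both instances' ports are mapped here,
      - 8081:8081         # since they share gluetun's network stack
```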
I might need to try this… I wonder if it makes a difference that the gluetun service is listed last. I noticed that trying to start the containers in the wrong order results in port collision errors, maybe this is why it works for you?
This worked!!
Shame that it’s a little bit of a runaround, but not only did this work, it also persists after restarts and updates.
I’ll be editing my post and offering it as a solution to the other places I have seen this question asked, thank you a ton!
I’m looking at hotio now.
Their documentation isn’t as comprehensive as linuxserver.io’s, so I’ll probably have to just try it out and see if it works. Looks like they also have an image with WireGuard bundled, but it’s really unclear how that works.
You misunderstand; I only mean that it’s disconcerting that there may be some reason cis women don’t find the hobby/group appealing