

Movies, TV, porn. That’s what I started with anyway…
20 up isn’t terrible; I made do with 10 for 8 years, and that included hosting said movies and shows to friends.


So you’re offering to manage my ~40 services, and make sure that all the dependencies are met - and none conflict…?
I mean, I enjoy hosting things myself, but I’m not going to invite issues that have been resolved by simple solutions. I’ve been around the block with dependency hell, fuck all of that. Now if I was getting paid like 6 figures instead of zero, sure boss, whatever the fuck you say boss, job security all day long. But unless you’re offering, I’m sticking with the easy way.


I don’t see any options or mention of changing instances, beyond discord canary and public testing…? I might be blind


It’s been around for a few years; I investigated it last year. It had a name change some time ago.
I can’t vouch for the code quality, but it’s too old to be slop.


I set this container up yesterday. Technically it’s running. But all the settings are in the fucking sql db, and I know fuck all about sql other than that DROP TABLE is a funny meme from xkcd. But also, ignoring the settings, I would like to point out that there is effectively no client. I mean, there are two official ones - the deprecated one and the alpha one - and the alpha one has a total of 4 releases, the newest being two years ago. How do you deprecate a client when the server is still in alpha? What the fuck? And every page screams ‘this is alpha testing software, do not use as a daily’. Also the docs are, uhh… rough. If rough was falling 4 stories into a bed of poisonous cacti. It took me 3 hours to get the container running properly and finally poking at the db. It’s as organized as my bedroom (‘it’s somewhere in this dresser, I think…’).
The idea, the potential, is brilliant. Literally everything about getting it working though…
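For anyone else stuck doing settings-in-the-database surgery, the general approach is the same whatever the project: list the tables, find the one holding config, and update rows directly. A minimal sketch using Python’s sqlite3 - the table and column names here are made up for illustration, and the actual container may well use Postgres with a totally different schema:

```python
import sqlite3

# Stand-in for the container's database; a real one would be a file path
# (or a Postgres connection via psycopg2 instead of sqlite3).
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Hypothetical settings table, purely to demonstrate the pattern.
cur.execute("CREATE TABLE settings (key TEXT PRIMARY KEY, value TEXT)")
cur.execute("INSERT INTO settings VALUES ('registration_open', 'false')")

# Step 1: find out what tables even exist. sqlite_master is SQLite's
# catalog; on Postgres you'd query information_schema.tables instead.
tables = [row[0] for row in cur.execute(
    "SELECT name FROM sqlite_master WHERE type='table'")]
print(tables)  # ['settings']

# Step 2: flip a setting in place and read it back.
cur.execute("UPDATE settings SET value='true' WHERE key='registration_open'")
value = cur.execute(
    "SELECT value FROM settings WHERE key='registration_open'").fetchone()[0]
print(value)  # true
conn.commit()
```

Still miles worse than a settings page, but at least the pattern is portable between projects.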
I have an owncast container set up; I’ve used it a few times. It combines a customizable webpage with the stream, kinda-sorta like a twitch page. Hook obs up to it and you’re off. Took me a couple hours to get everything set.
My only complaint is that the stream will fall behind - not sure if obs or oc is to blame. Perhaps my nas is underpowered, though I was testing/watching with ‘source’ quality, so it shouldn’t be transcoding. After an hour or two, watching my own stream, I can see it’s fallen back by like a minute. If I remember right it continues linearly, so more time = more discrepancy.
It’s nice though, so I haven’t bothered to try other solutions. I should re-test and see if it’s been fixed…
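For reference, the compose file for this kind of setup is tiny. A sketch of roughly what mine looks like - image name, ports, and volume path are from memory of the Owncast docs, so double-check them against the current docs before using:

```yaml
services:
  owncast:
    image: owncast/owncast:latest
    ports:
      - "8080:8080"   # web UI / public stream page
      - "1935:1935"   # RTMP ingest from OBS
    volumes:
      - ./data:/app/data   # config, recordings, etc. survive container rebuilds
    restart: unless-stopped
```

Then point OBS at rtmp://your-host/live with the stream key from the admin page and you’re live.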


You vastly overestimate boomer-era individuals (and really the entire general population). Beyond turning things on and ‘everything magically works’, most know fuck all about tech.
I know that if I croak tomorrow, while my ex-partners and a couple friends would be able to piece things together, 1) they’d have to be informed that I’m dead, 2) they’d have to be asked to help with my different hosts, and 3) they’d need to remember where I physically put the emergency password for accessing the main host (with all of the family’s important shit, like all of it). Assuming they got those three things done, they would have to convey to the ex/friend how to access the main node, and then figure out my password manager master password and the mfa (multiple options), or assume it’s inaccessible and use the physical password to retrieve the data and restore… on an OS none of them has ever used before.
Assuming all that is doable, after the restore comes maintaining the system and the containers, perpetually, as well as continuing to pay for the domains so they can access the services hosted on the nodes, plus my vps and the backup storage strategy (two different companies on two different continents, alongside the local copy).
As I have literally almost died before (I was supposed to have died, according to the doctors who saved me), I have tried to make this hypothetical situation easy, and still it would astonish me if they got past like step #2.


I might send you an email about that; I commented above (same parent comment) about having Windows Server 2022 at home. I haven’t tried to connect the vm to xpipe (no need as of yet), but I would have been bummed out at the requirement.
(it’s running on a second-hand ThinkServer tower, tucked away under a table :P)


As per https://xpipe.io/pricing :
The following systems are classified as enterprise operating systems within XPipe and connections to those systems are only possible starting from the professional plan:
(which I didn’t know about, and is a bummer since I have a Windows Server 2022 Datacenter vm, but it’s single-purpose so I don’t need to interact with it much at all)
Tossing my hat in for a +1, since I use xpipe for data transfers and basic stuff between systems - have been for like… 2 years now? It’s super handy.


Have the system do something intensive and see how much the temps climb. Let it work for a few minutes and see - that will tell you if your system is thermal throttling or not.


Yeah, that all looks okay. Did you put the system under heavy load while checking/monitoring?


Thermal throttling is when a component (usually the cpu) becomes so hot, due to inadequate cooling, that it limits its own performance to save itself from certain death - going so far as performing a hard shutdown if the situation doesn’t improve or stabilize. AMD chips usually throttle around 80C and Intel chips around 100C, but it could be a few different components. You need to run software that can properly read and report the temperature of various parts in your system to see if you might be hitting the throttling threshold.
I know of software to do this for Windows, but not for *nix.
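On Linux specifically, you don’t strictly need a dedicated tool: the kernel exposes temperatures under /sys/class/thermal (lm-sensors’ `sensors` command gives a friendlier readout of the same data). A rough Python sketch of reading the sysfs values directly - zone names vary by machine, and the 80C cutoff here is just a ballpark guess, not a real Tjmax:

```python
import glob

THROTTLE_GUESS_C = 80.0  # ballpark only; the real throttle point varies per chip

def to_celsius(raw_millideg: str) -> float:
    """sysfs reports temperatures in millidegrees C (e.g. '85000' -> 85.0)."""
    return int(raw_millideg.strip()) / 1000.0

# Each thermal zone is one sensor (CPU package, GPU, NVMe, ...).
for zone in sorted(glob.glob("/sys/class/thermal/thermal_zone*")):
    with open(zone + "/type") as f:
        name = f.read().strip()
    with open(zone + "/temp") as f:
        temp = to_celsius(f.read())
    flag = "  <-- possibly throttling" if temp >= THROTTLE_GUESS_C else ""
    print(f"{name}: {temp:.1f}C{flag}")
```

Run it once idle, then again under load (e.g. `stress --cpu $(nproc)` running in another terminal) and compare the two readings.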


My Pixel dis/enables it on a press. Long press for devices.


I was 110% a Google fan until about 9 years ago, when Project Fi was headed towards public release, and devices purchased from them (effectively through the Google Store with a different sticker on the box) kept mysteriously going missing in transit, leaving the customer with an empty box/a brick/whatever. This is annoying on its own, but G/Fi cs response to these incidents was awful. As I was a Fi user (woo closed beta gang) and I bought my devices thru Fi, I became concerned that this might happen to me. That thought quickly snowballed, and I started migrating out of the G ecosystem after realizing that their cs is useless, and that if I ever had a dispute or situation, they could just delete my account without giving a single fuck.
To pay a company money and not receive some assurance that my data won’t be wiped out of the blue while I am asleep is - in my opinion - fucking stupid as hell. I don’t know how the apple situation is, but until proven otherwise, fuck both of them for anything you care about.


I have bought several in the last decade. I’m techy and disabled, and wanted to help out around the house. I have bought from multiple manufacturers, but only purchase their top-tier offering, as I want to replace vacuuming, not just complement it. We have pulled the manual vac out three times in 9 years.
The cheaper ones are meh, but the expensive ones can truly replace vacuuming and mopping. My issue is that, across… 5 brands, none of them have lasted longer than 2 years, often with much shorter lifespans. I recently bought a Roborock with an extended warranty from RR themselves, something none of the others offer, so I’m hoping to be using it for several years to come.


Not to support this cloud-only system, but I used to own an iR (several, actually) and they can clean the entire space, pause, and cancel/dock with physical buttons.
Though it loses a large chunk of its smarts without a connection: no floor plan retention, no room selection, no 1-pass/2-pass, no knowledge of no-go lines and zones, no adjustable suction based on room…


Not personal, just venting.
As someone who arrrr’d CoD4, MW2, a couple others I don’t remember, and then a friend bought me BO3 for the co-op zombies last year (and my god that menu system is a massive piece of shit), I tried the public beta of 7 like a month ago…
I had to make an account, agree to… 5, I think, different bullshit terms of fuck-you-pay-me, enable SB, figure out the awful menu system again (I’m noticing a trend here), and while the zombies mode was… tolerable, it’s basically the same thing as prior games, just with a new price tag. And, what particularly grinds my gears (assuming the list thus far isn’t bad enough): you cannot download the entire game? Like, it requires you to stream it. I have a 4TB raid0 PCIe (add-on style) nvme card; unless the game is a literal terabyte+, I want a full copy. That drive was literally $999 when I got it a few years ago - “I paid for the entire thing and I want to use the entire thing”. Let the console players use their bandwidth to temporarily cache shit, but I have the fucking space, piss off.
Fast forward 10 years when the servers shutter and the game you paid for is missing fucking necessary assets and is thus bricked in a new moronic way. Oh, you wanted to play single player? Hook up for some LAN fun? Nah fuck you, content not available. Retry?
And this shit is $80 base? “I remember when” angry grandpa but they can get fucked. You want $80 for a game that will die in a decade, and I should be grateful for the privilege? Thank you sir, may I 'ave some more?
Maybe I’ll fire up CoD4 again, me and a friend against aimbotting bots. Fun, all local, and Activision can’t nuke it at the flick of a switch.
grumble grumble


But streaming is downloading. Sure, it’s not saved to the disk, but…


Yeah
“are you in good hands?”
(only people from the states will get this)