• 0 Posts
  • 43 Comments
Joined 2 years ago
Cake day: July 1st, 2023

  • I’m far from an expert, sorry, but my experience has been so far so good (literally wizard-configured in Proxmox, set and forget), even through a single disk loss. Performance for VM disks was great.

    I can’t see why regular file storage would be any different.

    I have 3 disks, one on each host, with Ceph keeping 2 copies (tolerant to 1 disk loss) distributed across them. That’s practically what I think you’re after (the pool settings are sketched at the end of this comment).

    I’m not sure about seeing the file system while all the hosts are offline, but if you’ve got any one system with a valid copy online, you should be able to see it. I can. But my emphasis is generally on getting the host back online.

    I’m not 100% sure what you’re trying to do, but a mix of Ceph as remote storage plus something like Syncthing on an endpoint to send stuff to it might work? Syncthing might just work without Ceph.

    I also run ZFS on an 8-disk NAS that’s my primary storage, with shares for my Docker containers to write to and my media server to read from. That’s just TrueNAS Scale, and it handles data in a similar spirit. ZFS is also very good, but until Scale came out it wasn’t really possible to do the “add a compute node to expand your storage pool” model, which is how I want my VM hosts to work. Scaling ZFS out like that looks way harder than Ceph.

    Not sure if any of that is helpful for your case, but I recommend trying something if you’ve got spare hardware: see how it goes on dummy data, then blow it away and try something else. See how it acts when you take a machine offline. When you know what you want, do a final blow-away and implement it the way you learned works best.
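    If it helps, the replication I described boils down to a couple of pool settings. A minimal sketch, assuming a pool named vmpool (hypothetical; the Proxmox wizard normally sets this up for you):

        # 2 replicas across the cluster; stay writable with 1 copy left
        ceph osd pool set vmpool size 2
        ceph osd pool set vmpool min_size 1
        # confirm the replica count
        ceph osd pool get vmpool size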


  • 3x Intel NUC 6th-gen i5 (2 cores each), 32 GB RAM. Proxmox cluster with Ceph.

    I just ignored the stated limitation and tried a single 32 GB SODIMM once (out of a laptop) and it worked fine, but I went back to 2x 16 GB DIMMs since the limit was still the 2-core CPU. Lol.

    I’ve been running that cluster for 7 or so years now, since I bought them new.

    My point is you can happily run off shit-tier hardware, since three nodes gives redundancy and enough performance. I’ve run entire proofs of concept for clients off them: dual domain controllers and a full RDS stack (gateway, broker, session hosts, FSLogix, etc.), back when MS had only just bought that tech. Meanwhile my home “arr” stack just plugs along in Docker containers. Even my OPNsense router is virtual, running on them. Just get a proper managed switch and bring the internet in on a VLAN to the guest VM on a separate virtual NIC (roughly sketched below).

    Point is, it’s still capable today.
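    For the curious, the Proxmox side of that VLAN trick is roughly this (a sketch; eno1 and VLAN 100 are assumptions, substitute your NIC and your WAN VLAN):

        # /etc/network/interfaces (excerpt) -- VLAN-aware bridge
        auto vmbr0
        iface vmbr0 inet manual
            bridge-ports eno1
            bridge-stp off
            bridge-fd 0
            bridge-vlan-aware yes
            bridge-vids 2-4094

        # then give the OPNsense VM a second virtual NIC on vmbr0,
        # tagged with VLAN 100 (the assumed WAN VLAN), in the VM's hardware tab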


  • This sounds unbelievable, like a ship turning at the last moment to avoid an iceberg. It’s an incredibly light sentence, showcasing the country’s lack of interest in protecting women’s rights even while declaring the intent to do so in the ruling.

    If my partner was attacked, lost her hearing, and had to attend court multiple times to defend her right to safety, and the perpetrator got 3 years? I’d be furious.

    I know she’d be devastated. The times she felt unsafe already leave such a big impact, let alone a realised attack.

    Anyway. I do hope it’s just a positive sign, and that all it will take is a bit more time. I want to believe it’s positive. But it’s wild to set what I’d like to believe are obvious human rights (not being attacked to the point of disability by an unprovoked stranger) against having to trust the justice system, after the fact, to punish and (theoretically) deter.

    Anyway, long rant. I’m processing it because I’d believed Korea was better than that. Not every individual, just at least the culture and the law.




  • Other than legacy and UEFI, does it have a CSM (Compatibility Support Module) option? An option to enable USB initialisation earlier in boot, e.g. “wait for USB initialisation”?

    Some “boot faster” options reorder or skip device initialisation so it doesn’t hold the system back, which can mean USB devices aren’t ready in time to be booted from.

    Though I’m really running out of suggestions… I can imagine you’re pretty frustrated. I know my Dell laptop was a pain: finding the right settings to boot from USB, and silencing the stupid 100 dB beep on boot interruption.



  • I suggest a few more things:

    Try a different brand of USB stick. Some motherboards just don’t play well with certain drives; in fact, a Lenovo server I rebuilt refused to boot off particular USB sticks.

    Some motherboards won’t boot off certain USB ports. Sometimes the extra ports hang off another controller that initialises too slowly to be bootable.

    Try a plain, known-working Ubuntu live USB to remove Ventoy from the equation (writing one is sketched below). Ubuntu ships a Microsoft-signed shim, so Secure Boot accepts it out of the box. I think that’s how it works; UEFI is a mess.

    Try to start isolating all the different factors, and there could be more. It doesn’t necessarily mean anything definitive if it works on another machine.
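    For reference, writing a plain Ubuntu live USB without Ventoy looks like this (a sketch; the ISO name and /dev/sdX are placeholders, double-check the target device first):

        # find the USB stick -- dd to the wrong device destroys data
        lsblk
        # write the ISO straight to the stick
        sudo dd if=ubuntu-24.04-desktop-amd64.iso of=/dev/sdX bs=4M status=progress conv=fsync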



  • For me, I want to know how much frame latency there is, because I’m suspicious of it and want to try things and see the effect, and I just don’t know how to get that information into an OSD the way I can with MSI Afterburner.

    If someone knows of something that can do this on Linux, please reply!

    Instead I just stopped all competitive and cooperative gaming, which is a bit of a shame. Sometimes I’ll load up Windows to join friends, but usually by the time I’ve updated whatever game it is, I’ve gotten over it.

    Don’t get me wrong: hiccups aside, I’m very happy, which is why I’m in Linux most of the time. But it’s not always a wonderful world.




  • Well, what I really wonder is whether, because the kernel can now include it, an install becomes more hardware-agnostic. Like literally pulling my disk out of an Nvidia gaming machine and plugging it into my AMD machine with fully working graphics. If so, this is good for me, since I boot my OS from a USB-C NVMe SSD on my work and home machines and laptops. All three currently have Nvidia cards and this works OK. I keep some games on it to chill and take a break, and my work’s core OS (MDM etc.) stays unmodified. I like it that way.

    I realise this is not a terribly common case, but I could see it helping graphically accelerated VM migrations too, not that I have many. Less work in transitioning gives greater flexibility.



  • Sorry, to clarify: updates come as security updates or as feature updates. If I’ve already got a standard operating environment (SOE) with all the features I and my staff need to do our work, I don’t need new features.

    I then have to watch my CVE trackers to know when software updates are needed; every device running that software gets updated, and the SOE is updated with it.

    I could go on a rant about how the Linux kernel has recently made my life harder: the policy now is that any kernel bug might be a security vulnerability, so I have infinite noise in my CVE feed, which in turn makes deciding how to mitigate security issues hard. But that is beyond this discussion.

    So, in short, I’m only talking about applying security fixes when you update, not new software and features. Live patching security vulnerabilities is pretty much free: low effort, low impact, and in my personal opinion absolutely critical. Feature patching, though, can be disruptive and leaves little to be gained; it should really only be driven by a request for that feature, at which point it would also go into an updated SOE. (There’s a sketch of security-only updates below.)
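    As a concrete example, on Debian/Ubuntu automatic updates can be pinned to the security pocket only (a sketch, assuming unattended-upgrades; other distros have their own equivalents):

        # install and enable the service
        sudo apt install unattended-upgrades
        sudo dpkg-reconfigure -plow unattended-upgrades

        # /etc/apt/apt.conf.d/50unattended-upgrades (excerpt)
        # allow only the security origin, so feature updates never auto-apply
        Unattended-Upgrade::Allowed-Origins {
            "${distro_id}:${distro_codename}-security";
        };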





  • They’ve probably been using it for years. For more than a decade now I’ve been using Ubuntu as my main Linux distribution, since I have work to do and I’ll get to doing that work faster in Ubuntu than in any other distribution.

    Why did I start with Ubuntu? 10+ years ago Ubuntu was light years ahead in community support for issues. Again, I had work to do; I wasn’t a hobbyist playing “fuck Windows”.

    In fact, look at things like ROS, where you can get going with “apt install ros-noetic-desktop” and build your robotics stuff instantly; every dependency and all the other tooling is there too (the full setup is sketched at the end of this comment). Sure, a bunch of people would now say “use Nix”, but my autonomous robotics project doesn’t care: I’m trying to get lidar, cameras, motors, and SLAM algorithms to work. I don’t want to care or think about compiling ROS for some Arch derivative.

    I won’t say I don’t dabble with other distributions, but if I’ve got work to do, I’m going to use the tools I already know better than the back of my hand. And at the time I was selecting those tools, Ubuntu had it answered, and it’s been stable enough to stay essentially unchanged for basically a decade.

    Oh, and if I needed to, I could pay for support so the CEO can hear that the risk is covered too (despite almost every other vendor we pay never actually resolving an issue before we find and fix it ourselves… though I do like being able to say “we have raised a ticket with vendor X and are waiting on a reply”).
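    For reference, the ROS Noetic setup I mean is only a few commands on Ubuntu 20.04 (a sketch from memory; check the ROS wiki for the current repository and key URLs):

        # add the ROS apt repository and key
        sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu $(lsb_release -sc) main" > /etc/apt/sources.list.d/ros-latest.list'
        curl -s https://raw.githubusercontent.com/ros/rosdistro/master/ros.asc | sudo apt-key add -
        sudo apt update
        sudo apt install ros-noetic-desktop
        # make the environment available in every shell
        echo "source /opt/ros/noetic/setup.bash" >> ~/.bashrc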


  • From my perspective, if a machine is used for work, automatic security updates should be mandatory. Linux is damn impressive with live patching (a sketch below). With thousands or even tens of thousands of endpoints, it’s negligent not to patch.

    Features? Don’t care. But security updates are essential in a large organisation.

    The worst part of the Linux fan base is the users who hate forced updates and also don’t believe in AV. OK, on your home network that’s not very risky, but compare it to a corp network holding a million students’ and staff members’ personal records, with BYO devices only a network segment away and APT groups targeting you because they know your reputation is worth something to ransom.
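    For example, on Ubuntu, kernel live patching is a couple of commands once you have an Ubuntu Pro token (a sketch; the token is a placeholder):

        # attach the machine to Ubuntu Pro and enable live patching
        sudo pro attach <YOUR_TOKEN>
        sudo pro enable livepatch
        # verify patches are being applied without reboots
        canonical-livepatch status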