Joke's on you, in most cases I deploy in 2-3 seconds.
Comes down to personal preference, really. Personally I have been running TrueNAS since the FreeBSD days, and it's always been on bare metal. There's no reason you couldn't virtualize it, though, and I have seen it done.
I do run pfSense virtualized on my Proxmox machine. It runs great once I figured out all the hardware passthrough settings. I do the same with GPU passthrough for a retro gaming VM on the same Proxmox host.
The only thing I don't like is that when you reboot the Proxmox machine, the PCI devices don't retain their mapping IDs. So a PCIe NIC I have in the machine causes the pfSense VM not to start.
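In case it helps anyone hitting the same thing, here's a rough sketch of how you could check whether a card's PCI address moved after a reboot. This is just plain Linux sysfs, nothing Proxmox-specific, and purely illustrative:

```
# Rough sketch: list PCI addresses with their vendor/device IDs so you can
# compare what the host reports against the address used in the VM's
# passthrough mapping. Linux-only; reads standard sysfs paths.
from pathlib import Path

for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
    vendor = (dev / "vendor").read_text().strip()   # e.g. 0x8086
    device = (dev / "device").read_text().strip()
    print(f"{dev.name}  vendor={vendor}  device={device}")
```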
The one thing to take into account with Unraid vs TrueNAS is the difference in how they do RAID. Unraid allows drives of different sizes in its array, but it does not provide the same redundancy as TrueNAS. TrueNAS expects the disks inside a vdev to be the same size, but you can have multiple vdevs in one large pool: one vdev can be 5 drives of 10TB and another vdev can be 5 drives of 2TB. You can always swap any drive in TrueNAS with a larger one, but it will only count for as much as the smallest disk in the vdev.
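To make the sizing rule concrete, here's a rough back-of-the-envelope sketch. It assumes RAIDZ2-style vdevs (two disks of parity each), and the sizes are made up for illustration; real usable space will be a bit lower after ZFS overhead:

```
# "Smallest disk in the vdev wins": every disk only counts for as much as the
# smallest one. Assumes RAIDZ2 (two parity disks); numbers are illustrative.
def vdev_usable_tb(disk_sizes_tb, parity_disks=2):
    effective = min(disk_sizes_tb)
    return (len(disk_sizes_tb) - parity_disks) * effective

print(vdev_usable_tb([2, 2, 2, 2, 2]))       # 6  -> five 2TB disks
print(vdev_usable_tb([10, 2, 2, 2, 2]))      # 6  -> swapping in one 10TB gains nothing yet
print(vdev_usable_tb([10, 10, 10, 10, 10]))  # 30 -> capacity grows once all are upgraded
```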
Intel Core i5-750 @ 2.67GHz with 16GB RAM and 165TB of storage. The motherboard is an ASUS Deluxe that's 10+ years old, plus a 10Gb NIC, all inside a Fractal Design XL case.
The hardware is by no means top of the line, but you don't need much for a NAS.
I personally run TrueNAS on a standalone system to act as my network-wide NAS. It's up nearly 24/7 and only goes offline when I need to pull a dead drive.
Unraid is my go-to right now for self-hosting, as its learning curve for Docker containers is fairly gentle. I find I reboot that system from time to time, so it's not something I'd use as a daily NAS.
Proxmox I also run on a standalone system. This is my go-to for VM instances; it's really easy to spin up any OS I need for any purpose. I run things like Home Assistant on this machine, and its uptime is 24/7.
Each operating system has its advantages, and all three could potentially do the same things, though I do find a compartmentalized approach prevents long periods of downtime if one system goes offline.
No worries. VMware or some of the other virtualization software should work in this case, as most other comments pointed out. Probably the simplest and most straightforward option.
If you have the urge to tinker, another route you can look at is a Proxmox machine running on standalone hardware. You can run multiple VMs on it in tandem.
You would then be able to remote desktop into any virtualized OS on your home network. You can use software like Parsec, which I like, to access each machine from a clean interface.
I run a Hackintosh dual-booting macOS and Windows, so your solution is not as insane as some have made it out to be.
What I would suggest is maybe running a NAS on your local network to act as your share. Obviously this won't help if you don't store your working files on the NAS, but it's an idea. I know of no way to share directly between the two OSes, since they are technically never running at the same time.
Secret to the US public?
I have tested both linkding and Linkwarden. Linkding was easy to use and did the basics of bookmark management, though I settled on Linkwarden for its ability to save webpages in different formats, with folder and subfolder organisation in the UI.
Both are good options, but Linkwarden seems to be more power-user focused.
Intelligent Speed Assistance is great. I went to Spain a few years back, and the car essentially knew the limit of each road and gave a little signal/sound each time you went over. Great feature tbh; it took about a day to get used to at first, but after that it was smooth sailing.
I wish cars would get speed limiters installed. Trucks and trailers especially, why does a truck try and overtake a car anyway? Or another truck?
Why stop only at e-bikes? Get them installed inside mobility scooters as well, slow down Grandma! /s
I would find this interesting and useful as well, especially as one of the things holding me back from ditching Chrome altogether is all my bookmarks.
Would love to somehow import them all into Linkwarden to have a centralized bookmark location.
Seems like the N100 is your option if you are only choosing between these two. Personally I am in the same boat as others here, where desktop hardware is my preference at the moment, especially if I can find combo deals on a motherboard/CPU.
My recommendation, though, is to consider a board with a PCIe slot for a potential LSI HBA card; stay away from other SATA expansion cards unless you don't value your data.
If you ever do pick up an LSI HBA card with support for 8/12/24 drives, I would also suggest plugging the whole pool into that card rather than mixing and matching between onboard SATA connections and the card.
A boot drive can still connect to a SATA port on the board, as it's not part of the pool.
I'm running my NAS on a 12-year-old motherboard with 16GB of RAM, the max the board supports, though I wish I could bump that up now after running this system for 9 years.
I would recommend a board with at least one PCIe slot so that if you ever need more drives you can plug them all into an HBA card. My board has 3 slots and I use 2 of them at the moment: one for an HBA card that supports 24 drives and another for a 10Gb NIC.
The third I would probably use for another HBA card if I expand the number of drives.
I've got the same setup with eight 18TB Exos drives running in RAIDZ2 with an extra spare. Added to this, though, I've got another vdev of eight 12TB WD Reds with another spare.
With this I can have 2 drives fail in a vdev at any point and still rebuild the pool, though if more than 2 drives in the same vdev fail at the same time, the whole pool is gone.
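Rough math on what that layout yields, assuming RAIDZ2 (two parity disks per vdev), with spares excluded and ZFS overhead ignored:

```
# Back-of-the-envelope only: RAIDZ2 costs two disks of parity per vdev, hot
# spares add no capacity, and real usable space will be a bit lower.
exos_vdev = (8 - 2) * 18   # 108 TB usable from the 18TB Exos vdev
reds_vdev = (8 - 2) * 12   #  72 TB usable from the 12TB Reds vdev
print(exos_vdev + reds_vdev, "TB usable before overhead")   # ~180 TB
```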
But if that happens, I have a second NAS offsite at my bro's place that I back up specific datasets to. It's connected over Tailscale with a ZFS replication task.
You have an excellent point; it does seem like Tailscale would have a larger attack surface.
I wonder if credentials are hashed in some way on Tailscale's servers, so that even if an attacker gained access to those servers, the data would essentially be useless to them.
My setup consists of the following:
Unraid: most services I self-host run in Docker here, things like Plex/Jellyfin, Nextcloud, and the UniFi controller.
Proxmox: used to virtualize my pfSense after I moved away from my UniFi USG router. A few headless Linux/Debian virtual machines run here as well. I had Pi-hole virtualized here too, but switched over to pfBlockerNG to consolidate.
TrueNAS: all my media shares. I also sync my desktop environments here to keep a consistent Windows desktop across my desktops and laptops.
Home Assistant: running on a Home Assistant Yellow, with a few add-on services.
Tailscale would be the most “secure” as you have no ports open and only you can access it. Keep in mind your services will only be accessible to you, and only as long as your devices are connected to your Tailscale network. Sharing access with others is possible but takes some extra explanation.
WireGuard is another option, just as secure as the first. It needs one port open, but that port only responds if you connect with the proper keys/authentication. Like Tailscale, you can only access your services while connected to your WireGuard tunnel.
Reverse proxy: whichever one you choose will work; it comes down to your preference of layout and user interface. Nginx Proxy Manager, HAProxy, and Traefik each accomplish the same thing with different levels of setup (I listed them in order of ease of use). If you use pfSense as your router, HAProxy is easy to install and there are plenty of guides on setting it up. For Nginx Proxy Manager you can also find a bunch of setup videos where it's running on Home Assistant.
With a reverse proxy you open port 443 and, in your firewall rules, point it at the reverse proxy. The proxy then directs traffic to whichever of your services matches the hostname. You will need a domain name so you can reach service1.mydomain.com or service2.mydomain.com from anywhere on the web.
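If it helps to see the idea in miniature, here's a toy sketch of host-based routing, which is essentially what a reverse proxy does. The hostnames, ports, and backend addresses are made-up examples; a real setup would use Nginx Proxy Manager/HAProxy/Traefik terminating TLS on port 443 rather than anything hand-rolled:

```
# Toy host-based router illustrating the reverse proxy idea. Hostnames,
# ports, and backend addresses are hypothetical; no TLS, GET only.
from http.server import BaseHTTPRequestHandler, HTTPServer
import urllib.request

BACKENDS = {
    "service1.mydomain.com": "http://192.168.1.10:8096",  # hypothetical internal service
    "service2.mydomain.com": "http://192.168.1.11:8080",  # hypothetical internal service
}

class ProxyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        host = self.headers.get("Host", "").split(":")[0]
        backend = BACKENDS.get(host)
        if backend is None:
            self.send_error(404, "Unknown service")
            return
        # Forward the request path to the matching internal service and relay the reply.
        with urllib.request.urlopen(backend + self.path) as resp:
            body = resp.read()
        self.send_response(resp.status)
        self.send_header("Content-Type", resp.headers.get("Content-Type", "text/plain"))
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), ProxyHandler).serve_forever()
```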
With a reverse proxy and any public website, I recommend running them behind a DNS/proxy service like Cloudflare. You can do this for free, it helps protect your services against DDoS and bots/crawlers, and it obscures your homelab IP, since all incoming traffic goes through Cloudflare before being directed to your homelab.
Additional security can be implemented in your firewall by blocking all traffic not originating from your country, or even by only allowing specific IP addresses.
I use a combination of all of the above: a few services are publicly accessible, and everything else is accessed through Tailscale or WireGuard. Internally I run HAProxy on pfSense, where the public services are proxied.
I also run Nginx Proxy Manager for my local services, which lets me access things like service1.local.mydomain.com with a full SSL certificate. So once I connect to my home network over Tailscale/WireGuard, I can type these domain names into my browser. At some point I will move these into HAProxy with its own frontend for internal services.
If it were in a glass bottle, maybe I would buy it. Plastic packaging should be more expensive IMO.
Any chance Jellyfin and Finamp have a music playlist and mix building feature?
Plex has this with Plexamp, but I have not had a chance to look into Jellyfin to see if a plugin offers something similar.
I hate building playlists; Plex offers a few different options like Sonic Sage, Sonic Adventure, an artist mix builder, and automatic mixes based on past listening history.