Restic and borg are both sorta considered 'standard' for doing incremental backups beyond filesystem snapshotting.
I use restic and it automatically handles stuff like snapshotting, compression, deduplication, and encryption for you.
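A minimal restic workflow looks something like the following; this is a sketch assuming restic is installed and using a local directory as the repository (the paths and password are placeholders):

```shell
# Point restic at a repository; everything stored in it is encrypted.
export RESTIC_REPOSITORY=/mnt/backup/restic-repo
export RESTIC_PASSWORD='use-a-real-secret-here'   # placeholder

restic init                       # one-time repository setup

# Incremental, deduplicated, compressed backup of a directory.
restic backup /home/me/documents

# List snapshots, then apply a retention policy and reclaim space.
restic snapshots
restic forget --keep-daily 7 --keep-weekly 4 --prune
```

Subsequent `restic backup` runs only upload chunks it hasn't seen before, which is where the incremental/dedup behavior comes from.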
DigitalOcean and Vultr are options that "just work" and have reasonable plans in the $5-6/month category.
DO is more established, and I've used them for nearly 10 years now for a $6/mo VPS and for managing DNS for my domains. Vultr has some much closer datacenter options if you happen to be in the southeast US, whereas DO basically just covers California and NYC.
Given how common it is for people to use the 'reset password' link for this exact purpose, it does make it seem kinda redundant to even implement passwords on many services to begin with.
People recommend Backblaze B2 as a restic/rclone/borg backend because it works extremely well and is an excellent value compared to other available options, at a near-flat $6 per TB-month.
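restic speaks to B2 natively; the backend is selected by the repository URL. A sketch, with placeholder credentials and bucket name:

```shell
# B2 credentials (placeholders) -- restic's B2 backend reads these.
export B2_ACCOUNT_ID='your-key-id'
export B2_ACCOUNT_KEY='your-application-key'

# Repository format for B2: b2:<bucket>:<path-inside-bucket>
export RESTIC_REPOSITORY='b2:my-backup-bucket:restic'
export RESTIC_PASSWORD='use-a-real-secret-here'

restic init                       # one-time, creates the repo in the bucket
restic backup /home/me/documents  # incremental after the first run
```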
The reason they 'force linux users to use their b2 product' is very deliberate: it prevents exactly the kind of abuse you want to do, which is uploading 18TB of near-incompressible data for them to store for $9/month or less.
Buy a 20TB hard drive and keep it in a fireproof filebox, and maybe another to keep at a friend's house. You don't need cloud backups for media you can reacquire relatively easily; save that for the stuff you can't trivially replace.
What CPU governor are you using? I saved about 40W of idle power draw by switching to powersave from the default on a Ryzen 9 3900X.
I ran RAID-Z2 across 4x14TB drives and a (4+8)TB LVM LV for close to a year before finally swapping the (4+8)TB LV for a 5th 14TB drive via zpool replace,
without issue. I did, however, make sure to use RAID-Z2 rather than Z1 to account for said shenanigans, out of an abundance of caution, and I would highly recommend doing the same. That is to say, the extra 2x2TB would make good additional parity, but I would only consider it as additional parity, not the only parity.
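The swap itself can be done live while the pool stays online; a sketch, assuming the pool is named `tank` and using placeholder device paths:

```shell
# Replace the old LVM-backed vdev with the new 14TB drive; ZFS
# resilvers onto the new device while the pool remains in use.
zpool replace tank /dev/mapper/old-lv /dev/disk/by-id/ata-NEW14TB

# Watch resilver progress; with Z2, a second drive hiccup during
# the resilver is still survivable.
zpool status tank
```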
Based on fairly unscientific testing from before and after, it did not appear to meaningfully affect performance.
125W (less than $15/month) or so for
I generally leave
powerManagement.cpuFreqGovernor = "powersave"
in my Nix config as well, which saves about 40W ($4/mo or so) for my typical load as best as I can tell, and I disable it if I'm doing bulk data processing on a time crunch.
Realistically, the target audience is organizations: nowadays most business laptops are carried between docking stations with the occasional meeting or air travel in between, and 13" is an excellent size to meet those needs.
When hooked to a docking station, the screen size and keyboard are entirely irrelevant, and modern laptop performance is...honestly crazy good.
When in a meeting, it's probably being either used to take notes fullscreen or show a presentation, so pretty neutral.
Finally, when traveling, you really can feel the difference between a 13" and a 15" when you're running on too short of a layover between flights.
My partner and I use a git repository on our self-hosted gitea instance for household management.
Issue tracker and kanban boards for task management, wiki for documentation, and some infrastructure components are version controlled in the repo itself. You could almost certainly get away with just the issue tracker.
Home Assistant (also self-hosted) provides the ability to easily and automatically create issues based on schedules and sensor data, like creating a git issue when weather conditions tomorrow may necessitate checking this afternoon that nothing gets left out in the rain.
Matrix (also self-hosted) lets Gitea and Home Assistant bully us into remembering to do things we might have forgotten. (Send a second notification if the washer finished 15 minutes ago, but the dryer never started)
It’s been fantastic being able to create git issues for honey-dos as well as having the automations for creating issues for recurring tasks. “Hey we need to take X to the vet for Y sometime next week” “Oh yeah, can you go ahead and put in a ticket?” And vice versa.
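Under the hood, the automations boil down to a POST against Gitea's issue API; a sketch with placeholder host, repo, and token:

```shell
# Create an issue via the Gitea API. GITEA_TOKEN is a personal
# access token; host and owner/repo below are placeholders.
curl -s -X POST \
  -H "Authorization: token $GITEA_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"title":"Rain expected tomorrow","body":"Check that nothing is left out this afternoon."}' \
  "https://gitea.example.com/api/v1/repos/owner/household/issues"
```

Home Assistant can fire the same request from an automation (e.g. via its rest_command integration), which is how the weather- and sensor-triggered issues work.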
What does industry do when it needs to automate provisioning of thousands of devices for POS, retail, barcode scanning, delivery drivers, etc.?
MDM doesn't help with the kind of stuff OP is trying to automate, but it does usually cover most business use cases. If you need more than that, you generally either have a contract to get the manufacturer to do it for you, or you put what you need into the org-specific superapp you already have to have.
Oh nice, a nicely-formatted list of reasons I don't switch phones more frequently than once every 5 years: I loathe setting them up as specifically as I want them to behave.
I've read many, many discussions over the years about why manufacturers would list such a pessimistic number on their datasheets, and I haven't really come any closer to understanding why it's listed that way. You can trivially demonstrate how pessimistic it is by repeatedly running badblocks on a dozen large (20TB+) enterprise drives: nearly all of them will dutifully accept hundreds of TBs written and read back with no issues, when the URE rate suggests that workload should produce a dozen UREs on average.
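The back-of-the-envelope math here: the commonly quoted spec of 1 URE per 10^14 bits read predicts roughly one error per 12TB or so, which a single badblocks pass over a 20TB drive already exceeds. In shell arithmetic:

```shell
# URE spec commonly quoted on datasheets: 1 error per 1e14 bits read.
bits_per_ure=$((10**14))
bytes_per_ure=$((bits_per_ure / 8))       # 12500000000000 bytes
tb_per_ure=$((bytes_per_ure / 10**12))    # -> 12 TB between expected UREs

# Expected UREs for one full read pass of a 20TB drive; a destructive
# badblocks run (badblocks -wsv) does 4 write+read pattern passes,
# so multiply accordingly.
echo $((20 / tb_per_ure))                 # -> 1
```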
I conjecture, without any specific evidence, that it might be an accurate value with respect to some inherent physical property of the platters themselves that manufacturers can and do measure, and that hasn't improved considerably. That property has long been abstracted away by increased redundancy and error correction at the sector level, which yields much more reliable effective performance, but the raw quantity is still quoted for some internal historical/comparative reason rather than being replaced by the effective value that matters more directly to users.