  • I’d look at getting a used SFF (Small Form Factor) desktop for a LOT less than that Ugreen. I paid less than $50 for mine - at that price I can run a second one when I’m ready.

    I’m currently running an old Dell SFF as my server. It has Proxmox on it with 5 internal 2.5" drives, with the OS on the NVMe.

    Initially it had 4GB of RAM and ran Proxmox with ZFS just fine (and those drives were various ages and sizes).

    It idles at 18W, not much more than the 12W my Pi Zero W idled at, but it’s way more powerful and capable.


  • One drive failure means an array is degraded until resilvering finishes (unless you have a hot spare; then resilvering onto the spare starts immediately and the risky window is shorter).

    Resilvering is an intensive process that can push other drives to fail.

    I have a ZFS system that takes the better part of a day (24 hours) to resilver a 4TB drive in an 8TB five-drive array (single parity) that’s about 70% full. While it’s resilvering I have to be confident my other data stores won’t fail (I have the data locally on 2 other drives and a cloud backup).
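
    A minimal sketch of watching for that risky window from a scheduled job, assuming a pool named "tank" (the pool name and the alerting are just placeholders):

    ```python
    # Check whether a ZFS pool is still resilvering (pool name is an example).
    import subprocess

    def resilver_in_progress(pool: str = "tank") -> bool:
        # "zpool status" prints a scan line like "resilver in progress since ..."
        out = subprocess.run(
            ["zpool", "status", pool], capture_output=True, text=True, check=True
        ).stdout
        return "resilver in progress" in out

    if resilver_in_progress():
        print("WARNING: pool is resilvering - verify the other copies are healthy")
    ```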


  • “Two in RAID” only counts as 2 copies when the arrays are on different systems and the replication isn’t instant. Otherwise it only protects against hardware failures and not against you fucking up (ask me how I know…).

    If the arrays are on 2 separate systems in the same place, they’ll protect against independent hardware failures without a common cause (a drive dies, etc), but not against common threats like fire or electrical spikes.

    Also, how long does it take to return one of those systems to fully functioning with all the data of the other? This is a risk all of us seem to overlook at times.




  • Onomatopoeia@lemmy.cafe to Selfhosted@lemmy.world · Backups of Backups

    The only concern I see here is the external drive. My experience has been that powered-off drives fail more often than constantly-on drives. So my external drives are always powered on, and I just run a replication script to them on a schedule.
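
    A minimal sketch of that kind of scheduled replication (the paths and mount point are made up for illustration; cron or a systemd timer would run it nightly):

    ```python
    # Replicate a directory to an always-powered external drive with rsync.
    import subprocess
    import sys

    SOURCE = "/srv/data/"          # trailing slash: copy contents, not the dir itself
    DEST = "/mnt/external/data/"   # assumed mount point of the external drive

    result = subprocess.run(["rsync", "-a", "--delete", SOURCE, DEST])
    sys.exit(result.returncode)    # nonzero exit lets cron mail the failure
    ```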

    But you do have good coverage, so that’s a small risk.


  • For stuff like movies I simply use replication as my backup.

    Since I share media with friends/family, I act as the central repository and replicate to them on a schedule (Mom on Monday, Friend 1 on Tuesday, etc), so I have a few days to catch an error. It’s not perfect, but I check those replication logs weekly (a sketch of the rotation is below).

    I also have 2 local replicas of media, so I’m pretty safe.
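
    A sketch of that rotation (the hosts, paths, and log location are invented for illustration):

    ```python
    # Replicate media to a different remote each weekday and log the result.
    import datetime
    import subprocess

    SCHEDULE = {
        0: "mom-nas:/backups/media/",      # Monday
        1: "friend1-nas:/backups/media/",  # Tuesday
        # ... one destination per weekday
    }

    def replicate(source: str = "/srv/media/") -> None:
        dest = SCHEDULE.get(datetime.date.today().weekday())
        if dest is None:
            return  # nothing scheduled today
        result = subprocess.run(["rsync", "-a", source, dest])
        with open("/var/log/media-replication.log", "a") as log:
            log.write(f"{datetime.date.today()} {dest} rc={result.returncode}\n")

    replicate()
    ```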


  • You’re missing the point - he’s elevating the CLI above all else, which you don’t have on TV or mobile.

    Yes, I know there are media clients, I’ve used them all. And that screenshot is hideous - compare it to Jellyfin on mobile, which looks just like Netflix used to.

    Besides, he’s not doing anything different from running a “server stack” (which isn’t accurate; he’s still running a server, the device hosting the media services, even if they’re native to the OS).

    Xerox PARC didn’t invest millions in the ’60s and ’70s because CLI was so great.

    We don’t use a CLI on our microwaves, toaster ovens, TVs, clocks, lights, etc., for a reason.


  • So, let me get this straight - you’re saying to use the command line to play video instead of a GUI?

    Tell me, how does one do this on a TV? On an iPad? Phone?

    Your excitement for the command line betrays an experience of nothing but GUIs, so using the command line is something of a novelty to you.

    Dude, get ahold of yourself. I probably wrote more command line stuff before you were born than you’ve ever thought of - I’m not going backwards.

    (As a clue, I wrote my first Fortran program before PCs were even a thought at IBM.)

    Fuck CLI except for managing systems. Even then, quite often a GUI is faster by orders of magnitude, mostly to kick off scripts to do what I need. GUI was a godsend, and Xerox PARC’s efforts created a common GUI language for us, which thankfully was embraced. I refuse to go backwards.

    And forcibly teach non-technical people to use CLI?

    You are exactly the type of person that Saturday Night Live lampooned decades ago.


  • Exactly, keeping components separated, especially the router.

    Hardware routers “cost money because they save money” (Sorry, couldn’t resist that movie quote). A purpose-built router will just run and run. I have 20 year old consumer routers that still “just work”. Granted, they don’t have much in the way of capability, but they do provide a stable gateway.

    I then use two separate mesh network tools, on multiple systems. The likelihood of both of those failing simultaneously is low. But I still have a single failure point in the router, which I accept - I’ve only had a couple outright fail over 25 years, so I figure it’s a low risk.


  • Separate devices provide reliability and supportability.

    If your all-in-one device has issues, you can’t remote in to maintain it.

    Take a look at what enterprises do: redundant external interfaces, redundant services internally. You don’t necessarily need all this, but it’s worth considering “how do I ensure uptime and enable supportability and reliability?”

    Also, we always ask “what happens if the lone SME (Subject Matter Expert) is hit by a bus?” (You are that Lone SME).



  • Yea, it’s the end of the world with Signal.

    Having such a dependency just exposes yet another way their story doesn’t add up, like dropping SMS support because of engineering costs. Apparently, SMS is so hard to do that there are free SMS apps.

    I can’t trust them at this point.

    And how does E2E require a middleman?

    More like it’s their store-and-forward servers. Why that’s on AWS, or more importantly not distributed with automatic failover, is a major failure, as in a “get fired” failure.




  • Onomatopoeia@lemmy.cafe to Selfhosted@lemmy.world · DNS server

    Ah, unbound has the root DNS servers hard-coded. That’s a significant point.

    Any reason you couldn’t do the same with any other DNS server such as PiHole?

    I’m really trying to understand why I’d run two DNS servers in serial, instead of one. All this sounds like it’s just a different config that (in the case of unbound) has been built in - is there something else I’m missing that unbound does differently?

    Why couldn’t you just configure the root/TLD servers as your upstream DNS in whatever local DNS server? Isn’t that what enterprises do?
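
    For what it’s worth, “having the root servers hard-coded” means unbound resolves iteratively itself rather than forwarding upstream. A toy illustration of that loop using the dnspython package (no caching, retries, CNAME handling, or DNSSEC, and it assumes every referral comes with glue records):

    ```python
    # Follow referrals from a hard-coded root server down to an answer,
    # roughly what a recursive resolver like unbound does internally.
    import dns.message
    import dns.query
    import dns.rdatatype

    ROOT_SERVER = "198.41.0.4"  # a.root-servers.net

    def iterative_resolve(name: str) -> str:
        server = ROOT_SERVER
        while True:
            query = dns.message.make_query(name, dns.rdatatype.A)
            response = dns.query.udp(query, server, timeout=3)
            if response.answer:
                return response.answer[0][0].to_text()  # the A record
            # No answer yet: follow the referral via a glue A record
            glue = [rr for rrset in response.additional
                    for rr in rrset if rr.rdtype == dns.rdatatype.A]
            server = glue[0].to_text()

    print(iterative_resolve("example.com"))
    ```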





  • Onomatopoeia@lemmy.cafe to Selfhosted@lemmy.world · DNS server

    Cool, thanks for the clarification. This is good info to have in here in general.

    So unbound by default discovers other DNS servers, if I’m understanding that correctly. I’ve never used it; does it not use your ISP’s DNS by default, or does that depend on user config?

    What if your PiHole is configured to use other than your ISP’s DNS?