  • Same here regarding the *arrs handling the data movement/layout and nfo files. I even have the “Connect” sections for each app set up to trigger rescans, but it seems that, especially for files that get replaced by a more optimal version, a duplicate is left over in Kodi alongside the new one, which only goes away when you try to play it. I tried switching to a dedicated MySQL instance for shits and giggles; no effect. Some day I’ll actually dig into the logs.
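
    For anyone wanting to poke at this outside the *arr Connect hooks, a rescan and a clean can be fired at Kodi’s JSON-RPC API directly. A minimal sketch, assuming the web interface is enabled, with a hypothetical host and credentials; VideoLibrary.Clean is the call that drops library entries whose files no longer exist:

    ```sh
    # Hypothetical Kodi host and credentials, adjust to your setup
    KODI="http://user:pass@kodi-host:8080/jsonrpc"

    # Pick up new/changed files
    curl -s -X POST -H 'Content-Type: application/json' \
      -d '{"jsonrpc":"2.0","method":"VideoLibrary.Scan","id":1}' "$KODI"

    # Drop entries whose files are gone, e.g. the stale duplicate
    # left behind after an upgrade replaces a file
    curl -s -X POST -H 'Content-Type: application/json' \
      -d '{"jsonrpc":"2.0","method":"VideoLibrary.Clean","id":2}' "$KODI"
    ```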


  • Yeah, that was the tough pill to swallow: moving away from folder-based navigation (the old *sonic gang) to tag-based Navidrome. Not for everyone, but getting your tags in order opens up some nice doors.

    They publish a container image as part of their releases, and you can manage everything with environment variables. If you’re used to running containers, I’d say this is even easier for testing and playing around.
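
    A minimal sketch of what that looks like, where the paths and schedule are placeholders rather than recommendations:

    ```sh
    # Navidrome's published image; /data holds its database,
    # /music is the (read-only) library. Host paths are hypothetical.
    docker run -d --name navidrome \
      -p 4533:4533 \
      -v /srv/navidrome/data:/data \
      -v /srv/music:/music:ro \
      -e ND_SCANSCHEDULE=1h \
      -e ND_LOGLEVEL=info \
      deluan/navidrome:latest
    ```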




  • Maybe you’ve tried it already, but Navidrome is a great purpose-built music streamer. I was using Subsonic back in the day, then Airsonic, then Airsonic Advanced. When I first got on Navidrome it was a tough pill to swallow since I had never maintained my tags, but I gave a little time here and there to comb through them, and in the end it feels like a worthwhile investment. It paid off a bit more when I adopted Lyrion Music Server and squeeze players for local playback around the home, since it organizes by (mostly) the same tags, so the whole library is pretty much plug and play with anything that honors them.
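
    If you do go down the tag route, it’s worth auditing the library first for the tags these servers group on (albumartist especially). A rough sketch using ffprobe from ffmpeg, with a hypothetical library path:

    ```sh
    # Flag FLACs that have no album_artist tag, a common cause of
    # split or duplicated albums in tag-based servers
    find /srv/music -name '*.flac' -print0 | while IFS= read -r -d '' f; do
      aa=$(ffprobe -v quiet -show_entries format_tags=album_artist \
            -of default=noprint_wrappers=1:nokey=1 "$f")
      [ -n "$aa" ] || echo "missing albumartist: $f"
    done
    ```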


  • It’s not going to randomly disappear your data, but I don’t particularly trust it either; as with anything, keep to a backup strategy. As far as efficiency goes, if you bear in mind that it is still a VM, just with most of the configuration hidden away for a simpler experience, I would say it is more convenient than a VM under VirtualBox or VMware Player, especially if you have no need for a full Linux desktop environment.





  • Yeah, I don’t think anyone sane would disagree. That’s what forced the decision for me: to expose or not. I was not going to try talking anyone through VPN setup, so it was exposure plus whatever hardening practices could be applied. I wouldn’t really advocate for this route, but I like hearing from others doing it, because sometimes a useful bit of info or shared experience pops up. The folder path explanation is news to me; time to obfuscate the hell out of that.



  • My automated workflow is to package backup sources into tars (uncompressed), encrypt them with gpg, then ship the tar.gpg off to Backblaze B2 and S3 with rclone. I don’t trust cloud providers, so I use two just in case. I’ve not really had the need to send full system backups off site, just the things I’d be severely hurting for if my home exploded.

    But to your main questions, I like gpg because you have good options for encrypting things safely within bash/ash/sh scripting, and the encryption itself is considered strong.

    And I really like rclone because it covers the main cloud providers and wrangles everything down to an rsync-like experience, which is also pretty tidy for shell scripting; a rough sketch of the whole pipeline is below.
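
    Everything in this sketch is a placeholder: the source path, the gpg recipient, and the rclone remotes ("b2" and "s3" would already be defined via rclone config):

    ```sh
    #!/bin/sh
    set -eu

    STAMP=$(date +%F)
    SRC=/srv/important-stuff            # hypothetical backup source
    OUT=/tmp/backup-$STAMP.tar.gpg

    # Uncompressed tar piped straight into gpg, so no plaintext
    # archive ever touches disk
    tar -cf - -C "$(dirname "$SRC")" "$(basename "$SRC")" \
      | gpg --batch --yes --encrypt --recipient backup@example.org -o "$OUT"

    # Ship the same artifact to two independent providers
    rclone copy "$OUT" b2:my-backup-bucket/
    rclone copy "$OUT" s3:my-backup-bucket/
    rm -f "$OUT"
    ```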