Personally I would have some sort of notice regarding these on affected projects, but I don't think it's enough to warrant slapping an anti-feature flag on them just because of the author's choice of code repository hosting provider or CDN.
I'm not sure how using a VPN would help in this situation if you are concerned about having your YouTube account banned. Would you be using that VPN while signed out and with cookies/site data cleared?
It really depends on the subreddits you use. I was on reddit for almost 10 years, and while I saw others complaining about power-trippers, I never experienced it myself, and that's after having used several of the bigger subreddits.
As fun as it is to dunk on reddit and its moderation, this is definitely exaggerated lol
Even after dropping the Metaverse label they were using, Roblox still wants to be Second Life with dedicated games attached.
For mp3, sure, but by Opus standards 160kbps is great. I read that 128kbps is generally considered the most you need, but 160kbps smooths over any artifacts, assuming the source file doesn't have them.
Steamworks Development post regarding this: https://steamcommunity.com/groups/steamworks/announcements/detail/3684558162504860651
You can download audio from YouTube as 160kbps Opus files, which aren't lossless, sure, but that's the highest quality you can get from YouTube if alternative means aren't an option.
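A minimal sketch of how you might do that with yt-dlp (assuming it's installed; format ID 251 is YouTube's ~160kbps Opus audio stream, though availability varies per video, and `VIDEO_ID` is a placeholder):

```shell
# Grab the native Opus audio stream without re-encoding.
# Format 251 is the ~160kbps Opus stream; fall back to the best
# available audio-only format if 251 isn't offered for this video.
yt-dlp -f "251/bestaudio" -o "%(title)s.%(ext)s" "https://www.youtube.com/watch?v=VIDEO_ID"
```

Avoiding `--audio-format mp3` here matters: transcoding a lossy Opus stream to another lossy codec only degrades it further.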
While you were using the subreddits you were subscribed to, the general default subreddits were always seeing activity like this.
But over time reddit has been attracting a far more general audience of regular people from other social media platforms.
I get that searching can be a bit finicky sometimes, but doesn't typing in the full username of the user you want to search for usually do the job?
That part about shutting down is something that https://joinmastodon.org/covenant tries to help with: advance notice should be given, and multiple people should have access to administrative actions. At least if the server has to shut down, the users are given enough time to look for another server.
I tried finding information on what indexer they are using. Are they using their own?
Edit: the readme says this:

> The commoncrawl organization for crawling the web and making the dataset readily available. Even though we have our own crawler now, commoncrawl has been a huge help in the early stages of development.