[-] TheHobbyist@lemmy.zip 83 points 3 days ago

I'm with you all the way, really, except that KDE Plasma and dark mode are obviously the superior choices :)

[-] TheHobbyist@lemmy.zip 25 points 1 week ago

And the Netherlands are 6th! But the hardest part will be reaching that one-million threshold... We still have a lot of time, but the pace has certainly slowed down over the last few weeks compared to the skyrocketing of the early days. I think we will need to spread more awareness about the campaign, and perhaps try to reach mainstream media in some way...

[-] TheHobbyist@lemmy.zip 8 points 1 week ago

Infomaniak is the largest Swiss cloud provider. They offer multiple services, including domain-related ones (purchase and management), cloud computing and more, and they have a good reputation. They also hold a Swiss cloud certification, meaning they are able to host data in Switzerland and manage it from Switzerland. If you trust Switzerland for privacy, I think by extension you can trust them.

[-] TheHobbyist@lemmy.zip 52 points 1 week ago

We had captchas to solve that a while ago. Turns out, some people are willing to work for a miserable wage solving captchas on behalf of bots. How would this be any different? Being human becomes a monetizable service that can simply be rented out to automated systems. No "personhood" check can prevent this.

[-] TheHobbyist@lemmy.zip 13 points 1 week ago* (last edited 1 week ago)

The age of DRM means that they can now "unlaunch" the game and force you to take a refund while giving up the game. Why? What if someone liked it and wanted to keep playing? Is this an online-only game? This is just sad.

edit: this is a good time to remind people: if you live in the EU, please support the "Stop Killing Games" initiative. It has just passed a third of the required signatures and still has 10 months to go:

https://eci.ec.europa.eu/045/public/#/screen/home

[-] TheHobbyist@lemmy.zip 5 points 1 week ago

In the deep learning community, I know of someone using Parquet for the dataset and annotations. It lets you select which columns you want to retrieve from the dataset and stream only those, nothing else. It is a rather effective approach if you have many different annotations for different use cases and want to pull in only the ones your application needs.
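
A minimal sketch of what that column-selective read can look like with pyarrow; the file name and column names here are made-up placeholders, not from any particular dataset:

import pyarrow.parquet as pq

# Read only the columns needed for this use case; the other annotation
# columns stored in the file are never loaded from disk.
# "annotations.parquet", "image_path" and "bbox" are hypothetical names.
table = pq.read_table("annotations.parquet", columns=["image_path", "bbox"])

# Iterate in record batches to stream through the data instead of
# materializing everything at once.
for batch in table.to_batches(max_chunksize=1024):
    for row in batch.to_pylist():
        print(row["image_path"], row["bbox"])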

[-] TheHobbyist@lemmy.zip 3 points 3 weeks ago

Loved this game and the story. The second half of the game is where it really becomes interesting! I remember that mission in the screenshot :)

[-] TheHobbyist@lemmy.zip 6 points 1 month ago

Exactly. The Carter Center already observed the previous election and could not determine Maduro to be the winner. The election already happened, the people already voted. Why would a new election be expected to yield any result other than what already happened?

[-] TheHobbyist@lemmy.zip 1 points 1 month ago

Thanks for the follow-up. I wish I could afford multiple TB of NVMe drives, but that is unfortunately out of my budget; it would definitely be better for latency, noise and power draw. This time I will have to stick with HDDs, but I'll keep looking :) Enjoy your setup!

[-] TheHobbyist@lemmy.zip 1 points 1 month ago

Thanks! I love that case, it's what I use for my main server. Here, though, I was interested in a prebuilt, which may be easier to find, comes with all the main components included, and is thus possibly cheaper. I updated my main post, as I understand it may not have been obvious.

[-] TheHobbyist@lemmy.zip 1 points 1 month ago* (last edited 1 month ago)

Thank you for taking the time to answer. Indeed, the title is a simplification, but I was hoping that the body of the post would make clear that it does not have to be a literal SFF build, just something on the smaller side.

6
submitted 1 month ago* (last edited 1 month ago) by TheHobbyist@lemmy.zip to c/selfhost@lemmy.ml

Hi folks, I'm considering setting up an offsite backup server and am seeking recommendations for a smallish form factor PC. Mainly, are there some suitable, popular second-hand PCs that meet the following requirements:

  • Fits 4x 3.5" HDDs
  • Smaller than a regular tower (e.g. mATX or ITX)
  • Equipped with at least a 6th or 7th gen Intel CPU (for power efficiency, and in case I want it to actually do some transcoding), with video output
  • Ideally with upgradeable RAM

Do you know of something which meets those specs and is rather common on the second hand market?

Thanks!

Edit: I'm looking for a prebuilt system, such as a Dell OptiPlex or similar.

[-] TheHobbyist@lemmy.zip 59 points 1 month ago

This is going very well, it seems! The next few countries close to passing their thresholds are:

  • Denmark (88%)
  • Netherlands (87%)
  • Germany (75%)

Assuming we get those, we would need one more country; the highest remaining one is Ireland (55%). Even getting all of those still wouldn't reach 1M signatures, but the rest can keep coming from across the EU (including, I assume, countries which have already passed their threshold).

This is all very exciting and gives me a lot of hope! Keep signing folks!

42
submitted 2 months ago* (last edited 2 months ago) by TheHobbyist@lemmy.zip to c/foss@beehaw.org

Yesterday, there was a livestream scheduled by Louis Grossman, titled "Addressing futo license drama! Let's see if I get fired...". I was unable to watch it live, and now the stream seems to be gone from YouTube.

Did it air and was later removed? Or did it never happen in the first place?

Here's the link to where it was meant to happen: https://www.youtube.com/watch?v=HTBYMobWQzk

Cheers

Edit: a new video was recently posted at the following link: https://www.youtube.com/watch?v=lCjy2CHP7zU

I do not know if this was the supposedly edited and reuploaded video or if this is unrelated.

8
submitted 5 months ago by TheHobbyist@lemmy.zip to c/homelab@lemmy.ml

Hi folks,

I seem to be having some internet connectivity issues lately and would like to monitor my access to the internet. I have a homelab and was wondering whether someone has something like a Docker container that pings a custom website every so often and plots a timeline of when the connection was successful and when it was not.

Or perhaps you have another suggestion? I know of dashboards like Grafana, but I don't know whether they can be configured to actually generate that data themselves or whether they rely on a third party to feed them. Thanks!
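
To illustrate what I mean, something along these lines, a minimal Python sketch where the URL, interval and output file are arbitrary placeholders, would produce a log that a dashboard could then plot:

import csv
import time
import urllib.request
from datetime import datetime, timezone

# Hypothetical settings: which site to poll, how often, and where to log.
TARGET_URL = "https://example.com"
INTERVAL_SECONDS = 60
LOG_FILE = "connectivity_log.csv"

def check_connection(url: str, timeout: int = 10) -> bool:
    """Return True if the URL answers within the timeout."""
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return True
    except Exception:
        return False

while True:
    timestamp = datetime.now(timezone.utc).isoformat()
    is_up = check_connection(TARGET_URL)
    # Append one row per check: timestamp, 1 for up, 0 for down.
    with open(LOG_FILE, "a", newline="") as f:
        csv.writer(f).writerow([timestamp, int(is_up)])
    time.sleep(INTERVAL_SECONDS)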

8

Hi folks, I'm looking for a specific YouTube video which I watched around 5 months ago.

The gist of the video is that it compared the transcoding performance of an Intel iGPU used natively versus passed through to a VM. From what I recall there was a significant performance hit, around 50% or so (in terms of transcoding fps). I believe the test was performed with Jellyfin. I don't remember whether it was using XCP-ng, Proxmox or another OS, nor which channel published the video or when; just that I watched it sometime between April and June this year.

Anyone recall or know what video I'm talking about? Possible keywords include: quicksync, passthrough, sriov, iommu, transcoding, iGPU, encoding.

Thank you in advance!

18
submitted 1 year ago* (last edited 1 year ago) by TheHobbyist@lemmy.zip to c/selfhosted@lemmy.world

Hi y'all,

I am exploring TrueNAS and configuring some ZFS datasets. As ZFS provides some parameters to fine-tune a dataset to the type of data it stores, I was thinking it would be good to take advantage of them. So here I am with the seemingly simple task of choosing the appropriate "recordsize".

Initially I thought: well, this is simple. The dataset is meant to store videos, movies and TV shows for a Jellyfin Docker container, so mostly large files, and a recordsize of 1M sounds like a good idea (as suggested in Jim Salter's cheatsheet).

Out of curiosity, I ran Wendell's magic command from Level1Techs to get a sense of the file size distribution:

find . -type f -print0 | xargs -0 ls -l | awk '{ n=int(log($5)/log(2)); if (n<10) { n=10; } size[n]++ } END { for (i in size) printf("%d %d\n", 2^i, size[i]) }' | sort -n | awk 'function human(x) { x[1]/=1024; if (x[1]>=1024) { x[2]++; human(x) } } { a[1]=$1; a[2]=0; human(a); printf("%3d%s: %6d\n", a[1],substr("kMGTEPYZ",a[2]+1,1),$2) }'

That's when I discovered it was not so simple. The directory is obviously filled with videos, but also with many tiny files: subtitles, NFOs, and small illustration images, all valuable for Jellyfin's media organization.

That's where I'm at. The way I see it, there are several options:

    1. Let's not overcomplicate it: just run with the default ZFS recordsize of 128K and roll with it. It won't be such a big deal.
    2. Let's try to be clever about it: make two datasets, one with a recordsize of 4K for the small files and one with a recordsize of 1M for the videos, then pick one as the "main" dataset and symlink every file from the other dataset into it, so that all content is "visible" from within one file structure (see the sketch after this list). I haven't dug much into how I would automate it, and it might not play nicely with the *arr suite? Perhaps overly complicated...
    3. Make all video files MKVs, embed the subtitles, and rename the videos to make NFOs as unnecessary as possible for movies and TV shows (though they will still be useful for private videos, YT downloads, etc.).
    4. Other?
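
For what it's worth, here is roughly how I imagine automating option 2, as a rough Python sketch; the dataset mount points and the 1 MiB cutoff are placeholder assumptions, not something I've tested:

import shutil
from pathlib import Path

# Hypothetical mount points for the two datasets (placeholders, adjust to taste).
MAIN_DATASET = Path("/mnt/tank/media-1M")   # recordsize=1M, the "visible" tree
SMALL_DATASET = Path("/mnt/tank/media-4K")  # recordsize=4K, where small files end up
SIZE_CUTOFF = 1024 * 1024                   # files under 1 MiB get moved

# Snapshot the file list first so we don't iterate over our own changes.
for path in list(MAIN_DATASET.rglob("*")):
    if path.is_symlink() or not path.is_file():
        continue
    if path.stat().st_size >= SIZE_CUTOFF:
        continue
    # Mirror the relative layout on the small-recordsize dataset.
    target = SMALL_DATASET / path.relative_to(MAIN_DATASET)
    target.parent.mkdir(parents=True, exist_ok=True)
    # The datasets are separate filesystems, so this is a copy + delete, not a rename.
    shutil.move(str(path), str(target))
    # Leave a symlink behind so everything stays visible from the main tree.
    path.symlink_to(target)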

So what do you think? And how have you personally set it up? I would love to get some feedback, especially if you are also using ZFS and have a video library in a dedicated dataset. Thanks!

Edit: Alright, so I found the following post by Jim Salter which goes into more detail about recordsize. It cleared up my misconception: recordsize is not a fixed block size, it is just the maximum size of the chunks of data being read and written, and it can easily be changed at any time. So I'll stick with a 1M recordsize and leave it at that despite having many smaller files, because what matters is streaming the large files efficiently. Thank you all!


TheHobbyist

joined 1 year ago