51
29
submitted 1 week ago* (last edited 1 week ago) by mike_wooskey@lemmy.thewooskeys.com to c/selfhosted@lemmy.world

I host a website that uses mTLS for authentication. I created a client cert and installed it in Firefox on Linux, and when I visit the site for the first time, Firefox asks me to choose my cert and then I'm able to visit the site (and every subsequent visit to the site is successful without having to select the cert each time). This is all good.

But when I install that client cert into GrapheneOS (settings -> encryption & credentials -> install a certificate -> vpn & app user certificate), no browser app seems to recognize that it exists at all. Visiting the website from Vanadium, Fennec, or Mull browsers all return "ERR_BAD_SSL_CLIENT_AUTH_CERT" errors.

Does anyone have experience successfully using an mTLS cert in GrapheneOS?

[SOLVED] Thanks for the solution, @Evkob@lemmy.ca

52
29

I've been hosting Alexandrite as my main web UI for Lemmy because Lemmy's own UI is a bit too basic for my tastes, but Alexandrite hasn't been updated in 7 months and is still missing features like setting a default comment sort type. Can anyone recommend an alternative with a similar look and feel? I use the "list" view on smaller resolutions and the "cards" view on my ultrawide.

53
20
submitted 1 week ago* (last edited 1 week ago) by tomsh@lemmy.world to c/selfhosted@lemmy.world

Hello,

I have a Nextcloud server installed at home that works well on my LAN, but when I try to make the server accessible via a DynDNS service, I cannot connect to it; the request doesn't even reach my server. My question is whether the router immediately blocks the request, because when I enable the router's own remote-access option (it has that separately), I can connect over the DynDNS URL without any issues. Could my ISP (O2) be blocking it? I can confirm that it's not a firewall issue, it's not because I'm connected to the same WiFi as the server, and it's not a port forwarding issue either, as I've gone through all possible options. My router is a Fritzbox 6660, and there are no logs indicating that a request has even come through.

My second question is whether this is even allowed in Germany? Also, I've noticed that my ISP rarely changes my IP address; in fact, I haven't seen it change at all in the past few months, which is strange because in my home country, it changed every 24 hours.

Edit: First, thank you all for your help. I will try your suggestions over the course of this week or month (due to time-related issues :) and will report back with the results. Since I am clearly a noob when it comes to self-hosting and I plan to have only a Nextcloud server for personal use, what is the best way to secure the system in these situations and allow only certain devices to access it over the external network? (if I ever manage to access it at all)

54
86

So I have a retired but still very serviceable PC that I intend to use as my first home server. I have two basic goals in self-hosting:

  1. Host family media through Jellyfin, etc. This would include tv, music, and possibly books as well. Many of these will be managed through the Arr apps.
  2. Degoogle my phone - I'm beginning by replacing Photos with Immich, but hope to also use Home Assistant, back up other phone data such as messages, media, shopping lists, etc. I hope to replace Google storage/backup with Proton Drive.

So the question is what OS should I set up to run that? My proof of concept was an immich container running in xubuntu on an old laptop. I chose Xubuntu because I like the availability of documentation and community support for Ubuntu like distros, but wanted a lower powered alternative for the older device.

It seems to be working well, but I've had a few hiccups trying to update it, and I've heard that general-purpose distros like Ubuntu are not very beginner-friendly for self-hosting once you get deeper into it.

So is it better on the whole for a beginner to have a popular distro with lots of documentation and step by step guides, or to have a purpose-built OS like TrueNAS that might be more straightforward, but with less support?

55
96
submitted 1 week ago by abobla@lemm.ee to c/selfhosted@lemmy.world
56
17

I'm a beginner in networking things but due to my ISP I can only open a certain range of ports in my router to be accessible from the outside of my network (something like ports 11000-11500).

That means I can't open port 443 to access my reverse proxy from the outside. Is it possible to redirect all traffic that's coming from one of the ports in the range to port 443 of my server?

I haven't found that possibility in my router (Fritzbox 7530) so is there a way to do this on my server (running Fedora Server)?

57
52

I’ve been doing POSSE for a while now and it has helped me immensely by saving time and stress.

Basically every time I post something on a 3rd party site I store the content locally. Currently only in Obsidian and some locally cached videos and articles (TubeArchivist and Raindrop)

When I get dragged to the same argument or topic again, I can just grab my old comment, maybe edit/update it a bit and post it.

For some stuff I have longer blog posts I can link to, for some they are images and graphs.

58
26
submitted 1 week ago by tarius@lemmy.ml to c/selfhosted@lemmy.world

Disclaimer: This is for folks who are running services on Windows machines and don't have more than one device. I am neither an expert at self-hosting nor PowerShell. I curated most of this code by doing a lot of Googling and testing over the years. Feel free to correct any mistakes I have in the code.

Background

TLDR: Windows user needs an uptime monitoring solution

Whenever I searched for uptime monitoring apps, most of the ones that showed up were either hosted on Linux or in containers, and all I wanted was a simple exe installer for an app that would send me alerts when a service or the computer was down. Unfortunately, I couldn't find anything. If you know of one, feel free to recommend it.

To get uptime monitoring on Windows, I had to turn to scripting along with a hosted solution (because you shouldn't host the monitoring service on the same device your apps run on, in case the machine goes down). I searched and tested a lot of code to finally end up with the following.

Now I have services running on both Windows and Linux, and I use Uptime Kuma along with the following code for monitoring. But for people who are still on Windows and haven't made the jump to Linux/containers, you could use these scripts to monitor your services from the same device.

Solution

TLDR: A PowerShell script checks the services/processes/URLs/ports and pings the hosted solution to send out notifications.

What I came up with is a PowerShell script that would run every 5 minutes (your preference) using Windows Task Scheduler to check if a Service/Process/URL/Port is up or down and send a ping to Healthchecks.io accordingly.

Prereqs

  1. Sign up on healthchecks.io and create a project

  2. Add integration to your favorite notification method (There are several options; I use Telegram)

  3. Add a Check on Healthchecks.io for each of the services you want to monitor. Ex: Radarr, Bazarr, Jellyfin

    When creating the check, make sure to remember the Slug you used (custom or autogenerated) for that service.

  4. Install latest version of PowerShell 7

  5. Create a PowerShell file in your desired location. Ex: healthcheck.ps1 in the C drive

  6. Go to project settings on Healthchecks.io, get the Ping key, and assign it to a variable in the script

    Ex: $HC= "https://hc-ping.com/<YOUR_PING_KEY>/"

    The Ping key is used for pinging Healthchecks.io based on the status of the service.
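
Putting the pieces together, a single up/down ping is just an HTTP GET to the Ping key URL plus the Check's Slug, with a trailing /fail for down events. A minimal sketch (assumes a Check with the Slug "radarr" already exists):

```
$HC = "https://hc-ping.com/<YOUR_PING_KEY>/"
# Signal "up" for the Check whose Slug is "radarr"
curl "$($HC)radarr"
# Signal "down" for the same Check
curl "$($HC)radarr/fail"
```

The loops below do exactly this, substituting each service name for the Slug.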

Code

  1. There are two ways you can write the code: Either check one service or loop through a list.

Port

  1. To monitor a list of ports, we need to add them to the Services.csv file.

    The names of the services need to match the Slugs you created earlier, because Healthchecks.io uses them to figure out which Check to ping.

Ex:

"Service","Port"
"qbittorrent","5656"
"radarr","7878"
"sonarr","8989"
"prowlarr","9696"

  2. Then copy the following code to healthcheck.ps1:
Import-Csv C:\Services.csv | ForEach-Object {
    Write-Output ""
    Write-Output $($_.Service)
    Write-Output "------------------------"
    $RESPONSE = Test-Connection localhost -TcpPort $($_.Port)
    if ($RESPONSE) {
        Write-Host "$($_.Service) is running"
        curl "$HC$($_.Service)"
    } else {
        Write-Host "$($_.Service) is not running"
        curl "$HC$($_.Service)/fail"
    }
}

The script reads the Services.csv file (Line 1), checks whether each of those ports is listening ($($_.Port) on Line 5), and pings Healthchecks.io (Line 8 or 11) with the matching name ($($_.Service)) based on their status. If the port is not listening, it pings the URL with a trailing /fail (Line 11) to indicate it is down.

Service

  1. The following code is to check if a service is running.

    You can add more services on line 1 in comma separated values. Ex: @("bazarr","flaresolverr")

    This also needs to match the Slug.

$SERVICES = @("bazarr")
foreach($SERVICE in $SERVICES) {
    Write-Output ""
    Write-Output $SERVICE
    Write-Output "------------------------"
    $RESPONSE = Get-Service $SERVICE -ErrorAction SilentlyContinue | Select-Object Status
    if ($RESPONSE.Status -eq "Running") {
        Write-Host "$SERVICE is running"
        curl "$HC$SERVICE"
    } else {
        Write-Host "$SERVICE is not running"
        curl "$HC$SERVICE/fail"
    }
}

The script loops through the list of services (Line 1), checks whether each one is running (Line 6), and pings Healthchecks.io based on its status.
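
One optional extension (not part of the original script) is to attempt a restart when a service is found stopped, so short outages can heal themselves before the next check. A sketch, assuming the scheduled task runs with sufficient privileges:

```
# Sketch: try to restart a stopped service before the next check cycle
$svc = Get-Service "bazarr" -ErrorAction SilentlyContinue
if ($svc -and $svc.Status -ne "Running") {
    Start-Service $svc.Name -ErrorAction SilentlyContinue
}
```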

Process

  1. The following code is to check if a process is running.

    Line 1 needs to match their Slug

$PROCESSES = @("tautulli","jellyfin")
foreach($PROCESS in $PROCESSES) {
    Write-Output ""
    Write-Output $PROCESS
    Write-Output "------------------------"
    $RESPONSE = Get-Process -Name $PROCESS -ErrorAction SilentlyContinue
    if ($null -eq $RESPONSE) {
        # Write-Host "$PROCESS is not running"
        curl "$HC$PROCESS/fail"
    } else {
        # Write-Host "$PROCESS is running"
        curl "$HC$PROCESS"
    }
}

URL

  1. This can be used to check if a URL is responding.

    Line 1 needs to match the Slug

$WEBSVC = "google"
$GOOGLE = "https://google.com"
Write-Output ""
Write-Output $WEBSVC
Write-Output "------------------------"
try {
    $RESPONSE = Invoke-WebRequest -Uri $GOOGLE -SkipCertificateCheck
} catch {
    $RESPONSE = $null
}
if ($RESPONSE.StatusCode -eq 200) {
    # Write-Host "$WEBSVC is running"
    curl "$HC$WEBSVC"
} else {
    # Write-Host "$WEBSVC is not running"
    curl "$HC$WEBSVC/fail"
}

Ping other machines

  1. If you have more than one machine and you want to check their status from the Windows host, you can do so by pinging them.

  2. Here, too, I use a CSV file to list the machines. Make sure the server names match their Slugs.

    Ex:

    "Server","IP"
    "server2","192.168.0.202"
    "server3","192.168.0.203"

Import-Csv C:\Servers.csv | ForEach-Object {
    Write-Output ""
    Write-Output $($_.Server)
    Write-Output "------------------------"
    $RESPONSE = Test-Connection $($_.IP) -Count 1 -ErrorAction SilentlyContinue | Select-Object Status
    if ($RESPONSE.Status -eq "Success") {
        # Write-Host "$($_.Server) is running"
        curl "$HC$($_.Server)"
    } else {
        # Write-Host "$($_.Server) is not running"
        curl "$HC$($_.Server)/fail"
    }
}

Task Scheduler

For the script to execute in intervals, you need to create a scheduled task.

  1. Open Task Scheduler, navigate to the Library, and click on Create Task on the right
  2. Give it a name. Ex: Healthcheck
    1. Choose Run whether user is logged on or not
    2. Choose Hidden if needed
  3. On Triggers tab, click on New
    1. Choose On a schedule
    2. Choose One time and select an older date than your current date
    3. Select Repeat task every and choose the desired time and duration. Ex: 5 minutes indefinitely
    4. Select Enabled
  4. On Actions tab, click on New
    1. Choose Start a program
    2. Add the path to PowerShell 7 in Program: "C:\Program Files\PowerShell\7\pwsh.exe"
    3. Point to the script in arguments: -windowstyle hidden -NoProfile -NoLogo -NonInteractive -ExecutionPolicy Bypass -File C:\healthcheck.ps1
  5. On the rest of the tabs, choose whatever is appropriate for you.
  6. Hit Ok/Apply and exit
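
If you prefer the command line, the same task can be created with schtasks instead of clicking through the UI. A sketch (run from an elevated prompt; the task name and 5-minute interval mirror the steps above, and the escaped quotes around the pwsh path may need adjusting for your shell):

```
schtasks /Create /TN "Healthcheck" /SC MINUTE /MO 5 /TR "\"C:\Program Files\PowerShell\7\pwsh.exe\" -windowstyle hidden -NoProfile -NoLogo -NonInteractive -ExecutionPolicy Bypass -File C:\healthcheck.ps1"
```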

Notification Method

Depending on the integration you chose, set it up using the Healthchecks docs.

I am using Telegram with the following configuration:

Name: Telegram
Execute on "down" events: POST https://api.telegram.org/bot<ID>/sendMessage
Request Body:
```
{
    "chat_id": "<CHAT ID>",
    "text": "🔴 $NAME is DOWN",
    "parse_mode": "HTML",
    "no_webpage": true
}
```
Request Headers: Content-Type: application/json
Execute on "up" events: POST https://api.telegram.org/bot<ID>/sendMessage
Request Body:
```
{
    "chat_id": "<CHAT ID>",
    "text": "🟢 $NAME is UP",
    "parse_mode": "HTML",
    "no_webpage": true
}
```
Request Headers: Content-Type: application/json

Closing

You can monitor up to 20 services for free. You can also self-host a Healthchecks instance (I wouldn't recommend it if you only have one machine, since the monitor would go down along with everything else).

I've been wanting to give something back to the community for a while. I hope this is useful to some of you. Please let me know if you have any questions or suggestions. Thank you for reading!

59
5

I have my own Invidious instance, and I want all the new videos from my subscriptions to automatically get added to a playlist. Anyone know how to do this?

60
11

cross-posted from: https://discuss.tchncs.de/post/21001865

I just installed Piped using podman-compose, but when I open up the frontend in my browser, the trending page just shows the loading icon. The logs aren't really helping; the only error is in piped-backend:

java.net.SocketTimeoutException: timeout
	at okhttp3.internal.http2.Http2Stream$StreamTimeout.newTimeoutException(Http2Stream.kt:675)
	at okhttp3.internal.http2.Http2Stream$StreamTimeout.exitAndThrowIfTimedOut(Http2Stream.kt:684)
	at okhttp3.internal.http2.Http2Stream.takeHeaders(Http2Stream.kt:143)
	at okhttp3.internal.http2.Http2ExchangeCodec.readResponseHeaders(Http2ExchangeCodec.kt:97)
	at okhttp3.internal.connection.Exchange.readResponseHeaders(Exchange.kt:110)
	at okhttp3.internal.http.CallServerInterceptor.intercept(CallServerInterceptor.kt:93)
	at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:109)
	at okhttp3.internal.connection.ConnectInterceptor.intercept(ConnectInterceptor.kt:34)
	at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:109)
	at okhttp3.internal.cache.CacheInterceptor.intercept(CacheInterceptor.kt:95)
	at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:109)
	at okhttp3.internal.http.BridgeInterceptor.intercept(BridgeInterceptor.kt:83)
	at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:109)
	at okhttp3.internal.http.RetryAndFollowUpInterceptor.intercept(RetryAndFollowUpInterceptor.kt:76)
	at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:109)
	at okhttp3.internal.connection.RealCall.getResponseWithInterceptorChain$okhttp(RealCall.kt:201)
	at okhttp3.internal.connection.RealCall.execute(RealCall.kt:154)
	at me.kavin.piped.utils.RequestUtils.getJsonNode(RequestUtils.java:34)
	at me.kavin.piped.utils.matrix.SyncRunner.run(SyncRunner.java:97)
	at java.base/java.lang.VirtualThread.run(VirtualThread.java:329)

Would appreciate it if anyone could help me. I also wasn't sure what info to include, so please ask if there's any more info you need.

61
71

While reading many of the blogs and posts here about self hosting, I notice that self hosters spend a lot of time searching for and migrating between VPS or backup hosting. Being a cheapskate, I have a raspberry pi with a large disk attached and leave it at a relative's house. I'll rsync my backup drive to it nightly. The problem is when something happens, I have to walk them through a reboot or do troubleshooting over the phone or worse, wait until a holiday when we all meet.

What would a solution look like for a bunch of random tech nerds who happen to live near each other to cross host each other's offsite backups? How would you secure it, support it or make it resilient to bad actors? Do you think it could work? What are the drawbacks?

62
23
submitted 2 weeks ago by zako@lemmy.world to c/selfhosted@lemmy.world

I feel that sometimes resolution of sub-domain.duckdns.org and host.sub-domain.duckdns.org fails with an empty result or even a timeout. Tested resolution against Google and Cloudflare DNS servers.

Do you have a similar behaviour?

63
13
submitted 2 weeks ago* (last edited 2 weeks ago) by Sandbag@lemm.ee to c/selfhosted@lemmy.world

I have a spare 3070 GPU as well as 16GB of memory, and my friend has a spare PSU; this part list has everything else I would need plus everything I already have. Is there anything I should tweak or modify, or will this build work? I plan to use it as a headless server.

Thanks for the feedback!

https://pcpartpicker.com/list/2fJJYN

Update:

Use case: I currently run a Docker Swarm cluster with two older OptiPlexes and a Raspberry Pi. Like I said before, I have a spare PSU, GPU, and memory and would rather put it to work than sell it. I would like to add this new PC to my cluster and use it for my home services and also for learning. The only items I would really be buying are the case, CPU, and board. I would like to run some local AI models on this PC as well.

64
26

Well, I set up my email server through Cloudflare and managed to receive emails directly to my basement server. I could live with this and the various security threats coming in through my UniFi. But one thing is for sure: my wife won't have any of it. She's a total backwards-thinking, give-me-Windows-or-I'll-jump kind of gal.

So I found that I could run a dockerized Thunderbird instance and I thought... Wow! I can just log in to it from my computer or my phone. Surely this is it! I can have emails backed up from Gmail to my server and just access my server! And you know what? It works! I can access my Gmail in my browser! It's beautiful!

...But then I log in through my phone, and wow! I can access my Gmail! Through my phone! Except the interface is the same as my desktop. It's literally a VNC to the server. I can log in to it on my desktop and watch the mouse move as I move my finger on my phone! Great party trick, but... the text is microscopic.

So is there another way to get an IMAP and SMTP interface to Gmail, archiving all emails on my own server? I literally don't want any of my emails to live on a Gmail server, but I want to be able to send, receive, and search emails I previously passed through Gmail but that now live on my server.

65
41
submitted 2 weeks ago* (last edited 2 weeks ago) by Shimitar@feddit.it to c/selfhosted@lemmy.world

Hi fellow hosters!

I self-host lots of stuff, starting from the classical *Arrs all the way to SilverBullet and photo services.

I even have two ISPs at home to manage failover in case one goes down; in fact, I rely on my home services a lot, especially when I am not at home.

The main server is a powerful but older laptop whose battery I recently replaced because of its age, but my storage consists of two RAID arrays, which are of course external JBODs with external power supplies.

A few years ago I purchased a cheap UPS, basically this one: EPYC® TETRYS - UPS https://amzn.eu/d/iTYYNsc

Which works just fine and can sustain the two raids for long enough until any small power outage is gone.

The downside is that the battery itself degrades quickly and needs to be replaced every one or two years at most, which is not only a cost but also an inconvenience, because I always find out at the worst possible time (during a power outage), of course!

How do you tackle the issue in your setups?

I should mention that I live in the countryside. Power outages happen maybe once or twice per year, so it's no big deal, just annoying.

66
26
Proxmox rebuild (programming.dev)

Greetings fellow enthusiasts.

I'm going to rebuild my proxmox server and would like to have a few opinions.

First thing is I use my server as a NAS and then run VMs off that.

I have 2 x 20TB in a ZFS mirror, but I'm planning on changing that to 3 x 24TB in RAIDZ1.

I currently have a ZFS pool in proxmox and then add that pool to Open Media Vault.

Issue is, if my OMV breaks and I have to create another VM, I'm pretty sure all that data would become inaccessible to OMV.

I've heard of people creating an NFS share in Proxmox and then passing it through to OMV?

Or should I get an HBA card and pass the disks through to the VM and run it natively within OMV? I'd need to install the ZFS kernel module into OMV as well.

Would like to hear some opinions and tips.

67
170

I assume most users here have some sort of tech/IT/software background. However, I've seen some comments of people who might not have that background (no problem with that) and I wonder if you are self-hosting anything, how did you decide that you would like to self-host?

68
69

In a few months, I will have the space and infrastructure to join the selfhost community. I'm trying to prepare, as I know it can be challenging, but I somehow ended up with more questions than answers.

For context, I want to run a server with torrents, media (Plex, Jellyfin, or something else entirely; I haven't decided yet), photos (Immich, if it's stable, or something else), Rook, Paperless, Home Assistant, Frigate, AdGuard Home... possibly lots more. Also, I will need storage: I'm planning for 3x18TB drives to begin with, but will certainly be adding more later.

My initial intention was to set up a NAS in a Silverstone CS382 (or a Jonsbo N3/N5, if they're available at a reasonable price). I heard good things about Unraid and its capabilities for running Docker. On the other hand, I'm hearing good things about Proxmox or NixOS with NAS software running in a VM too, but for Unraid that seems hacky. Maybe I should run a NAS and a separate server? That'd be more costly and seems like more maintenance work with no real benefit. Maybe I should go with TrueNAS in a VM? If I don't do anything other than NAS, TrueNAS shouldn't be that hard to set up, right?

I'm also wondering whether I should go with Intel for QuickSync, AMD and Arc graphics, or something else entirely. I've read that AV1 is getting popular; is AMD getting more support there? I will buy Intel if it's clearly the better option, but I'm team Red and would prefer AMD.

Also, could anyone with a non-technical SO tell me how they find your self-hosted things? I've read about Cloudflare Tunnels and Tailscale, which will be a breeze for me, but I've got to think about other users as well.

That's another concern for me: am I correct in thinking Tailscale and Cloudflare Tunnels are all I need to access the server remotely? I will probably set up a PiKVM or the RISC one as well; can it be exposed too? I will have a Dream Machine from Ubiquiti; anything that needs to run to access the server, I can run there. I'm not looking to set up anything more complicated like WireGuard; it's too much.

For additional context, I'm a software developer; I know my way around Docker and the command line and consider myself tech-savvy, but I'm not looking to spend every weekend reading changelogs and doing manual updates. I want to have an upgrade path (that's why I'm not going with Synology, for example), but I also don't want to obsess over it. Money isn't much of an issue; I can spare $1-2k on the build, not including the drives.

Any feedback and suggestions appreciated :)

69
60
70
101
71
19

inspired by this post

I have a Mac mini with an infrared receiver on it. I'd love to use it as a TV PC, ideally with an infrared remote too.

I am looking for software recommendations for this, as I've done basically no research.

What's my best option? Linux with Kodi? How would a remote connect, and which software is required for the remote to work?

Thanks!

72
24

Does anyone know of a hosting service that offers Silverblue as a possible choice for OS?

It seems to me that for a server running only docker services the greatly reduced attack surface of an immutable distro presents a definitive advantage.

73
65

Hi all,

I found a hobby in trying to secure my Linux server, maybe even beyond reasonable means.

Currently, my system is heavily locked down with user permissions. Every file has a group owner, and every server application has its own user. Each user will only have access to files it is explicitly added to.

My server is only accessible from LAN or VPN (though I've been interested in hosting publicly accessible stuff). I have TLS certs for almost everything that can use them (albeit self-signed certs, which some people don't like), and SSH is only via SSH keys that are passphrase-protected.

What are some suggestions for things I can do to further improve my security? It doesn't have to be super useful, as this is also fun for me.

Some things in mind:

  • 2 factor auth for SSH (and maybe all shell sessions if I can)
  • look into firejail, nsjail, etc.
  • look into access control lists
  • network namespace and vlan to prevent server applications from accessing the internal network when they don't need to
  • considering containerization, but so far, I find it not worth foregoing the benefits I get of a single package manager for the entire server

Other questions:

  • Is there a way for me to be "notified" if shell access of any form is gained by someone? Or somehow block all shell access that is not 2FA'd?
  • my system currently secures files on the device. But all applications can see all process PIDs. Do I need to protect against this?

threat model

  • attacker gains shell access
  • attacker influences server application to perform unauthorized actions
  • not in my threat model: physical access
74
95

With Chromecasts being discontinued, increase in ads, telemetry, etc I'm wondering if anyone else is going back to old school HTPCs or if they have some other solution to do this in house.

I think the options here are likely:

  1. Rooted streamer (ie Chromecast, firestick)
  2. Android Box
  3. Mini PC

I'm actually most interested in experimenting with #3, a mini PC running KDE Plasma Bigscreen. Most of my self hosted apps can be run in browser windows, and a full desktop (while harder to navigate) is better than the browsers you can get on Android.

What is everyone else, especially the privacy-focused / de-googled self-hosters, doing for their media front end?

75
29

I'm finally taking the leap from upgrading from a media drive sitting in my desktop PC to a self-build NAS. The parts are on their way and I have to figure out what to do when they actually arrive.

Current setup: Desktop PC with a single 20TB media drive (zfs, 15TB in use)

My knowledge: I use Linux as my daily driver, but I'm far from a power user. I can figure out and fix problems with online resources or the kind help of others like you

The goal: I want to move to a small NAS (2 additional 20TB drives are on their way). The system will have 32GB of DDR5 RAM. 1 disk parity for 40TB of usable storage

What will I use it for:

  • Backup for Desktop PC
  • Media server (Jellyfin)
  • Arr stack
  • (other small services in the future?)

My questions:

  1. What OS should I use? The obvious answers being Unraid or TrueNAS. The 40TB of storage (1 disk parity) will likely be enough for a couple of years. So adding additional drives is not planned for some time.

  2. How can I import the data from my current drive to the NAS? I am very new to the topic and my initial searches were not that helpful. With Unraid I should just be able to setup the first two disks and import the data from the other. I am unsure how to accomplish that with TrueNAS.

Some advice and tips would be great. Feel free to ask for more details if I forgot some crucial info.

Thanks for reading!


Selfhosted

39206 readers
611 users here now

A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.

Rules:

  1. Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.

  2. No spam posting.

  3. Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post topic revolves around selfhosting, please include details to make it clear.

  4. Don't duplicate the full text of your blog or github here. Just post the link for folks to click.

  5. Submission headline should match the article title (don’t cherry-pick information from the title to fit your agenda).

  6. No trolling.

Resources:

Any issues on the community? Report it using the report flag.

Questions? DM the mods!

founded 1 year ago
MODERATORS