[-] archomrade@midwest.social 1 points 17 hours ago

I use this for architecture and it's saved me so much time

[-] archomrade@midwest.social 2 points 2 days ago

Who are you responding to, bud?

[-] archomrade@midwest.social 4 points 2 days ago

> Meanwhile billionaires are still laughing at the poors fighting each other, thinking one job is better than another, or vilifying entire professions.

Landlording isn't a job.

[-] archomrade@midwest.social 4 points 1 week ago

Browsing their coms can be a pretty unique experience, especially if you go in with a preformed idea of what their communities are like. There's a huge spread of interests and experiences, and sometimes you can be browsing a niche community and forget that these were the people posting BPB on lemmy.world threads a year ago.

Knowing the academic writings and history they're referencing helps a lot with understanding where they are coming from, even if you may not agree with all of it.

[-] archomrade@midwest.social 1 points 1 week ago

I guess they are speculating that he did? Apparently they found a folder on his computer with pictures of her in underwear that she didn't recognize or remember taking, but she obviously doesn't have any memory of being a part of the sexual abuse since he was allegedly drugging them.

I didn't read the story, my wife did; she was just conveying it to me, so I might have misunderstood something.

[-] archomrade@midwest.social 24 points 1 week ago

This is the most reasonable response.

A lot of people here have long since made up their minds about hexbear, based both on repeated meta posting on the topic and possibly on a bad experience or two with them on a topic they assumed was uncontested but that turns out to be a landmine for communists of a particular bent.

I've personally never had a bad experience with hexbears, possibly because I'm more empathetic to their perspective, but more likely because I know when it's time to disengage. There are users on lemmy who feel strongly about certain topics that are abrasive to hexbear users, who dig in their heels when jeered at (or maybe feel a personal responsibility to stand them down), and they're usually the users here with the most complaints, because the standard reaction from hexbear users (and their mods) is irreverence.

Unlike a lot of liberals coming from reddit, communists often don't have delusions about the neutrality of moderation, so they'll ban you on a whim if they think you're there to stir shit. They use the ban hammer liberally, even with users on their own instance. That's often the biggest complaint with both hexbear and lemmy.ml.

[-] archomrade@midwest.social 6 points 1 week ago

I wish people would stop comparing those uses of copyright to nonprofits like the Internet Archive.

While I understand AI training exemptions to copyright are controversial, I think most people here would side with the IA on ebook lending.

[-] archomrade@midwest.social 16 points 1 week ago

My wife was telling me about this yesterday, apparently it wasn't just the guy's wife but their fucking daughter, too

He started doing this when she was 60 and it went on for 10 years

The only other thing that I can think of that compares to this is Dahmer lobotomizing his victims with a hand drill to make them into sex slaves

[-] archomrade@midwest.social 1 points 1 week ago

Because Israel has already proven themselves untrustworthy, even if what this story is reporting is credible on its own.

Israel has the full force of American military support against a nation and a people who've been systematically oppressed for 70 years. They bear the responsibility for the outcome of this conflict far more than any other.

[-] archomrade@midwest.social 0 points 1 week ago

How can I leave you out of an analysis that is about something you said? You're just being ridiculous now.

[-] archomrade@midwest.social 0 points 1 week ago

What they claimed was "a whole foods plant-based diet is 30% cheaper."

That claim is factually supported by the study, even if you'd prefer to interpret it to mean something else.

[-] archomrade@midwest.social 0 points 1 week ago

I'm not trying to make this about you; I'm just trying to respond to what I think you're arguing, even if you didn't explicitly say it.

10
submitted 3 months ago* (last edited 3 months ago) by archomrade@midwest.social to c/selfhosted@lemmy.world

edit: a working solution is proposed by @Lifebandit666@feddit.uk below:

So you’re trying to get 2 instances of qbt behind the same Gluetun vpn container?

I don’t use Qbt but I certainly have done in the past. Am I correct in remembering that in the gui you can change the port?

If so, maybe what you could do is set up your stack with 1 instance in it, go into the GUI, and change the port on the service to 8000 or 8081 or whatever.

Map that port in your Gluetun config and leave the default port open for QBT, and add a second instance to the stack with a different name and addresses for the config files.

Restart the stack and have 2 instances.
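
For what it's worth, here's a minimal sketch of the compose changes that suggestion describes (the qbittorrent2 name, the 8081 WebUI port, and the second config path are placeholders of mine, not part of the suggestion):

services:
  gluetun:
    # unchanged from the config below, except both WebUI ports are published
    ports:
      - "8081:8081" # qbittorrent (WebUI port changed to 8081 in the GUI first)
      - "8080:8080" # qbittorrent2 (keeps the default WebUI port)

  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent:latest
    container_name: "qbittorrent"
    network_mode: "service:gluetun"
    volumes:
      - /docker/appdata/qbittorrent:/config

  qbittorrent2:
    image: lscr.io/linuxserver/qbittorrent:latest
    container_name: "qbittorrent2"
    network_mode: "service:gluetun"
    volumes:
      - /docker/appdata/qbittorrent2:/config # separate config dir so the instances don't share state
      - /media/nas_share/data:/data

The two instances would presumably also need different incoming torrent ports (the 6881 mappings), since they share gluetun's network namespace.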


Has anyone run into issues with Docker port collisions when trying to run images behind a bridge network (I think I got those terms right)?

I'm trying to run the arr stack behind a VPN container (gluetun, for those familiar), and I would really like to duplicate a container image within the stack (e.g. a separate download client for different types of downloads). As soon as I set the network_mode to 'service' or 'container', I lose the ability to set the public/internal port of the service, which means any image that doesn't allow setting ports from an environment variable is stuck with whatever the default port is within the application.

Here's an example .yml:

services:
  gluetun:
    image: qmcgaw/gluetun:latest
    container_name: gluetun
    cap_add:
      - NET_ADMIN
    environment:
      - VPN_SERVICE_PROVIDER=mullvad
      - VPN_TYPE=[redacted]
      - WIREGUARD_PRIVATE_KEY=[redacted]
      - WIREGUARD_ADDRESSES=[redacted]
      - SERVER_COUNTRIES=[redacted]
    ports:
      - "8080:8080" #qbittorrent
      - "6881:6881"
      - "6881:6881/udp"
      - "9696:9696" # Prowlarr
      - "7878:7878" # Radar
      - "8686:8686" # Lidarr
      - "8989:8989" # Sonarr
    restart: always

  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent:latest
    container_name: "qbittorrent"
    network_mode: "service:gluetun"
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=America/Chicago # "CST/CDT" isn't a valid TZ value; assuming US Central was intended
      - WEBUI_PORT=8080
    volumes:
      - /docker/appdata/qbittorrent:/config
      - /media/nas_share/data:/data

Declaring ports in the qbittorrent service raises an error saying you cannot set ports when using the service network mode. Linuxserver.io has a WEBUI_PORT environment variable, but using it without also setting the service ports breaks it (their documentation says this is due to CSRF issues and port mapping, but then why even include it as a variable?)

The only workaround I can think of is doing a local build of the image that needs duplication, to allow ports to be configured from the environment variables, OR running duplicate gluetun containers for each client, which seems dumb and not at all worthwhile.

Has anyone dealt with this before?

37
Leviton ToS Change (midwest.social)

Anyone else get this email from Leviton about their Decora smart light switches and their changes to the ToS expressly permitting them to collect and use behavioral data from your devices?

FUCK Leviton, long live Zigbee and Zwave and all open-sourced standards


My Leviton

At Leviton, we’re committed to providing an excellent smart home experience. Today, we wanted to share a few updates to our Privacy Policy and Terms of Service. Below is a quick look at key changes:

We’ve updated our privacy policy to provide more information about how we collect, use, and share certain data, and to add more information about our users’ privacy under various US and Canadian laws. For instance, Leviton works with third-party companies to collect necessary and legal data to utilize with affiliate marketing programs that provide appropriate recommendations. As well, users can easily withdraw consent at any time by clicking the links below.

The updates take effect March 11th, 2024. Leviton will periodically send information regarding promotions, discounts, new products, and services. If you would like to unsubscribe from communications from Leviton, please click here. If you do not agree with the privacy policy/terms of service, you may request removal of your account by clicking this link.

For additional information or any questions, please contact us at dssupport@leviton.com.

French translation of this email | Leviton

Copyright © 2024 Leviton Manufacturing Co., Inc., All rights reserved. 201 North Service Rd. • Melville, NY 11747

Unsubscribe | Manage your email preferences

9

I'm not sure where else to go with this, sorry if this isn't the right place.

I'm currently designing a NAS build around an old CMB-A9SC2 motherboard that is self-described as an 'entry level server board'.

So far I've managed to source all the other necessary parts, but I'm having a hell of a time finding the specified RAM that it takes:

  • 204-pin DDR3 UDIMM ECC

As far as I can tell, that type of RAM just doesn't exist... I can find it in SODIMM formats, or I can find it in 240-pin formats, but for the life of me I cannot find all of those specifications in a single module.

I'm about ready to just throw the whole board away, but everything else about the board is perfect....

Has anyone else dealt with this kind of memory before? Is there like a special online store where they sell weird RAM components meant for server builds?

40

Pretend your only other hardware is a repurposed HP Prodesk and your budget is bottom-barrel

46
submitted 7 months ago* (last edited 7 months ago) by archomrade@midwest.social to c/linux@lemmy.ml

I'm currently watching the progress of a 4TB rsync file transfer, and I'm curious why the speeds are less than the theoretical read/write maximum speeds of the drives involved in the transfer. I know there's a lot that can affect transfer speeds, so I guess I'm not asking why my transfer itself isn't going faster. I'm more just curious what the typical bottlenecks are.

Assuming a file transfer between 2 physical drives, and:

  • Both drives are internal SATA III drives with ~~5.0GB/s~~ ~~5.0Gb/s read/write~~ 210MB/s (this was the mistake: I was reading the SATA III protocol speed as the disk speed)
  • files are being transferred using a simple rsync command
  • there are no other processes running

What would be the likely bottlenecks? Could the motherboard/processor limit the speed? The available memory? Or the structure of the files themselves (whether they are fragmented on the volumes or not)?
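
One way to narrow it down (a sketch; device names and paths are placeholders): measure each drive's raw sequential throughput on its own, then compare against what rsync reports.

# raw sequential read speed of each drive
sudo hdparm -t /dev/sdX   # source drive
sudo hdparm -t /dev/sdY   # destination drive

# sequential write speed to the destination, bypassing the page cache
dd if=/dev/zero of=/mnt/dest/testfile bs=1M count=4096 oflag=direct status=progress

# per-disk utilization during the transfer (from the sysstat package); if neither
# disk sits near 100%, the bottleneck is likely elsewhere (CPU, rsync overhead)
iostat -dxm 2

If the transfer is mostly small files, per-file overhead and seeks dominate and rsync will land well below these numbers regardless.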

54
submitted 11 months ago* (last edited 11 months ago) by archomrade@midwest.social to c/linux@lemmy.ml
  • Edit- I set the machine to work last night testing with memtester and badblocks (read-only); both tests came back clean, so I assumed I was in the clear. Today, wanting to be extra sure, I ran a read-write badblocks test and watched dmesg while it worked. I got the same errors, this time on ata3.00. Given that the memory test came back clean, and smartctl came back clean as well, I can only assume the problem is with the ata module, or somewhere between the CPU and the ata bus. I'll be doing a BIOS update this morning and then trying again, but it seems to me like this machine was a bad purchase. I'll see what options I have for replacement.

  • Edit-2- I retract my last statement. It appears that only one of the drives is still having issues: the SSD from the original build. All write interactions with the SSD produce I/O errors (including re-partitioning the drive), while there appear to be no errors reading or writing to the HDD. Still unsure what caused the earlier issue on the HDD. Still conducting testing (running badblocks rw on the HDD; might try seeing if I can reproduce the issue under heavy load; rough commands sketched below). Safe to say the SSD needs repair or to be pitched. I'm curious if the SSD got damaged, which would explain why the issue remains after being zeroed out and re-written, and why the HDD now seems fine. Or maybe multiple SATA ports have failed now?
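
Roughly the invocations described in these edits (flags and device names are approximations; the -w run is destructive):

# RAM test (size and loop count are placeholders)
sudo memtester 1024M 2

# non-destructive, read-only surface scan
sudo badblocks -sv /dev/sdX

# DESTRUCTIVE read-write surface test; wipes the drive
sudo badblocks -wsv /dev/sdX

# SMART health summary
sudo smartctl -a /dev/sdX

# follow kernel messages for ata/I/O errors while the tests run
sudo dmesg -w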


I have no idea if this is the right forum for these types of questions, but it felt a bit like a murder mystery that might be fun to solve. Please let me know if this type of post is unwelcome and I will immediately take it down and return to lurking.

Background:

I am very new to Linux. Last week I purchased a cheap refurbished headless desktop so I could build a home media server, as well as play around with VMs and programming projects. This is my first ever exposure to Linux, but I consider myself otherwise pretty tech-savvy (I dabble in Python scripting in my spare time, but not much beyond that).

This week, I finally got around to getting the server software installed and operating (see details of the build below). Plex was successfully pulling from my media storage and streaming with no problems. As I was getting the docker containers up, I started getting "not enough storage" errors for new installs. I tried purging docker a couple of times and still couldn't proceed, so I attempted to expand the virtual storage in the VM. I definitely messed this up: immediately, Plex stopped working and no files were visible on the share anymore. To me, it looked as if it had tried taking storage from the SMB share to add to the system-files partition. I/O errors on the OMV virtual machine for days.

Take two.

I got a new HDD (so I could keep working as I tried recovery on the SSD) and got everything back up (created a whole new VM for docker and OMV). I gave the docker VM more storage this time (I think I was just reckless with my package downloads anyway) and made sure that the SMB share was properly mounted. As I got the download client running (it made a few downloads), I noticed the OMV virtual machine redlining on memory from the Proxmox window. Thought: uh oh, I should fix that. I tried taking everything down so I could reboot OMV with more memory allocation, but the shutdown process hung on the OMV VM. I made sure all my devices on the network were disconnected, then stopped the VM from the Proxmox window.

On OMV reboot, I noticed all kinds of I/O errors on both the virtual boot drive and the mounted SSD. I could still see files in the share on my LAN devices, but any attempt to interact with the folder stalled and would error out.

I powered down all the VMs and now I'm trying to figure out where I went wrong. I'm tempted to abandon the VMs and install it all on a bare Ubuntu OS, but I like the flexibility of having VMs to spin up new OSes and try things out. The added complexity is obviously over my head, but if I can understand it better I'll give it another go.

Here's the build info:

Build:

  • HP ProDesk 600 G1
  • Intel i5
  • upgraded 32GB aftermarket DDR3 1600MHz Patriot RAM
  • KingFlash 250GB SSD
  • WD 4TB SSD (originally an NTFS drive from my Windows PC with ~2TB of existing data)
  • WD 4TB HDD (bought this after the SSD corrupted, so I could get the server back up while I dealt with the SSD)
  • 500Mbps ethernet connection

Hypervisor

  • Proxmox (latest), Ubuntu kernel
  • VM110: Ubuntu-22.04.3-live server amd64, OpenMediaVault 6.5.0
  • VM130: Ubuntu-22.04.3-live, docker engine, portainer
    • Containers: Gluetun, qBittorrent, Sonarr, Radarr, Prowlarr
  • LXC101: Ubuntu-22.04.3, Plex Server
  • Allocations
    • VM110: 4GB memory, 2 cores (ballooning and swap ON)
    • VM130: 30GB memory, 4 cores (ballooning and swap ON)

Shared Media Architecture (attempt 1)

  • Direct-mounted the WD SSD to VM110. Partitioned and formatted the file system inside the GUI, created a folder share, and set permissions for my share user. Shared it as an SMB/CIFS share.
  • bind-mounted the shared folder to a local folder in VM130 (/media/data)
  • passed the mounted folder to the necessary docker containers as volumes in the docker-compose file (e.g. - volumes: /media/data:/data, etc.; sketched below)
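
Concretely, the plumbing looked something like this (IP, share name, and credentials file are placeholders):

# /etc/fstab line on VM130 mounting the OMV SMB share at /media/data
//192.168.x.x/data  /media/data  cifs  credentials=/home/user/.smbcreds,uid=1000,gid=1000  0  0

# then in docker-compose.yml, each container gets the mount as a volume:
#   volumes:
#     - /media/data:/data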

No shame in being told I did something incredibly dumb; I'm here to learn, anyway. Maybe just not learn in a way that destroys 6 months of DVD rips in the process.

