[-] xtremeownage@lemmyonline.com 1 points 3 months ago

46 watts... but yea, I expected lower.

But I suppose when it's spinning 4x Seagate Exos drives, they like their juice.

It apparently doesn't allow HDD hibernation while containers are running, and it doesn't appear to use any sleep states either.

[-] xtremeownage@lemmyonline.com 1 points 3 months ago

Key word is idle.

Synology and HDD hibernation don't really go together very well. If you have containers running, it won't let the HDDs hibernate at all. And I have a MinIO instance running.

[-] xtremeownage@lemmyonline.com 1 points 3 months ago

Unless you have a wall of old Nokia phones... it should be quite scary.

[-] xtremeownage@lemmyonline.com 16 points 3 months ago

Eh... Mutually assured destruction.

It's a very scary phrase.

[-] xtremeownage@lemmyonline.com 3 points 4 months ago

That is a pretty good deal. Better start picking up some MD1200s!

[-] xtremeownage@lemmyonline.com 1 points 6 months ago

Nope, not at all.

Behind every success story is a lot of failures (or really rich parents).

[-] xtremeownage@lemmyonline.com 4 points 7 months ago

I agree; I'd be picking up a bunch of those if that were the case.

[-] xtremeownage@lemmyonline.com 13 points 7 months ago

The ESP32-C6 (which supports Zigbee) is pretty cheap.

[-] xtremeownage@lemmyonline.com 8 points 7 months ago

No.

I wouldn't vote for Hillary, period, for many reasons. Her sex is not one of them.

A random fact: I actually did vote for a woman for president. But it damn sure was not Hillary. There is too much stink associated with her, too much shit swept under the rug.

[-] xtremeownage@lemmyonline.com 3 points 10 months ago

The other admin now "owns" this instance and hosts it in the EU.

I am just a glorified moderator now.

[-] xtremeownage@lemmyonline.com 12 points 10 months ago

I'd say you have a small instance.

I used to host lemmyonline.com, which had somewhere around 50-100 users.

It used upwards of 50-80GB of disk space, and a pretty good chunk of bandwidth. CPU/memory requirements were not very high, though.

[-] xtremeownage@lemmyonline.com 6 points 10 months ago

I'd gladly donate a few TB, but I'm not about to fill my entire array with books I'll never read...

193
submitted 1 year ago* (last edited 1 year ago) by xtremeownage@lemmyonline.com to c/technology@lemmy.world

> Both CloudNordic and Azero said that they were working to rebuild customers’ web and email systems from scratch, albeit without their data.

Yea... don't bother. But do expect to hear from my lawyers...

> CloudNordic said that it “had no knowledge that there was an infection.” CloudNordic and Azero are owned by Denmark-registered Certiqa Holding, which also owns Netquest, a provider of threat intelligence for telcos and governments.

Edit-

https://www.cloudnordic.com/

419

Knock on wood, I have not used them in quite a while.

26
submitted 1 year ago* (last edited 1 year ago) by xtremeownage@lemmyonline.com to c/selfhosted@lemmy.world

My adventures in building out a ceph cluster for proxmox storage.

As a random note, my instance (lemmyonline.com) is hosted on that very ceph cluster.

239

I can't say for sure... but there is a good chance I might have a problem.

The main picture attached to this post is a pair of dual bifurcation cards, each with a pair of Samsung PM963 1T enterprise NVMes.

They are going into my R730XD, which... is getting pretty full. This will fill up the last empty PCIe slots.

But, knock on wood, my R730XD supports bifurcation! LOTS of bifurcation.

As a result, it now has more HDDs and NVMes than I can count.

What's the problem, you ask? Well, that is just one of the many servers I have lying around here, all completely filled with NVMe and SATA SSDs...

Figured I would share. Seeing a bunch of SSDs is always a pretty sight.

And, as of two hours ago, my Lemmy instance was migrated to these new NVMes, completely transparently too.

30

So, last month, my Kubernetes cluster decided to eat shit while I was out at a work conference.

When I returned, I decided to try something a tad different, by rolling out Proxmox to all of my servers.

Well, I am a huge fan of hyper-converged, clustered architectures for my home network/lab, so I decided to give ceph another try.

I have used it in the past with relative success under Kubernetes (via Rook/Ceph), and I currently leverage Longhorn.

Cluster Details

  1. Kube01 - Optiplex SFF
  • i7-8700 / 32G DDR4
  • 1T Samsung 980 NVMe
  • 128G KIOXIA NVMe (boot disk)
  • 512G SATA SSD
  • 10G via ConnectX-3
  2. Kube02 - R730XD
  • 2x E5-2697A v4 (32c / 64t)
  • 256G DDR4
  • 128T of spinning disk
  • 2x 1T 970 Evo
  • 2x 1T 970 Evo Plus
  • A few more NVMes and SATA
  • Nvidia Tesla P4 GPU
  • 2x Google Coral TPU
  • 10G Intel networking
  3. Kube05 - HP Z240
  • i5-6500 / 28G RAM
  • 2T Samsung 970 Evo Plus NVMe
  • 512G Samsung boot NVMe
  • 10G via ConnectX-3
  4. Kube06 - Optiplex Micro
  • i7-6700 / 16G DDR4
  • Liteon 256G SATA SSD (boot)
  • 1T Samsung 980

Attempt number one.

I installed and configured ceph using Kube01 and Kube05.
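For anyone wanting to follow along, the Proxmox side of that is only a handful of commands. This is a rough sketch from memory; the cluster network and device names below are placeholders, and older PVE releases spell some of these subcommands differently:

```
# Install the ceph packages on each node, then initialize the
# cluster network (placeholder CIDR -- use your storage network).
pveceph install
pveceph init --network 10.0.0.0/24

# A monitor and manager on each participating node.
pveceph mon create
pveceph mgr create

# One OSD per data disk (placeholder device path).
pveceph osd create /dev/nvme0n1
```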

I used a mixture of 5x 970 Evo / 970 Evo Plus / 980 NVMe drives, and expected it to work pretty decently.

It didn't. The IO was so bad it was causing my servers to crash.

I ended up removing ceph, and using LVM / ZFS for the time being.

Here are some benchmarks I found online:

https://docs.google.com/spreadsheets/d/1E9-eXjzsKboiCCX-0u0r5fAjjufLKayaut_FOPxYZjc/edit#gid=0

https://www.proxmox.com/images/download/pve/docs/Proxmox-VE_Ceph-Benchmark-202009-rev2.pdf

The TL;DR after lots of research: don't use consumer SSDs. Only use enterprise SSDs.
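The short version of why: ceph's journal/WAL does small synchronous writes, and consumer drives without power-loss protection fall apart under exactly that pattern. If you want to check a candidate drive yourself, the usual test looks something like this (note it writes to the raw device, so it destroys data; /dev/sdX is a placeholder):

```
# 4k synchronous writes at queue depth 1 -- roughly the pattern ceph's
# journal/WAL generates. Enterprise drives with power-loss protection
# hold up here; consumer drives often drop to a few hundred IOPS.
fio --name=journal-test --filename=/dev/sdX \
    --direct=1 --sync=1 --rw=write --bs=4k \
    --numjobs=1 --iodepth=1 \
    --runtime=60 --time_based --group_reporting
```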

Attempt / Experiment Number 2.

I ended up ordering 5x 1T Samsung PM863a enterprise SATA drives.

After reinstalling ceph, I put three of the drives into Kube05, and one more into Kube01 (no ports/power for adding more than a single SATA disk...).

And put the cluster together. At first, performance wasn't great... (but it was still 10x the performance of the first attempt!). But, after updating the crush map to set the failure domain to OSD rather than host, performance picked up quite dramatically.

This is due to the current imbalance of storage per host: Kube05 has 3T of drives, Kube01 has 1T, and there is no storage elsewhere. With the failure domain set to host, ceph has to place every replica on a different host, which it can't do with only two hosts holding OSDs; per-OSD placement sidesteps that.
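For reference, the change itself is just a new CRUSH rule; the rule and pool names here are made up, so substitute your own:

```
# Replicated rule that spreads replicas across OSDs instead of hosts.
ceph osd crush rule create-replicated replicated-osd default osd

# Point the pool at the new rule (pool name is a placeholder).
ceph osd pool set vm-pool crush_rule replicated-osd
```

Fair warning: with an OSD-level failure domain, multiple replicas of the same object can land on one box, so losing a single host can take out every copy. Fine for this interim setup; not something to keep once the storage is balanced.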

BUT... since this was a very successful test, and it was able to deliver enough IOPS to run my I/O-heavy Kubernetes workloads... I decided to take it up another step.

A few notes:

Can you guess which drive is the Samsung 980, and which drives are enterprise SATA SSDs? (Look at the latency column.)
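If you want to pull the same kind of per-OSD latency numbers on your own cluster, this is probably the quickest way:

```
# Per-OSD commit/apply latency in milliseconds -- consumer
# drives tend to stick out immediately.
ceph osd perf
```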

Future - Attempt #3

The next goal is to properly distribute OSDs.

Since I am maxed out on the number of 2.5" SATA drives I can deploy... I picked up some NVMe:

5x 1T Samsung PM963 M.2 NVMe.

I picked up a pair of dual-slot half-height bifurcation cards for Kube02. This will allow me to place four of these into it, with dedicated bandwidth to the CPU.

The remaining one will be placed inside Kube01, to replace the 1T Samsung 980 NVMe.

This should give me a pretty decent distribution of data, and with all enterprise drives, it should deliver pretty acceptable performance.
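Once the new OSDs are in, something like the following should confirm the data is actually balanced across hosts (and at that point, the failure domain can go back to host):

```
# Per-OSD utilization, grouped by host in the CRUSH tree.
ceph osd df tree

# Overall health and rebalance/backfill progress.
ceph -s
```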

More to come....

236
submitted 1 year ago* (last edited 1 year ago) by xtremeownage@lemmyonline.com to c/selfhosted@lemmy.world

Since my doctor recommended that I put more fiber in my diet, I decided to comply.

So... in a few hours, I will be pulling a few OS2 runs across my house, with 10G LR SFP+ modules.

Both runs will be from my rack to the office. One will be dedicated to the incoming WAN connection (coupled with the existing fiber that... I don't want to re-terminate). The other will replace the 10G copper run already in place, to save 10 or 20 watts of energy.

This was sparked by a 10GBase-T module overheating and becoming very intermittent earlier this week, causing a bunch of issues. After I replaced the module, the links came back up and started working normally... but... yea, I need to replace the 10G copper links.

With only twinax and fiber 10G links plugged into my 8-port aggregation switch, it is only pulling around 5 watts, which is outstanding, given that a single 10GBase-T module uses more than that.

Edit:

Also, I ordered the wrong modules. BUT... the hard part of running the fiber is done!

138
Single Threaded Workload (lemmyonline.com)

Yup, always gotta be that one single-threaded program. In this case, it appears to be Frigate.

1
Steam Code Giveaway! (lemmyonline.com)

Giveaway #1 was completed in !gaming@beehaw.org

Giveaway #2 is in !lemmyonline@lemmyonline.com at THIS POST

Results will be announced Monday around noon CST

2
submitted 1 year ago* (last edited 1 year ago) by xtremeownage@lemmyonline.com to c/homelab@lemmy.ml

House/city got hit by a 115 mph wind gust, taking out power to most of the city last night.

Knocked down all of my trees, and wreaked havoc on the city. Messed up roofs everywhere, and most importantly, no power for anyone!

https://www.koco.com/article/oklahoma-severe-storms-weather-tornado-hail-saturday/44233131

My city/utility estimates it may take up to a week to get power restored. But, thankfully, I have spent a lot of time preparing for this event.

As my MAIN house batteries were only charged up to 50-60% when the wind hit, AND I had a misconfiguration on my primary inverter causing them to shut down at 20% (rather than 10%), the first night the entire house ran on battery from 11pm up to 7:20am. This includes running the A/C and running my rack of servers. All of it.

I have a constant 500w load from my servers. I have Optiplex Micros in my Kubernetes farm, and an R730XD pushing 256G of RAM, 32c/64t, a Tesla P4 GPU, and a whopping 130TB of raw storage (before redundancy).

So, at 7:20am, the main inverter shut down after hitting its battery shutoff limit. Of course, this means my A/C and fan shut off, causing me to wake up pretty quickly. The majority of the house is out... but not my rack!

My rack is still plugged into the homemade 2.4kWh UPS I built a few years back.
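(Quick math on that: at the rack's steady ~500w draw, 2.4kWh pencils out to 2400 / 500 ≈ 4.8 hours of runtime, which is plenty to bridge the gap from the inverter cutting off until the generator or the sun takes over.)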

So, after getting up and grabbing my coffee, I went ahead and plugged in the generator, which got everything turned back on until the sun came back out. Once the sun came out, the steady 3-5kW of solar PV power kept everything running, and put its extra juice back into the batteries for later.

During the day, the entire house and rack of servers were able to run off of pure sunshine without issue.

Around an hour or so ago, when the sunlight went away, I went ahead and plugged in the generator to get the batteries topped off for another night without grid power.

A Home Assistant automation yells at me when the batteries are full, to tell me to go turn off the generator. When the batteries get low, it yells at me to go start the generator. (Literally... it talks to me via TTS.)
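For the curious, the announcement boils down to a TTS service call. Something like this curl (host, token, and entity names are placeholders, and it assumes the stock google_translate TTS integration) fires the same thing through Home Assistant's REST API:

```
# Hypothetical example -- placeholders throughout. Fires a TTS
# announcement via Home Assistant's REST API, same as the automation.
curl -X POST \
  -H "Authorization: Bearer ${HA_TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{"entity_id": "media_player.office_speaker", "message": "Batteries are full. Go turn off the generator."}' \
  http://homeassistant.local:8123/api/services/tts/google_translate_say
```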

That being said, I have 5 gallons of gas in the generator, and another 5 gallons on standby. And I can always go get more if needed. But that should be enough to keep my entire lab running for the rest of the week, between generator power and PV/solar energy.

If you are interested in an overview of how my solar setup works, it's all documented here:

https://static.xtremeownage.com/pages/Projects/Solar-Project/

Edit: also, if https://xtremeownage.com/ and https://lemmyonline.com/ are still working, my lab is still powered.

