matthewc@lemmy.self-host.site 3 points 1 year ago

I spin up a lot of Docker containers with large data sets locally.

matthewc@lemmy.self-host.site 7 points 1 year ago

Developer here. Completely depends on your workflow.

I went base model and the only thing I regret is not getting more RAM.

Speeds have been phenomenal when the binaries are native. Speeds have been good when the binaries are running through Rosetta.
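
If you want to verify whether part of your toolchain is actually running native or being translated, here's a quick sketch of a check (this just reads Apple's sysctl.proc_translated flag, which reports 1 for a process under Rosetta 2 translation and 0 for native; the flag doesn't exist on Intel Macs):

```python
import ctypes
import ctypes.util
import platform

def running_under_rosetta() -> bool:
    """True if this process is being translated by Rosetta 2."""
    libc = ctypes.CDLL(ctypes.util.find_library("c"))
    result = ctypes.c_int(0)
    size = ctypes.c_size_t(ctypes.sizeof(result))
    ret = libc.sysctlbyname(
        b"sysctl.proc_translated",
        ctypes.byref(result),
        ctypes.byref(size),
        None,
        0,
    )
    if ret != 0:
        # sysctl not present: Intel Mac (or non-macOS), so not Rosetta
        return False
    return result.value == 1

if __name__ == "__main__":
    # Under Rosetta, the machine type reported to the process is x86_64
    print("machine reported to this process:", platform.machine())
    print("translated by Rosetta:", running_under_rosetta())
```

Run it with whichever Python install you actually use day to day; an x86_64 Python on Apple Silicon will report translation, an arm64 one won't.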

The specs you’re wavering between are extremely workflow specific. You know if your workflow requires the 16 extra GPU cores. You know if your workflow requires another 64 GB of RAM.
