[-] manitcor@lemmy.intai.tech 36 points 1 year ago

plenty of content on my screen, though I admit I'm posting less everywhere right now while hunting for a job.

[-] manitcor@lemmy.intai.tech 12 points 1 year ago* (last edited 1 year ago)

the computer wrote the 2nd one by accident when someone asked it to bake a cake.

[-] manitcor@lemmy.intai.tech 20 points 1 year ago

updates are federated, so really it's just a matter of the client changing behavior a bit.

[-] manitcor@lemmy.intai.tech 1 point 1 year ago

switching to a Khadas VIM4 myself.

[-] manitcor@lemmy.intai.tech 10 points 1 year ago

was about to include it in my stack, guess i won't be now.

[-] manitcor@lemmy.intai.tech 2 points 1 year ago

should be fine. if you don't like how warm it gets, a set of small heatsinks for amplifiers will run you a few bucks and takes all of 10 seconds to install.

[-] manitcor@lemmy.intai.tech 5 points 1 year ago* (last edited 1 year ago)

i like both the argon and the simple heatsink setups; either works great. i did end up adding an additional heatsink to the argon, since the flat case does not provide great heat exchange in an enclosed space.

you can do passive cooling as well, it just all depends on how hot the location gets.

[-] manitcor@lemmy.intai.tech 5 points 1 year ago* (last edited 1 year ago)

old floppy disks of different sizes. the bottom looks like 5 1/4"; the ones on top with the metal centers are all 3 1/2". Both standards needed sleeves to be read. Many of these are likely trash now, but that wouldn't stop me from trying to load them.

[-] manitcor@lemmy.intai.tech 7 points 1 year ago

why do this to yourself?

[-] manitcor@lemmy.intai.tech 0 points 1 year ago

as time goes on i think techniques that mark human-made content will become more practical.

the only reason that read as "off" is because the poster did not put any time into it, probably just a simple question in a default chat somewhere. well-made systems tuned to their use are going to be surprisingly effective.

[-] manitcor@lemmy.intai.tech 3 points 1 year ago

i think it may have been. might as well; even if you put in time writing, it's likely to be assumed AI anyway, especially as it improves.

[-] manitcor@lemmy.intai.tech 0 points 1 year ago

so wait, is it better to say these people are knowingly choosing to be on these platforms and requiring others to do the same to communicate with them? i'm not even sure what you're saying, but it does seem we may be at cross purposes.

we have come a long way since breaking up the Bells, wow.

is your instance seeing everything?

i'm not supporting OP's post. if you look at the thread, this is a reply to someone trying to equate the choices people are making with the lack of choice people have in the carbon argument: https://lemmy.intai.tech/comment/632241

which to me waters down the carbon argument, where people truly have no choice, versus having put themselves in a mental box for whatever reason.

-2
You can do anything (www.zombo.com)
4
Supplies! (media.tenor.com)
1
GPT-4's details are leaked. (threadreaderapp.com)

cross-posted from: https://lemmy.intai.tech/post/72919

Parameters count:

GPT-4 is more than 10x the size of GPT-3. We believe it has a total of ~1.8 trillion parameters across 120 layers. Mixture of Experts: confirmed.

OpenAI was able to keep costs reasonable by using a mixture-of-experts (MoE) model. They use 16 experts within their model, each with ~111B parameters for the MLP. Two of these experts are routed to per forward pass.
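The routing described above (16 experts, top-2 active per forward pass) can be sketched in miniature. This is a hypothetical toy illustration, not OpenAI's implementation; all names, sizes, and the softmax gating are assumptions, and the expert "MLPs" here are single weight matrices instead of ~111B-parameter networks:

```python
import numpy as np

# Toy top-2 mixture-of-experts routing, using the counts from the post:
# 16 experts, 2 routed to per forward pass. Everything else is made up.

rng = np.random.default_rng(0)
D = 8            # toy hidden size
N_EXPERTS = 16   # number of experts per the post
TOP_K = 2        # experts activated per forward pass

# Each "expert" is a tiny stand-in for an MLP; the router is a linear layer.
experts = [rng.standard_normal((D, D)) / np.sqrt(D) for _ in range(N_EXPERTS)]
router = rng.standard_normal((D, N_EXPERTS)) / np.sqrt(D)

def moe_forward(x):
    """Route token vector x to its top-2 experts and mix their outputs."""
    logits = x @ router                        # (N_EXPERTS,) routing scores
    top = np.argsort(logits)[-TOP_K:]          # indices of the 2 best experts
    w = np.exp(logits[top] - logits[top].max())
    w /= w.sum()                               # softmax over the chosen 2
    # Only the selected experts run, which is the cost saving MoE provides.
    return sum(wi * (x @ experts[i]) for wi, i in zip(w, top))

y = moe_forward(rng.standard_normal(D))
```

The point of the design is the cost profile: parameter count scales with the number of experts, but per-token compute only scales with the 2 experts actually executed.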

Related Article: https://lemmy.intai.tech/post/72922

1

cross-posted from: https://lemmy.world/post/1005176

Source code for the original Far Cry, released back in 2004, has popped up online. Entitled "Far Cry 1.34 Complete", the…

5

cross-posted from: https://lemmy.intai.tech/post/43759

cross-posted from: https://lemmy.world/post/949452

OpenAI's ChatGPT and Sam Altman are in massive trouble. OpenAI is getting sued in the US for illegally using content from the internet to train their LLMs, or large language models.


manitcor

joined 1 year ago