submitted 26 Jul 2024 by ArcticDagger@feddit.dk to c/science@lemmy.world
0laura@lemmy.world 2 points 2 months ago* (last edited 2 months ago)

no, not really. the improvement gets less noticeable as it approaches the limit, but I'd say the speed at which it improves is still the same, especially for smaller models and context window sizes. there are now models comparable to ChatGPT, or maybe even GPT-4 (I don't remember which), with a 128k-token context window that you can run on a GPU with 16GB of VRAM. 128k tokens is around 90k words, I think. that's more than 4 Bee Movie scripts. it can "comprehend" all of that at once.
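a quick sketch of that tokens-to-words arithmetic, using the common rule of thumb of roughly 0.75 English words per token (the exact ratio depends on the tokenizer and the text, so treat this as a ballpark, not a spec):

```python
# ballpark conversion from a model's context window (in tokens) to words.
# assumption: ~0.75 words per token, a common rule of thumb for English text;
# the real ratio varies with the tokenizer and the content.

WORDS_PER_TOKEN = 0.75  # rough average for English prose


def tokens_to_words(tokens: int, words_per_token: float = WORDS_PER_TOKEN) -> int:
    """Estimate how many English words fit in a given token budget."""
    return round(tokens * words_per_token)


context_window = 128_000  # tokens
print(tokens_to_words(context_window))  # ~96,000 words, i.e. "around 90k words"
```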
