Machine Learning (programming.dev)
[-] marcos@lemmy.world 55 points 5 months ago

No, this is because the testing set can be derived from the training set.

Overfitting alone can't get you to 1.
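To illustrate the point: if the test examples overlap the training examples (data leakage), even a "model" that does nothing but memorize its training data scores a perfect 1.0. A minimal stdlib-only sketch (all names here are hypothetical, invented for illustration):

```python
# Hypothetical "model" that memorizes its training data outright.
# With leakage (test set drawn from the training set), pure
# memorization already reaches accuracy 1.0 -- no learning required.

def train(examples):
    # "Training" is just storing (input, label) pairs in a dict.
    return dict(examples)

def accuracy(model, examples):
    correct = sum(1 for x, y in examples if model.get(x) == y)
    return correct / len(examples)

train_set = [(1, "odd"), (2, "even"), (3, "odd"), (4, "even")]
model = train(train_set)

print(accuracy(model, train_set))     # 1.0 -- "test" set equals the training set
print(accuracy(model, [(5, "odd")]))  # 0.0 -- truly unseen input, memorization fails
```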

[-] victorz@lemmy.world 10 points 5 months ago

So, as an ELI5: you basically have to "ask" it stuff it has never seen before? AI came after my time in higher education.

[-] marcos@lemmy.world 20 points 5 months ago

Yes.

You train it on some data, and ask it about different data. Otherwise it just hard-codes the answers.

[-] Morphit@feddit.uk 7 points 5 months ago

They're just like us.

[-] victorz@lemmy.world 1 points 5 months ago

Gotcha, thank you!

[-] ArtVandelay@lemmy.world 3 points 5 months ago

Yes, it's called a train-test split, and it's often 80/20 or thereabouts.
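Libraries like scikit-learn provide this as `sklearn.model_selection.train_test_split`; a minimal stdlib-only sketch of the same idea (the function name and parameters here are just illustrative):

```python
import random

def train_test_split(data, test_fraction=0.2, seed=0):
    # Shuffle a copy, then carve off the last 20% as the held-out test set.
    rng = random.Random(seed)
    shuffled = data[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - test_fraction))
    return shuffled[:cut], shuffled[cut:]

data = list(range(100))
train_set, test_set = train_test_split(data)
print(len(train_set), len(test_set))  # 80 20
```

The model only ever sees `train_set`; `test_set` is reserved for measuring how well it generalizes.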

[-] sevenapples@lemmygrad.ml 3 points 5 months ago

It can if you don't do a train-test split.

But even if you consider the training set only, having zero loss is definitely a bad sign.
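A classic example of why zero training loss by itself proves nothing: a 1-nearest-neighbour classifier always has zero training loss, since every training point is its own nearest neighbour. A hedged stdlib-only sketch (names are hypothetical):

```python
# A 1-nearest-neighbour classifier: predict the label of the closest
# training point. Training loss is zero by construction, which says
# nothing about performance on held-out data.

def predict_1nn(train_pts, x):
    # Return the label of the training point nearest to x.
    return min(train_pts, key=lambda p: abs(p[0] - x))[1]

train_pts = [(0.0, "a"), (1.0, "b"), (2.0, "a"), (3.0, "b")]

train_err = sum(predict_1nn(train_pts, x) != y for x, y in train_pts)
print(train_err)  # 0 -- zero training loss, guaranteed

# A held-out point can still be misclassified:
print(predict_1nn(train_pts, 2.4))  # "a" (nearest point is 2.0), even if the true label is "b"
```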

[-] GissaMittJobb@lemmy.ml 2 points 5 months ago
this post was submitted on 26 Mar 2024
377 points (96.5% liked)

Programmer Humor