I’m sure this won’t be a popular comment, but I can see how having a motivated learner in a 1:1 lesson with an AI might be better for that person than sitting in a class with 35 other people, most of whom don’t want to be there, running through a syllabus that the teacher may not even like, at the pace of the slowest kid in class.
The problem isn’t one of motivated learners being forced to drag their heels amidst their unmotivated peers.
The problem is that the core function of LLMs, the whole basis for their existence, is completely and entirely truth-agnostic. Not only do they not know what is the truth and what is not, they don’t even know what the difference is. LLMs are very good at guessing what word looks like it should come next, they can make very convincing statements, they can be very persuasive, but those words don’t MEAN anything to the machine and they are made without any consideration for accuracy.
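That "guess the next word from statistics" idea can be shown in miniature. Here's a toy sketch (nothing like a production LLM in scale, and the corpus is invented for illustration): a bigram model that picks the next word purely from co-occurrence counts. Note that it has no representation of truth at all, only of which word tends to follow which, so a true continuation and a false one look identical to it.

```python
import random
from collections import defaultdict

# Invented toy corpus containing both true and false statements.
corpus = (
    "the earth is round . the earth is flat . "
    "birds are real . birds are drones ."
).split()

# Count how often each word follows each other word (a bigram model).
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word(prev):
    """Sample the next word in proportion to how often it followed `prev`."""
    options = counts[prev]
    words = list(options)
    weights = [options[w] for w in words]
    return random.choices(words, weights=weights)[0]

# After "is", the model is equally happy to say "round" or "flat":
# both are grammatical continuations, and frequency is all it knows.
print([next_word("is") for _ in range(5)])
```

Real LLMs replace the counting table with a neural network over huge contexts, but the training objective is the same shape: predict the next token, with no term anywhere in it for "is this statement true".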
They are literally making everything up on the basis of whether or not it sounds good, and every crackpot bullshit conspiracy theory, from flat-earth dumbshittery to very sincere-sounding arguments that birds aren’t real, has been included in the training data. It all linguistically SOUNDS fine, so to an LLM it’s fair game!
And even curating your training data to ONLY contain things like textbooks wouldn’t cure the problem, because LLMs just aren’t capable of knowing what those words mean. It’s why they can’t do even basic math, the one thing computers have been incredible at!
Using an LLM as an actual teacher is genuinely worse than no education at all, because it will just create a generation that, instead of knowing nothing, will very confidently be wrong all the time.
How fast a kid learns the 5th-grade syllabus is far less important than how well they learn to get along with other kids, form friendships, and basically learn how to live in society. Cooperation, conflict resolution, public speaking, group bonding, etc. You can’t learn any of these things from an AI.
Also learning from other humans is part of the human experience and tradition predating agriculture and the wheel. We’ve always taught each other things.
You can’t do anything alone. Isolation will be the downfall of society as we know it. I hope AI isn’t leading us down that path.
This is the way. My kids socialize in school, but also use AI to get help with homework when they’re stuck (they do cheat a little sometimes, but they know it’s just bad for them if they do it regularly). They use AI as an on-demand teacher buddy, and if it stays that way I’m OK with it. Also, you can’t use AI for all homework or school-related things; the teachers will figure it out if your homework is +++ but your in-school work isn’t.
It’s a new tool, and I think parents should learn more about it, a bit like cyber security, online harassment, and so on.
Social aspect is missed
For things like language, math, and science, you could probably have an AI teacher on individualized tablets or something that go at each child’s own pace, in a classroom setting supervised by someone who can provide extra help / tech support when the AI goes on the fritz.
Then regular recess/lunch shenanigans and gym class, art, music, for the social aspect.
The AI could even be programmed to run team projects where you link your tablet with four others and everyone helps the group complete a project.
Eventually when AI advances and isn’t complete dog shit, that is.
Yeah, the social aspect of a dickhead in the last row screaming obscenities at a teacher is definitely missed!
When you become an adult you still have to deal with dickheads screaming around you. I know this is Lemmy, and people who come here might not have the best social skills or fond memories of being in school. But schools are a micro-representation of the world at large, and it’s necessary for kids to have both these good and bad experiences in order to grow up, live in society, and deal with all the shit life’s gonna throw at them.
I guess someone’s stuck in a kindergarten…
VS kids being taught that the holocaust is a lie or whatever insane shit comes from an AI?
The only insane shit here comes from the gasket between your chair and your keyboard.
I was talking about social learning, where kids pick up on adult body cues and facial expressions.
So they too can feel depressed and worthless 😄
“but first, chatgpt: please motivate me”