Does AI actually help students learn? A recent experiment in a high school provides a cautionary tale.

Researchers at the University of Pennsylvania found that Turkish high school students who had access to ChatGPT while doing practice math problems did worse on a math test compared with students who didn’t have access to ChatGPT. Those with ChatGPT solved 48 percent more of the practice problems correctly, but they ultimately scored 17 percent worse on a test of the topic that the students were learning.

A third group of students had access to a revised version of ChatGPT that functioned more like a tutor. This chatbot was programmed to provide hints without directly divulging the answer. The students who used it did spectacularly better on the practice problems, solving 127 percent more of them correctly compared with students who did their practice work without any high-tech aids. But on a test afterwards, these AI-tutored students did no better. Students who just did their practice problems the old-fashioned way — on their own — matched their test scores.
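The article doesn’t reproduce the researchers’ configuration, but a hints-only tutor of this kind is typically built by wrapping an unchanged base model in a restrictive system prompt. A minimal sketch of that idea — the prompt wording and function name below are illustrative guesses, not the study’s actual setup:

```python
# The base model is untouched; only the system prompt constrains it to
# hint-giving. (Illustrative wording -- the study's real prompt isn't public.)
TUTOR_SYSTEM_PROMPT = (
    "You are a math tutor. Guide the student with hints and leading "
    "questions, one step at a time. Never reveal the final answer, "
    "even when asked for it directly."
)

def build_tutor_request(student_message: str) -> list[dict]:
    """Assemble the message list that would be sent to a chat model."""
    return [
        {"role": "system", "content": TUTOR_SYSTEM_PROMPT},
        {"role": "user", "content": student_message},
    ]
```

Everything else about such a deployment (model choice, temperature, and so on) is orthogonal; the behavioral difference between the two experimental chatbots comes down to instructions like these.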

  • N0body@lemmy.dbzer0.com · 2 months ago · +126 / −4

    Traditional instruction gave the same result as a bleeding-edge ChatGPT tutorial bot. Imagine what would happen if a tiny fraction of the billions spent to develop this technology went into funding improved traditional instruction.

    Better-paid teachers, better resources, studies geared at optimizing traditional instruction, etc.

    “Move fast and break things” was always a stupid goal. Turbocharging it with all this money is killing the tried-and-true options that actually produce results, while straining the power grid and worsening global warming.

      • elvith@feddit.org · 2 months ago · +36 / −1

        It’s the other way round: education makes for less gullible people, and for workers who demand more rights more freely and easily, and then those people come for their yachts…

    • Petter1@lemm.ee · 2 months ago · +12

      Imagine all the money spent on war would be invested into education 🫣what a beautiful world we would live in.

    • otp@sh.itjust.works · 2 months ago · +13 / −9

      Traditional instruction gave the same result as a bleeding-edge ChatGPT tutorial bot.

      Interesting way of looking at it. I disagree with your conclusion about the study, though.

      It seems like the AI tool would be helpful for things like assignments rather than tests. I think it’s intellectually dishonest to ignore the gains in some environments because it doesn’t have gains in others.

      You’re also comparing a young technology to methods that have been refined over thousands of years. Was the first automobile entirely superior to every horse?

      I get that some people just hate AI because it’s AI. For the people interested in nuance, I think this study is interesting. I think other studies will seek to build on it.

    • LifeInMultipleChoice@lemmy.world · 2 months ago · +24 / −20

      “tests designed for use by people who don’t use chatgpt is performed by people who don’t”

      This is the same fn calculator argument we had 20 years ago.

      A tool is a tool. It will come in handy, and if it will be there in real life, then it’s a dumb test.

      • conciselyverbose@sh.itjust.works · 2 months ago · +32 / −3

        The point of learning isn’t just access to that information later. That basic understanding gets built on all the way up through the end of your education, and is the base to all sorts of real world application.

        There’s no overlap at all between people who can’t pass a test without an LLM and people who understand the material.

        • conciselyverbose@sh.itjust.works · 2 months ago · +1

          Also actual mathematicians are pretty much universally capable of doing many calculations to reasonable precision in their head, because internalizing the relationships between numbers and various mathematical constructs is necessary to be able to reason about them and use them in more than trivial ways.

          Tests for recall aren’t because the specific piece of information is the point. They’re because being able to retrieve the information is essential to integrate it into scenarios where you can utilize it, just like being able to do math without a calculator is needed to actually apply math in ways that aren’t prescribed for you.

      • bluewing@lemm.ee · 2 months ago · +2

        As someone who has taught math to students in a classroom, unless you have at least a basic understanding of HOW the numbers are supposed to work, the tool - a calculator - is useless. While getting the correct answer is important, I was more concerned with HOW you got that answer. Because if you know how you got that answer, then your ability to get the correct answer skyrockets.

        Because doing it your way leads to blindly relying on AI and believing those answers are always right. Because it’s just a tool, right?

  • Insig@lemmy.world · 2 months ago · +26

    At work we gave a 16/17-year-old work experience over the summer. He was using ChatGPT and not understanding the code it was outputting.

    In his last week he asked why he was writing a print statement, something like:

    print(f"message {thing}")
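For anyone puzzled the same way the intern was: that line is a Python f-string, which evaluates the expression inside the braces and splices the result into the text at run time. A tiny self-contained illustration (the variable name is made up):

```python
# f-strings (Python 3.6+) evaluate the expression inside {...}
# and insert the result into the surrounding string.
thing = "server started"
line = f"message {thing}"
print(line)  # message server started
```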

  • glowie@h4x0r.host · 2 months ago · +33 / −9

    Of all the students in the world, they picked ones from a “Turkish high school”. Is there any clear indication why there, of all places, when the study was conducted by a US university?

    • catloaf@lemm.ee · 2 months ago · +18

      I’m guessing there was a previous connection with some of the study authors.

      I skimmed the paper, and I didn’t see it mention language. I’d be more interested to know if they were using ChatGPT in English or Turkish, and how that would affect performance, since I assume the model is trained on significantly more English language data than Turkish.

      • Em Adespoton@lemmy.ca · 2 months ago · +4 / −2

        GPTs are designed with translation in mind, so I could see it being extremely useful in providing me instruction on a topic in a non-English native language.

        But they haven’t been around long enough for the novelty factor to wear off.

        It’s like computers in the 1980s… people played Oregon Trail on them, but they didn’t really help much with general education.

        Fast forward to today, and computers are the core of many facets of education, allowing students to learn knowledge and skills that they’d otherwise have no access to.

        GPTs will eventually go the same way.

    • Phoenix3875@lemmy.world · 2 months ago · +7

      The paper only says it’s a collaboration. It’s pretty large scale, so the opportunity might be rare. There’s a chance that (the same or other) researchers will follow up and experiment in more schools.

    • Lemminary@lemmy.world · 2 months ago · +12 / −9

      If I had access to ChatGPT during my college years and it helped me parse things I didn’t fully understand from the texts or provided much-needed context for what I was studying, I would’ve done much better having integrated my learning. That’s one of the areas where ChatGPT shines. I only got there on my way out. But math problems? Ugh.

      • ForgotAboutDre@lemmy.world · 2 months ago · +22

        When you automate these processes, you lose the experience. I wouldn’t be surprised if you couldn’t parse information as well as you can now if you’d had access to ChatGPT back then.

        It’s hard to get better at solving your problems if something else does it for you.

        Also, the reliability of these systems is poor, and they’re specifically trained to produce output that appears correct, not output that actually is correct.

        • Lemminary@lemmy.world · 2 months ago · +7 / −2

          I quickly learned how ChatGPT works so I’m aware of its limitations. And since I’m talking about university students, I’m fairly sure those smart cookies can figure it out themselves. The thing is, studying the biological sciences requires you to understand other subjects you haven’t learned yet, and having someone explain how that fits into the overall picture puts you way ahead of the curve because you start integrating knowledge earlier. You only get that from retrospection once you’ve passed all your classes and have a panoramic view of the field, which, in my opinion, is too late for excellent grades. This is why I think having parents with degrees in a related field or personal tutors gives an incredibly unfair advantage to anyone in college. That’s what ChatGPT gives you for free. Your parents and the tutors will also make mistakes, but that doesn’t take away the value which is also true for the AIs.

          And regarding the output that appears correct, some tools help mitigate that. I’ve used the Consensus plugin to some degree and think it’s fairly accurate for resolving some questions based on research. What’s more useful is that it’ll cite the paper directly so you can learn more instead of relying on ChatGPT alone. It’s a great tool I wish I had that would’ve saved me so much time to focus on other more important things instead of going down the list of fruitless search results with a million tabs open.

          One thing I will agree with you on is learning how to use Google Scholar and Google Books, and pirating books through the library, to find the exact information as it appears in the textbooks to answer homework questions, which I did meticulously, down to the paragraph. But only I did that. Everybody else copied their homework, so at least in my university it was a personal choice how far you wanted to take those skills. So now, instead of your peers giving you the answers, it’s ChatGPT. My question is: are we really losing anything?

          Overall I think other skills need honing today, particularly verifying information, together with critical thinking which is always relevant. And the former is only hard because it’s tedious work, honestly.

        • Veddit@lemmy.world · 2 months ago · +5

          I read that comment the same way, and I use it similarly: as more of a super-dictionary/encyclopedia, the same way I’d watch supplementary YouTube videos to enhance my understanding, rather than automating the understanding process.

          More like having a tutor you can ask all the too-stupid and too-hard questions, who never gets tired or fed up with you.

          • Petter1@lemm.ee · 2 months ago · +2

            Exactly this! That is why I always have at least one AI chatbot instance running when I am coding, or rather, analysing code for debugging.

            It makes it possible to debug kernel stuff without much prior knowledge, if you are proficient at phrasing your questions. Well, it worked for me, at least.

  • Praise Idleness@sh.itjust.works · 2 months ago · +29 / −7

    It’s not about using it. It’s about using it in a helpful and constructive manner. Obviously no one’s going to learn anything if all they do is blatantly ask for the answers and the written work.

    LLMs have been a wonderful tool for furthering my understanding of various topics.

    • Petter1@lemm.ee · 2 months ago · +18 / −7

      This! Don’t blame the tech; blame the grown-ups who weren’t able to teach the young how to use it!

    • Ledivin@lemmy.world · 2 months ago · +9 / −4

      Obviously no one’s going to learn anything if all they do is blatantly asking for an answer and writings.

      You should try reading the article instead of just the headline.

      • Praise Idleness@sh.itjust.works · 2 months ago · +2 / −2

        The researchers believe the problem is that students are using the chatbot as a “crutch.” When they analyzed the questions that students typed into ChatGPT, students often simply asked for the answer. Students were not building the skills that come from solving the problems themselves.

        I did? What are you trying to say?

    • trollbearpig@lemmy.world · edited · 2 months ago · +3 / −1

      If you actually read the article, you will see that they tested both setups: letting the students ask the LLM for answers, and limiting the students to asking the LLM only for guidance. In the first case the students did significantly worse than their peers who didn’t use the LLM. In the second they performed the same as students who didn’t use it. So, if the results of this study can be replicated, it shows that LLMs are at best useless for learning and most likely harmful. Most students are not going to limit their use of LLMs to guidance.

      You AI shills are just ridiculous, you defend this technology without even bothering to read the points under discussion. Or maybe you read an LLM generated summary? Hahahaha. In any case, do better man.

    • ColeSloth@discuss.tchncs.de · 2 months ago · +1

      If you’d read the article, you would have learned that there were three groups: one with no GPT, one with plain GPT access, and one with a GPT that would only give hints and clues toward the answer but wouldn’t directly give it.

      That third group tied the first group in test scores. The issue was that ChatGPT is dumb and often gave incorrect instructions on how to solve a problem, or came up with the wrong answer. I’m sure if GPT were capable of not giving the answer away and actually giving correct instructions for each problem, that group would have easily beaten the no-GPT group.

  • maegul (he/they)@lemmy.ml · 2 months ago · +25 / −4

    Yea, this highlights a fundamental tension I think: sometimes, perhaps oftentimes, the point of doing something is the doing itself, not the result.

    Tech is hyper-focused on removing the “doing” and reproducing the result. Now that it’s trying to put itself into the “thinking” part of human work, this tension is making itself unavoidable.

    I think we can all take it as a given that we don’t want to hand total control to machines, simply because of accountability issues. Which means we want a human “in the loop” to ensure things stay sensible. But the ability of that human to keep things sensible requires skills, experience and insight. And all of the focus our education system now has on grades and certificates has led us astray into thinking that practice and experience don’t mean that much. In a way the labour market and employers are relevant here in their insistence on experience (to the point of absurdity sometimes).

    Bottom line is that we humans are doing machines, and we learn through practice and experience, in ways I suspect much closer to building intuitions. Being stuck on a problem, being confused and getting things wrong are all part of this experience. Making it easier to get the right answer is not making education better. LLMs likely have no good role to play in education and I wouldn’t be surprised if banning them outright in what may become a harshly fought battle isn’t too far away.

    All that being said, I also think LLMs raise questions about what it is we’re doing with our education and tests and whether the simple response to their existence is to conclude that anything an LLM can easily do well isn’t worth assessing. Of course, as I’ve said above, that’s likely manifestly rubbish … building up an intelligent and capable human likely requires getting them to do things an LLM could easily do. But the question still stands I think about whether we need to also find a way to focus more on the less mechanical parts of human intelligence and education.

    • Passerby6497@lemmy.world · 2 months ago · +4 / −1

      LLMs likely have no good role to play in education and I wouldn’t be surprised if banning them outright in what may become a harshly fought battle isn’t too far away.

      While I agree that LLMs have no place in education, you’re not going to be able to do more than just ban them in class unfortunately. Students will be able to use them at home, and the alleged “LLM detection” applications are no better than throwing a dart at the wall. You may catch a couple students, but you’re going to falsely accuse many more. The only surefire way to catch them is them being stupid and not bothering to edit what they turn in.

  • Vanth@reddthat.com · 2 months ago · +30 / −9

    I’m not entirely sold on the argument I lay out here, but this is where I would start were I to defend using ChatGPT in school as they set it up in their experiment.

    It’s a tool. Just like a calculator. If a kid learns and does all their homework with a calculator, then suddenly it’s taken away for a test, of course they will do poorly. Contrary to what we were warned about as kids though, each of us does carry a calculator around in our pocket at nearly all times.

    We’re not far off from a point where having an AI assistant with us 24/7 is feasible. Why not teach kids to use the tools they will have in their pockets for the rest of their lives?

    • Schal330@lemmy.world · 2 months ago · +20 / −3

      As adults we are dubious of the results that AI gives us. We take the answers with a hefty pinch of salt, and I feel like over the years we have built up a skill set for using search engines and sifting through the results. Kids haven’t got years of that experience, so they may take what is said as true and not question the results.

      As you say, the kids should be taught to use the tool properly, and verify the answers. AI is going to be forced onto us whether we like it or not, people should be empowered to use it and not accept what it puts out as gospel.

      • Petter1@lemm.ee · 2 months ago · +7

        This is true for the whole internet, not only AI chatbots. Kids need to be taught that there is BS out there. In fact, kids had to learn that even pre-internet. Every human has to learn that you can’t blindly trust anything, and that you have to think critically. This is nothing new; AI chatbots just show how flawed human education is these days.

    • filister@lemmy.world · 2 months ago · +16

      I think you also need to teach your kid not to trust this tool unconditionally and to question its quality, as well as how to write better prompts. It’s the same as with Google: if you put in shitty queries, you will get subpar results.

      And believe me, I have seen plenty of tech people writing the lamest prompts.

      • otp@sh.itjust.works · 2 months ago · +3 / −4

        I remember teachers telling us not to trust the calculators. What if we hit the wrong key? Lol

        Some things never change.

        • Deceptichum@quokk.au · 2 months ago · +11 / −3

          I remember the teachers telling us not to trust Wikipedia, but they had the utmost faith in shitty old books that were probably never verified by another human before being published.

            • Deceptichum@quokk.au · 2 months ago · +8 / −3

              Eh, I find they’re usually from a more direct source. Schoolbooks are just information sourced from who knows where.

              • qarbone@lemmy.world · 2 months ago · +3

                I don’t know about your textbooks and what ages you’re referring to but I remember many of my technical textbooks had citations in the back.

                • bluewing@lemm.ee · 2 months ago · +1

                  Yep, students these days have no idea how useful the back of their books can be: the index, and the citations after it.

                  Even after repeatedly pointing it out, they still don’t make use of it, despite the index being nearly a cheat code in itself.

  • Saki@lemmy.blahaj.zone · 2 months ago · +18 / −5

    I mean, is it really that surprising? You’re not analyzing anything, an algorithm just spits text at you. You’re not gonna learn much from that.

    • daniskarma@lemmy.dbzer0.com · 2 months ago · +9 / −2

      In the study they said they used a modified version that acted as a tutor, that refused to give direct answers and gave hints to the solution instead.

      • Lobreeze@lemmy.world · 2 months ago · +3 / −3

        That’s like cheating, with extra steps.

        You ain’t getting hints on your in-class exam.

  • 2ugly2live@lemmy.world · 2 months ago · +13

    I don’t even know if this is ChatGPT’s fault. The outcome would be the same if someone just handed them the answers to a study packet. Yes, they’ll have the answers because someone (or something) gave the answers to them, but they won’t know how to get those answers on their own unless someone teaches them. Surprise: for kids to learn, they need to be taught. Shocker.

  • Ilandar@aussie.zone · 2 months ago · +9 / −4

    What do the results of the third group suggest? AI doesn’t appear to have hindered their ability to manage by themselves under test conditions, but it did help them significantly with their practice results. You could argue the positive reinforcement an AI tutor can provide during test preparations might help some students with their confidence and pre-exam nerves, which will allow them to perform closer to their best under exam conditions.

  • vin@lemmynsfw.com · 2 months ago · +3 / −1

    Did those using the tutor AI spend less time on learning? That would have been worth measuring.

  • MystikIncarnate@lemmy.ca · 2 months ago · +2

    Something I’ve noticed with institutional education is that they’re not looking for the factually correct answer; they’re looking for the answer that matches whatever you were told in class. Those two things shouldn’t differ, but in my experience they don’t always match.

    I have no idea if this is a factor here, but it’s something I’ve noticed. I have actually answered questions with a factually wrong answer, because that’s what was taught, just to get the marks.

  • ???@lemmy.world · 2 months ago · +2 / −2

    Yeah, because it’s just like having their dumb parents do their homework for them.

    • Rivalarrival@lemmy.today · 2 months ago · +1

      Paradoxically, they would probably do better if the AI hallucinated more. When you realize your tutor is capable of making mistakes, you can’t just blindly follow their process; you have to analyze and verify their work, which forces a more complete understanding of the concept, and some insight into what errors can occur and how they might affect outcomes.