• PeleSpirit@lemmy.world
    ↑49 ↓1 · 1 year ago

    Wow, they really are dumb. Do they really think that it’s just about the mods banning trolls? They foster community, sometimes they’re the only ones posting content, and they try to negotiate with the users to calm them down. This would probably work in the large communities if you trusted that Reddit, or the company they hired, wouldn’t insert their own biases. I don’t trust them.

    • seaQueue@lemmy.world
      ↑18 · 1 year ago

      Reddit isn’t one to let understanding their own site get in the way of money making opportunities.

      • didnt_readit@lemmy.world
        ↑4 · 1 year ago

        Based on…basically all of their actions ever…I don’t think they actually understand their own site at all.

    • Lvxferre@lemmy.ml
      ↑3 · 1 year ago

      They foster community, sometimes they’re the only ones posting content, and they try and negotiate with the users to calm them down.

      This. So much this. I’d say that if more than 20% of your moderation actions are removing content and/or banning users, you’re either power-tripping or fucking lazy. Most of the time you should be doing the things that you mentioned: talking with the users, posting and commenting, and so on.

  • brax@sh.itjust.works
    ↑23 ↓1 · 1 year ago

    I can’t wait for it to ban the admins and nuke the shit subreddits that should have been shut down ages ago.

  • Cephirux@lemmings.world
    ↑23 ↓1 · 1 year ago

    As long as the AI is capable enough, I don’t see what’s wrong with it, and I understand if Reddit decides to utilize AI for financial reasons. I don’t know how capable the AI is, and it is certainly not perfect, but AI is a technology and it will improve over time. If a job can be automated, I don’t see why it should not be.

    • HardlightCereal@lemmy.world
      ↑29 ↓19 · 1 year ago

      AI is often only trained on neurotypical cishet white men. What happens when a community of colour is full of people who don’t have the same conversational norms as white people, and the bot thinks they’re harassing each other? What happens when a neurodivergent community talk to each other in a neurodivergent way? Autistic people often get called “robotic”, will the AI feel the same way and ban them as bots? What happens when an AI is used to moderate a trans community, and flags everything as NSFW because its training data says “transgender” is a porn category?

      • Cephirux@lemmings.world
        ↑14 ↓7 · 1 year ago

        I think it’s a bold assumption that AI is often trained only on neurotypical cishet white men, though it is a possibility. I do not fully understand how AI works or how companies train their AI, so I cannot comment any further. I admit AI has its downsides, but AI also has its upsides, same as humans. Reddit is free to utilize AI to moderate subreddits, and users are free to complain or leave Reddit if they deem the AI more harmful than helpful.

            • HardlightCereal@lemmy.world
              ↑6 ↓19 · edited · 1 year ago

              If your personality makes you sound like a bot, then you’re exactly the kind of person I’m talking about when I say that AI is going to ban real people for being spambots. I think you sound like a bot, and so will AI. I am capable of critical thinking and looking past first impressions, an AI is not.

              I think maybe you’re under the impression that computers run on perfect logic. Machine learning systems actually run on pure instinct. You are more capable of logical reasoning than an ML program is. You’re less capable than a traditional algorithmic program, but you’re more capable than an AI.

              • Cephirux@lemmings.world
                ↑2 ↓3 · 1 year ago

                I admit I might be biased towards AI, because I believe AI isn’t biased: it doesn’t have any desire to sleep, breathe, eat, etc. Everyone is capable of critical thinking; the question is whether it’s good or not. And since AI is trained by humans, and humans have critical thinking, I don’t see why AI cannot develop it, although it may not be as good as some people’s.

                • 9point6@lemmy.world
                  ↑11 · edited · 1 year ago

                  All AI has to be biased: that bias is the training data, and (inherently biased) humans select the training set. Funnily enough, the weights on each node of a neural net are even sometimes called biases!

                  If an AI weren’t biased at all, it would simply produce unintelligible garbage.
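
                  The pun is literal. A minimal, purely illustrative sketch (names are mine, not from any real library): each node computes a weighted sum of its inputs plus a learned offset that is actually called the bias term.

```python
# Illustrative single artificial neuron: output = step(w · x + b).
# The learned offset b is literally called the "bias" term.
def neuron(inputs, weights, bias):
    # weighted sum of the inputs, shifted by the bias
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    # step activation: the neuron "fires" if the total crosses zero
    return 1 if total > 0 else 0
```

                  Training adjusts both the weights and these bias offsets, so "bias" really is baked into every node.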

                • jungle@lemmy.world
                  ↑7 ↓1 · edited · 1 year ago

                  That’s not how AI works. It’s exactly as biased as the humans who produced the content on which it was trained.

                  That said, I also don’t believe these models have been trained exclusively on white straight men’s conversations, that would take some effort to achieve.

                  More likely, it’s been trained on internet forums, so similar to what it’s being asked to moderate. And as long as there’s a human at the other end of an appeal, it should be fine.

                • HardlightCereal@lemmy.world
                  ↑8 ↓2 · 1 year ago

                  I’m a computer scientist, and I will tell you right now that AI is biased. Here’s how you train a neural network AI: you arrange a whole lot of neurons, you reinforce the connections between the neurons when it succeeds, and you weaken the connections when it fails. That’s the same way your brain works. When you eat food or have sex or do something else beneficial to survival, your neural connections are strengthened. An ANN AI is driven by its training directive just like you’re driven to eat or have sex. It develops the same biases.
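
                  The "reinforce on success, weaken on failure" loop above can be sketched as a perceptron-style update. This is a deliberate simplification of how real deep nets train (they use gradient descent), and all names here are illustrative:

```python
# One training step: nudge each weight toward the desired output.
def train_step(weights, bias, inputs, target, lr=0.1):
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    prediction = 1 if total > 0 else 0
    error = target - prediction  # 0 on success, +/-1 on failure
    # strengthen or weaken each connection in proportion to its input
    new_weights = [w + lr * error * x for w, x in zip(weights, inputs)]
    new_bias = bias + lr * error
    return new_weights, new_bias
```

                  On a correct prediction nothing changes; on a miss, every connection that contributed gets pushed toward the answer the training directive wanted. That directive is the only "drive" the system has.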

                  And since AI is trained by humans and humans have critical thinking, I don’t see why AI cannot develop one

                  This is nonsense. Humans invented the horse drawn wagon. Is a wagon ever going to develop critical thinking? No. AI isn’t a child with boundless potential, it’s a tool, just like a wagon. If humans want AI to have critical thinking, they’re going to have to build it. And no human has ever succeeded at that yet. The AI that Reddit is using does not have it. And since the AI is a profitable tool in its current state, it will probably not be improved to the level of a human.

                • AA5B@lemmy.world
                  ↑2 · edited · 1 year ago

                  All AI does is look for patterns to complete. You train it on some set of data such as Reddit, which can be biased, and set some sort of feedback for whether it makes the right choice, which can be biased, and find out what patterns it thinks it sees, which may be biased, to apply to new situations

          • Chaos@lemmy.world
            ↑2 ↓3 · edited · 1 year ago

            Just checked this with an AI detector and it said human. Bot 1, human 0. This sentence kinda undermines your point about keeping humans only.

      • Lvxferre@lemmy.ml
        ↑1 ↓1 · 1 year ago

        AI is often only trained on neurotypical cishet white men.

        Can you back up this claim? Unless you’re just being an assumer, or you expect people to be suckers/gullible/“chrust” you.

        What happens when a community of colour is full of people who don’t have the same conversational norms as white people

        In this statement alone, there are not one but two instances of a racist discourse:

        1. Conflating culture (conversational norms) with race.
        2. Singling out “white people”, but lumping together the others under the same label (“people of colour”).

        You are being racist. What you’re saying there boils down to “those brown people act in weird ways because they’re brown”. Don’t.

        What happens when a neurodivergent community talk to each other in a neurodivergent way? Autistic people often get called “robotic”, will the AI feel the same way and ban them as bots?

        The reason why autists are often called “robotic” has to do with voice prosody. It does not apply to text.

        And the very claim that you’re making - that autists would write in a way that an “AI” would confuse them with bots - sounds, frankly, dehumanising and insulting towards them. And reinforcing the stereotype that they’re robotic.

        [From another comment] Did you write your comment with chatgpt?

        Passive aggressively attacking the other poster won’t help.


        Odds are that you’re full of good intentions writing the above, but frankly? Go pave hell back in Reddit, you’re being racist and dehumanising.

    • AA5B@lemmy.world
      ↑6 · edited · 1 year ago

      The problem is the perverse incentives for “service”. Yes, ideally, things that can be automated should be. But what about when automation is insufficient, can’t satisfy the customer, or just provides worse service? Those cases will always exist, but will the companies provide an alternative?

      We’re all familiar with voice menus and chatbots to provide customer service, and there are many cases where those provide service faster and cheaper than a human could. However what we remember is how useless they were that one time, and how much effort it was to escape that hell to talk to someone who can actually help.

      If this AI is just better language recognition, or if it makes me type complete sentences, just to point me to the same useless FAQ yet again, I’ll scream

    • Lvxferre@lemmy.ml
      ↑2 · 1 year ago

      As long as the AI is capable enough

      The model-based decision making is likely not capable enough. Especially not for the way that Reddit Inc. would likely use it: leaving it in charge of removing users and content assumed to be problematic, instead of flagging them for manual review.

      I’m especially sceptical of the site’s claim that their Hive Moderation has “human-level accuracy”. Especially over time, as people are damn smart when it comes to circumventing automated moderation. Also, let us not forget that human accuracy varies quite a bit, and you definitely don’t want average accuracy, you want good accuracy.

      Regarding the talk about biases, from another comment: models are prone to amplify biases, not just reproduce them. As such, the model doesn’t even need to be trained only on a certain cohort to be biased.
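
      A toy illustration of amplification, with purely hypothetical data: a degenerate model that learns only “predict the most common label” turns a 60/40 skew in its training data into a 100/0 skew in its output.

```python
from collections import Counter

def majority_classifier(training_labels):
    # "learns" nothing but the most common label in the training data
    most_common = Counter(training_labels).most_common(1)[0][0]
    return lambda example: most_common

clf = majority_classifier(["ok"] * 60 + ["spam"] * 40)
outputs = [clf(post) for post in range(100)]
# the 60/40 training skew has been amplified to 100/0
```

      Real models are far less crude, but the same pressure exists: anything that optimises for getting the majority of cases right can drift toward over-predicting the majority pattern.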

  • orcrist@lemm.ee
    ↑22 ↓1 · 1 year ago

    Automated spam detection has been around for decades. As trolls and spammers get more sophisticated, the technology to combat them will continue to evolve. I don’t see any new situation to be surprised or concerned about. Of course any kind of content moderation system can be implemented poorly, but that’s a different claim.

  • hightrix@lemmy.world
    ↑6 · 1 year ago

    Is anyone surprised? I’d bet they are using ai powered bots to increase engagement and repost content.

    Sacrifice it all for that incompetently inept incoming IPO.

  • Grottyknight@lemmy.world
    ↑3 · 1 year ago

    My ten-year-old account was banned with no appeal for “report abuse”. I literally reported once: a post, not marked NSFW, with images of dead children. Go figure.

  • WiLiV@lemmy.world
    ↑3 · 1 year ago

    As horrible as that seems, at least the AI might be impartial and non-partisan when it comes to levying bans, unlike Reddit admins who will ban you even if you didn’t break any rules at all, as long as they disagree with your opinion.

  • ClarkDoom@lemmy.world
    ↑23 ↓49 · 1 year ago

    Reddit mods are one of the few “jobs” that I’m perfectly fine with AI replacing. There’s absolutely no way AI could do a worse job than what’s already being done.

    • Annoyed_🦀 @monyet.cc
      ↑34 ↓2 · 1 year ago

      AI: you’ve been banned for violating rule 1

      User: but I did not violate rule 1, I violated rule 34

      AI: my mistake. You’ve been banned for violating rule 34

      User: but there’s no rule 34

      AI: my mistake. You’ve been unbanned.

    • Pons_Aelius@kbin.social
      ↑25 ↓3 · 1 year ago

      There’s absolutely no way AI could do a worse job than what’s already being done.

      I see that naive techno-optimism is alive and well in this day and age.

      • ClarkDoom@lemmy.world
        ↑10 ↓20 · 1 year ago

        Reddit mods are some of the absolute worst people at their role, out of literally any role in existence. I’ll die on that hill and take all of your downvotes. They destroyed Reddit way before Spez made it official.

        • Pons_Aelius@kbin.social
          ↑11 ↓3 · edited · 1 year ago

          Did you step up to mod any communities?

          Did you create any new communities for the things you were interested in?

          Also, what data are you going to train the AI on? Because if they use the old mods as training data…

          And finally, most of the subs I visited had mods that I never had problems with. So maybe it is the subs you were visiting, or maybe, just maybe, you were the problem.

    • btaf45@lemmy.worldOP
      ↑12 ↓1 · 1 year ago

      This doesn’t replace mods. It’s just another way for reddit admins to find ways to auto perma-ban users.

      • Melkath@kbin.social
        ↑5 · 1 year ago

        My account, Melkath, got banned at least twice for horseshit reasons.

        Mostly me being outspokenly against Nazism.

        It’s still live tho. Because I mostly use kbin now.

          • Melkath@kbin.social
            ↑1 · 1 year ago

            Yup. Don’t expect a response.

            Only reason I got a response/unban was because the mass exodus was happening.

            Nothing for 3 months, then traffic dropped and a day later I was reinstated.

    • Delphia@lemmy.world
      ↑3 · 1 year ago

      The AI can be trained however they want, and has no conscience. Mods might actually reach a whistleblowing tipping point.

      • ClarkDoom@lemmy.world
        ↑4 ↓12 · 1 year ago

        Nah, just the few naive folks who think a handful of good mods makes up for the horde of insanely toxic ones. The percentage of crap Reddit mods is so high the role shouldn’t even exist. Perfect use of AI.