Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

Last week’s thread

(Semi-obligatory thanks to @dgerard for starting this)

  • gerikson@awful.systems · 14 points · 13 hours ago (edited)

    The Bookseller: Penguin Random House underscores copyright protection in AI rebuff

    Penguin Random House (PRH) has amended its copyright wording across all imprints globally, confirming it will appear “in imprint pages across our markets”. The new wording states: “No part of this book may be used or reproduced in any manner for the purpose of training artificial intelligence technologies or systems”, and will be included in all new titles and any backlist titles that are reprinted.

    Now that the content mafia has realized GenAI isn’t gonna let them get rid of all the expensive and troublesome human talent, it’s time to give Big AI a wedgie.

    • bitofhope@awful.systems · 4 points · 3 hours ago

      It’s weird how rarely I see people point this out, but in theory this kind of boilerplate should be technically meaningless. If copyright protection includes the privilege to use the work for training a machine learning algorithm, you need explicit permission anyway. OTOH, if it’s fair use or otherwise not something copyright law is concerned with, the copyright holder’s objection doesn’t matter.

      For the record, I think AI models are derivative works and thus they’re not only infringing on typical “all rights reserved” works, but also things such as Free software whose license terms require attribution if used in derivative work, and especially share-alike copyleft licensed work.

    • BlueMonday1984@awful.systems · 9 points · 12 hours ago (edited)

      Now that the content mafia has realized GenAI isn’t gonna let them get rid of all the expensive and troublesome human talent, it’s time to give Big AI a wedgie.

      Considering the massive(ly inflated) valuations running around Big AI and the massive amounts of stolen work that powers the likes of CrAIyon, ChatGPT, DALL-E and others, I suspect the content mafia is likely gonna try and squeeze every last red cent they can out of the AI industry.

    • skillissuer@discuss.tchncs.de · 8 points · 11 hours ago

      lmao incredible

      Then there’s BenevolentAI. I first wrote about them in 2018, as the company stated that it had “created a bioscience machine brain, purpose-built to discover new medicines and cures for disease.” How’s the machine brain doing these days? Well, the company’s lead program failed in the clinic last year, and in April announced major layoffs.

      just who buys this shit? this reads like refined crypto nonsense

      • Soyweiser@awful.systems · 8 points · 10 hours ago

        People who don’t know much about tech, overdose on hype, and have heard of AlphaFold. Imagine how much worse things could be now after the Nobel.

          • YourNetworkIsHaunted@awful.systems · 1 point · 10 minutes ago

            Given the past several decades of trickle-up economics, I think we haven’t seen anything close to the bottom of the well of basically-idle capital seeking unrealistic returns.

            I’d go so far as to say that the current crop of bubbles possibly represents the greatest downward wealth transfer (albeit from billionaires to millionaires rather than, y’know, working people) since the second world war, but I haven’t done anywhere near the amount of research necessary for that to be more than exciting rhetoric.

  • gerikson@awful.systems · 11 points · 13 hours ago

    Forget Gladwell

    All nonfiction writers can end up writing incorrect or controversial things, but why does every Gladwell book push half-formed and inaccurate theories? For years, my loose feeling about Gladwell was that he writes like someone who doesn’t care about being correct, which is not a way I would describe any other author I’ve encountered. There is something uniquely odd about his work.

    • antifuchs@awful.systems · 9 points · 15 hours ago

      They added sleeps to training jobs? Sounds like they deserve a raise for improving energy efficiency instead…

      • corbin@awful.systems · 8 points · 12 hours ago

        It’s almost completely ineffective, sorry. It’s certainly not as effective as exfiltrating weights via neighborly means.

        On Glaze and Nightshade, my prior rant hasn’t yet been invalidated and there’s no upcoming mathematics which tilt the scales in favor of anti-training techniques. In general, scrapers for training sets are now augmented with alignment models, which test inputs to see how well the tags line up; your example might be rejected as insufficiently normal-cat-like.

        I think that “force-feeding” is probably not the right metaphor. At scale, more effort goes into cleaning and tagging than into scraping; most of that “forced” input is destined to be discarded or retagged.

        • froztbyte@awful.systems (OP) · 7 points · 9 hours ago

          yeah this is the thing I’ve been thinking a lot about

          fucking reCaptcha is literally mass-weaponising users for data filtration, and there is no good counter besides just not using reCaptcha (which is something one can’t easily pull off without things like regulatory action, massive reputational problems that make people gtfo, etc)

          I have similar worries about cloudflare being such a massive chokepoint and using that position to enable “ai bot filter” services. feels extremely monopolistic, but ianal and I’m not entirely sure what the case grounds/structure on that would be (if any)

          the only other viable strategy at the moment is fully breaking contact with any potential bad traffic systems, and that’s extremely fucking dire because that’s yet another nail in the coffin of the increasingly less open internet

          • bitofhope@awful.systems · 2 points · 3 hours ago

            The whole Cloudflare bot detection is so weird and eerie. I’ve had issues where I can’t get past it presumably just because I’m using some in-application browser just to get a login cookie, but other times it just lets fucking curl through no questions asked.

      • Soyweiser@awful.systems · 6 points · 15 hours ago

        I saw people say they would add a 10%-opacity layer of the photo of Musk with Epstein’s accomplice (whose name I forgot for a second and am too lazy to look up). Would be nice if there was a tool to do so automatically. (Not that I post on Twitter anymore.)

        • swlabr@awful.systems · 6 points · 14 hours ago

          tbh that sounds like a pretty easy script to write! Too bad I am not near a computer rn

          • bitofhope@awful.systems · 2 points · 1 hour ago

            I got nerd sniped into trying to resize felons_musk_and_maxwell.webp to the same size as some base image before compositing it on top with a 10% dissolve in the same magick invocation but I need to sleep so I’m giving up for now.
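The resize-and-composite invocation above boils down to per-pixel linear interpolation: at a 10% dissolve, each output channel is 90% base plus 10% overlay. A minimal stdlib-only sketch of just that arithmetic (a real script would use ImageMagick or Pillow to read and write the actual image files; the pixel values below are made up for illustration):

```python
def dissolve(base_px, overlay_px, opacity=0.10):
    """Blend one RGB pixel of an overlay onto a base pixel.

    Per channel: out = (1 - opacity) * base + opacity * overlay.
    """
    return tuple(
        round((1 - opacity) * b + opacity * o)
        for b, o in zip(base_px, overlay_px)
    )

# A mid-grey overlay pixel on a white base at 10% opacity
# only darkens each channel slightly:
print(dissolve((255, 255, 255), (128, 128, 128)))  # → (242, 242, 242)
```

Applying this over every pixel (after resizing the overlay to match the base, as in the `magick` attempt above) is all the composite does.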

          • ShakingMyHead@awful.systems · 5 points · 13 hours ago

            Wouldn’t really need a script, though. Just open up photoshop or GIMP and add a layer after everything is finished.

            • Soyweiser@awful.systems · 5 points · 10 hours ago

              But that doesn’t scale properly; ideally you want some sort of browser extension that just automatically does it for you before the data gets sent to Twitter.

    • luciole (he/him)@beehaw.org · 5 points · 17 hours ago

      I thought they were gonna do that themselves by feeding on their own outputs littered all over the www. Maybe they can use some help.