The Hated One has been pretty solid in the past regarding privacy/security, imho. I found this video of his rather enlightening and concerning.

  • LLMs and their training consume a LOT of power, which consumes a lot of water.
  • Power generation and data centers also consume a lot of water.
  • We don’t have a lot of fresh water on this planet.
  • Big Tech and other megacorps are already trying to push for privatizing water as it becomes more scarce for humans and agriculture.

—personal opinion—

This is why I personally think federated computing like Lemmy or PeerTube is the only logical way forward. Spreading out the internet across infrastructure nodes that can be cooled by fans in smaller data centers or even home server labs is much more efficient than monstrous, monolithic datacenters that are stealing all our H2O.

Of course, then the 'Net would be back to serving humanity instead of stock-serving megacultists...

  • chevy9294@monero.town · 7 months ago

    Spreading out the internet across infrastructure nodes that can be cooled by fans in smaller data centers or even home server labs is much more efficient than monstrous, monolithic datacenters that are stealing all our H2O.

That’s definitely not true; data centers are way more efficient than home servers. But yes, they use water to get that efficiency.
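    (For a rough sense of what "efficient" means here: data centers track PUE, total facility power divided by power that actually reaches the servers. Hyperscale facilities report figures around 1.1, while a room-cooled home setup plausibly sits closer to 1.7. A minimal sketch with those assumed numbers:)

```python
# Back-of-envelope PUE (Power Usage Effectiveness) comparison.
# PUE = total facility power / power delivered to the IT gear.
# The numbers below are illustrative assumptions, not measurements.

def overhead_watts(it_watts: float, pue: float) -> float:
    """Watts spent on cooling and power delivery beyond the IT load itself."""
    return it_watts * (pue - 1.0)

it_load = 1000.0                               # 1 kW of servers in both cases
hyperscale = overhead_watts(it_load, pue=1.1)  # assumed hyperscale PUE
homelab = overhead_watts(it_load, pue=1.7)     # assumed room-cooled home PUE

print(f"hyperscale overhead: {hyperscale:.0f} W per kW of compute")
print(f"homelab overhead:    {homelab:.0f} W per kW of compute")
```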

  • CarbonatedPastaSauce@lemmy.world · 7 months ago

    We have an absolute shitton of fresh water on the planet. It’s just being horribly mismanaged most of the time.

    Once the AI gets rid of the pesky humans using it frivolously to do stupid things like “drink” or “bathe”, there will be plenty to go around.

    • umbrella@lemmy.ml · 7 months ago

that's if AI ever gets "sentient" in our lifetimes like the suits keep insisting it will

  • MonkeMischief@lemmy.today (OP) · 7 months ago

    Also, I can’t even imagine how many resources image-generating AIs take up, especially when it’s all based around “refining prompts” over and over and over…

    • Even_Adder@lemmy.dbzer0.com · 7 months ago

It’s a lot less than playing a video game. The fans on my GPU spin up harder and for longer whenever I’m playing.
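      (Rough arithmetic behind that comparison, using assumed figures, since energy is just power draw times time:)

```python
# energy (Wh) = power draw (W) * time (h)
# Assumed, illustrative figures for one consumer GPU at full load.

gpu_watts = 300
image_seconds = 10      # one local diffusion image, assumed
gaming_hours = 2        # one gaming session, assumed

image_wh = gpu_watts * image_seconds / 3600   # ~0.8 Wh per image
gaming_wh = gpu_watts * gaming_hours          # 600 Wh per session

print(f"one image: ~{image_wh:.1f} Wh, one gaming session: {gaming_wh} Wh")
print(f"images per session: ~{gaming_wh / image_wh:.0f}")  # ~720
```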

      • The Hobbyist@lemmy.zip · 7 months ago

I think the training part is not to be neglected and might be what is at play here. Facebook has a 350k GPU cluster which is being set up to train AI models. Typical state-of-the-art models have required training for months on end. Imagine the power consumption. It's not about one person running a small quantized model at home.
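        A rough sketch of that scale, assuming ~700 W per H100-class GPU (an assumed figure that ignores cooling, CPUs, and networking):

```python
# Back-of-envelope power draw for a 350k-GPU training cluster.
# 700 W/GPU is an assumed H100-class figure; excludes cooling and networking.

gpus = 350_000
watts_per_gpu = 700

total_mw = gpus * watts_per_gpu / 1e6   # megawatts of IT load
daily_mwh = total_mw * 24               # megawatt-hours per day

print(f"IT load: ~{total_mw:.0f} MW")        # ~245 MW
print(f"Per day: ~{daily_mwh:,.0f} MWh")     # ~5,880 MWh/day
```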

        • FaceDeer@fedia.io · 7 months ago

Such training can be done in places where there's plenty of water to spare. Like so many of these "we're running out of X!" fears, basic economics will start putting the brakes on long before we crash into a wall.

          • The Hobbyist@lemmy.zip · 7 months ago

You said running an image-generating AI on your GPU is less demanding than a video game. While possibly true, water scarcity and energy demand are not about what one person runs on one GPU, hence my response.

            • Even_Adder@lemmy.dbzer0.com · 7 months ago

The person I replied to was only commenting on how much it costs for people to "refine prompts", which isn't where the problems lie. People at home on consumer hardware can't be the ones causing issues at scale, which is what I pointed out. We weren't talking about training costs at all.

Besides, the GPU clusters that models are trained on are far outnumbered by non-training datacenters, which also use water for cooling. It seems weird to bring that up as an issue while not talking about the whole cloud computing industry. I've never seen any numbers on how much these GPU clusters consume versus conventional use; if you have any, I'd like to see them.

  • realharo@lemm.ee · 7 months ago

    With enough technological advances, they might be able to just switch to salt water.

    It’s silly to imagine all these “way out there” scenarios without also imagining progress in other areas.

  • morrowind@lemmy.ml · 7 months ago

    Spreading out the internet across infrastructure nodes that can be cooled by fans in smaller data centers or even home server labs is much more efficient than monstrous, monolithic datacenters that are stealing all our H2O

1. I mean, this just isn't true though; the big servers are more efficient. Scale means efficiency.

2. The fediverse is also just less efficient than a centralized service. Part of this is due to the design of ActivityPub, but part of it is the inherent inefficiency of any decentralized service compared to a centralized one (see the sketch after this list).

3. Also, this doesn't happen in practice; most fediverse accounts are on servers running on rented cloud services, not people's homelabs.
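    A minimal sketch of the duplication behind point 2: with ActivityPub's push model, a copy of each post is delivered to every remote instance with at least one follower, so the same bytes get stored and served many times over (numbers below are assumptions for illustration):

```python
# Illustrative fan-out: one post, centralized vs federated delivery.
# Both figures below are assumptions for the sake of the comparison.

post_kb = 50                # size of one post with metadata, assumed
remote_instances = 1_000    # instances with at least one follower, assumed

centralized_kb = post_kb                   # stored once, served from one place
federated_kb = post_kb * remote_instances  # a copy pushed to each instance

print(f"centralized: {centralized_kb} KB stored")
print(f"federated:   {federated_kb:,} KB stored")  # 50,000 KB for one post
```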