Chairman Meow

  • 2 Posts
  • 504 Comments
Joined 1 year ago
Cake day: August 16th, 2023


  • Mostly the whole exploitation part, which often goes too far and can be very humiliating or outright dangerous for participants. Some scrapped videos supposedly amounted to torture. Then there are the rigged giveaways, and the fact that the dude just unnerves a lot of people because he, as this post demonstrates, doesn’t smile with his eyes.

    There were also allegations that one of his colleagues was a sex offender, but as far as I know everyone, including the purported victim, ended up denying it, so I don’t know how much people still care about that.


  • I won’t pretend I understand all the math and the notation they use, but the abstract/conclusions seem clear enough.

    I’d argue what they’re presenting here isn’t the LLM actually “reasoning”. I don’t think the paper really claims that the AI does either.

    The CoT process they describe here I think is somewhat analogous to a very advanced version of prompting an LLM something like “Answer like a subject matter expert” and finding it improves the quality of the answer.

    They basically help break the problem into smaller steps and get the LLM to answer smaller questions based on those smaller steps. This likely also helps the AI because it was trained on these explained steps, or on smaller problems that it might string together.

    I think it mostly helps to transform the prompt into something that is easier for an LLM to respond accurately to. And because each substep is less complex, the LLM has an easier time as well. But the mechanism to break down a problem is quite rigid and not something trainable.

    It’s super cool tech, don’t get me wrong. But I wouldn’t say the AI is really “reasoning” here. It’s being prompted in a really clever way to increase the answer quality.
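To make the point concrete, the rigid decomposition described above could be sketched roughly like this. Everything here is illustrative: `ask_llm` is a hypothetical stub standing in for a real model call (not any actual API), and the decomposition is hand-written, which is exactly the rigid, non-trainable part.

```python
# Illustrative sketch of a rigid (non-learned) chain-of-thought pipeline.
# `ask_llm` is a hypothetical placeholder for a real LLM call; it is
# stubbed out here so the example runs on its own.

def ask_llm(prompt: str) -> str:
    """Stand-in for an LLM API call; returns a canned answer."""
    return f"[answer to: {prompt}]"

def decompose(question: str) -> list[str]:
    # Hand-written decomposition into substeps -- rigid, not trainable.
    return [
        f"What facts are needed to answer: {question}?",
        f"Using those facts, reason step by step about: {question}",
        f"State the final answer to: {question}",
    ]

def chain_of_thought(question: str) -> str:
    context = ""
    answer = ""
    for step in decompose(question):
        # Each substep prompt carries the answers so far, so the model
        # only has to handle one small, simpler piece at a time.
        answer = ask_llm(context + step)
        context += f"{step}\n{answer}\n"
    return answer
```

Each call sees the accumulated context, so every individual prompt is smaller and easier for the model to answer accurately than the original question on its own.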


    > China/russia/middle east not allowing it, is not the same as not being available. Did you even check the coverage map before replying.

    So can you use it or is it not available then? And yes, I checked that map, where else do you think I got the list from??

    > Astronomers complain about light bleed from ground cities as well. No one was telling them to shut down the cities.

    People claim we should turn down city lights all the time! Under what rock have you been living? But for city light bleed, astronomers have an alternative solution: simply place the telescope somewhere away from cities. And yes, whenever a city grows toward one of those telescopes, astronomers do kick up a fuss about it.

    If you fill LEO with thousands of satellites, there’s nothing astronomers can do about that.

    > Lol no just no… I dont know where you live but the majority of people in rural areas are not served, otherwise starlink would have never taken off and been sustainable.

    I don’t know where you live, Mars perhaps?

    https://nl.m.wikipedia.org/wiki/Bestand:InternetPenetrationWorldMap.svg

    Clearly shows most of the Earth has internet access. Or do you think the US has no rural areas? They’re still above 90% somehow. Oh wait, I know, they must be using those mythical internet-via-satellite services that existed well before Starlink did! I wonder where you’d find a mythical creature like the Viasat-1, for example.

    Starlink took off because they promise higher speeds at lower cost than some ISPs and most other satellite providers, not because they’re your only option. Starlink has 3 million customers, which makes it the size of a small ISP.

    > Again this myth you keep spouting that the majority of the world has access is bullshit

    Except for the fact that the data backs me up.

    > planes exist but you need to walk because you live to far from the airport is some classist bullshit.

    Continuing your analogy, you propose demolishing the local university because people are entitled to fly to Ibiza, or their local supermarket. Or something, it’s not like it made much sense anyway.

    You still completely failed to address the main point: that universal high-speed internet access is not critical for most of the world, certainly not for areas that have always managed perfectly fine without it, and that filling up LEO is a disaster for astronomers that they have no workaround for. If you’re not going to actually argue that point, I think we’re done here.


  • Starlink doesn’t cover the globe; it’s available in the Americas, Europe and Oceania. It’s not available in most of Africa, the Middle East, India, China, Russia, or Indochina. I.e., the majority of the world cannot access Starlink.

    I don’t give a shit that Starlink is owned by Musk. Starlink as a company seems fine (it’s not X or anything), but I strongly dislike that their product messes with astronomy in such a major way that astronomers complain about it every chance they get.

    > You know that some of us are 10miles from town and considered rural? And the big Telecoms refuse to run broadband for us?

    Sounds like your fight is with “big telecom” and with your local government for not putting together a good enough bid to run fiber. This isn’t an issue for large portions of the world, including rural areas, where they’ve figured out how to get fiber laid.

    > Internet access for a long time has been pushed as a priority and should be treated as a utility and that everyone should have access to it.

    Access is not the same as high-speed access. Almost all of the world has some level of access, even in rural areas, through satellites that are not in LEO. Enough to (slowly) browse, not enough to stream in HD. I don’t believe sacrificing considerable astronomical discoveries and progress is remotely worth it when feasible alternatives are available and have already been used in large areas of the world.


  • It’s not a direct response.

    First off, the video is pure speculation, the author doesn’t really know how it works either (or at least doesn’t seem to claim to know). They have a reasonable grasp of how it works, but what they believe it implies may not be correct.

    Second, the way O1 seems to work is that it generates a ton of less-than-ideal answers and picks the best one. It might then rerun that step until it reaches a sufficient answer (as the video says).

    The problem with this is that you still have an LLM evaluating each answer based on essentially word prediction, and the entire “reasoning” process happens outside any LLM; its thinking process is not learned, but “hardcoded”.
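That generate-and-evaluate loop might look something like the following sketch. All names are hypothetical: `generate_candidates` stands in for sampling several answers from the model, and `score` stands in for the LLM-based evaluator.

```python
import random

def generate_candidates(prompt: str, n: int = 5) -> list[str]:
    """Stand-in for sampling n answers from an LLM at high temperature."""
    return [f"candidate {i} for: {prompt}" for i in range(n)]

def score(prompt: str, answer: str) -> float:
    """Stand-in for the evaluator -- reportedly itself an LLM in
    O1-style systems, still judging via word prediction."""
    return random.random()

def best_of_n(prompt: str, n: int = 5, rounds: int = 3,
              good_enough: float = 0.9) -> str:
    """Generate many candidates, keep the best, rerun until satisfied."""
    best, best_score = "", -1.0
    for _ in range(rounds):
        for cand in generate_candidates(prompt, n):
            s = score(prompt, cand)
            if s > best_score:
                best, best_score = cand, s
        if best_score >= good_enough:
            break  # a sufficient answer was found; stop rerunning
    return best
```

Note that the control flow (sample, score, rerun) lives entirely in ordinary code outside the model, which is what I mean by the reasoning being hardcoded rather than learned.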

    We know that chaining LLMs like this can give better answers. But I’d argue this isn’t reasoning. Reasoning requires a direct understanding of the domain, which ChatGPT simply doesn’t have. This becomes evident when you ask it questions using terminology that appears in multiple domains: it has a tendency to mix them up, which you wouldn’t do if you truly understood what the words mean. It is possible to get a semblance of understanding of a domain into an LLM, but not in a generalised way.

    It’s also evident from the fact that these AIs are apparently unable to come up with “new knowledge”. They’re not able to infer new patterns or theories; they can only “use” what is already given to them. An AI like this would never be able to come up with E=mc² if it hadn’t been fed information about that formula before. Its LLM evaluator would dismiss any “ideas” that come close to it, because it’s never seen them before; ergo, they’re unlikely to be true/correct.

    Don’t get me wrong, an AI like this may still be quite useful w.r.t. information it has been fed. I see the utility in this, and the tech is cool. But it’s still a very, very far cry from AGI.