OpenAI released draft guidelines for how it wants the AI technology inside ChatGPT to behave—and revealed that it’s exploring how to ‘responsibly’ generate explicit content.
I agree with this principle; however, the reality is that, given the massive computational power needed to run many (but not all) models, control of AI is in the hands of the mega corps.
Just look at what the FAANGs are doing right now, and compare it to what the mill owners were doing in the 1800s.
The best use of LLMs, right now, is boilerplating initial drafts of documents. Those drafts then need to be reviewed and tweaked by skilled workers ahead of publication. This can be a significant efficiency saving, but it does not remove the need for the skilled worker if you want to maintain quality.
But what we are already seeing is CEOs, etc., deciding to take “a decision based on risk” to gut entire departments and replace them with a chat bot, which then invents (“hallucinates”) the details of a particular company policy. That leads to a lower-quality service but significantly increased profits, because you’re no longer paying to ensure quality.
The issue is not the method of production; it is who controls it.
I can see where you’re coming from - however, I disagree with the premise that “the control of AI is in the hands of the mega corps”. AI research has never been done solely by huge corps; it is driven by researchers who publish their findings. There are several options out there right now for consumer-grade AI where you download models yourself and run them locally (Jan, PyTorch, TensorFlow, Horovod, Ray, H2O.ai, stable-horde, etc.). Many of these come from FAANG companies, but they are still, nevertheless, open source and usable by anyone - I’ve used several to make my own AI models. A minimal sketch of what “run them locally” can look like is below.
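For the curious, here is roughly what local operation looks like using Hugging Face’s transformers library (which sits on top of PyTorch - it isn’t one of the projects named above, so treat it as an illustrative assumption; the “gpt2” model and the prompt are placeholders too):

```python
# Minimal local text-generation sketch using Hugging Face transformers
# (runs on PyTorch). The weights are downloaded once, then everything
# executes on your own machine - no cloud API involved.
from transformers import pipeline

# "gpt2" is just an illustrative choice; any locally runnable
# causal LM from the Hub can be substituted.
generator = pipeline("text-generation", model="gpt2")

result = generator(
    "Draft a short summary of our leave policy:",
    max_new_tokens=60,
    num_return_sequences=1,
)
print(result[0]["generated_text"])
```

Nothing here talks to a mega corp’s servers at inference time, which is rather the point.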
Consumers and researchers alike have an interest in making this tech available to all, not just businesses. The vast majority of the difficulty in training AI is obtaining data sets large enough, with enough orthogonal ‘features’, to ensure the resulting model is effective. In practice, this means that tasks like image generation, editing, and recognition (huge for the medical sector, including finding cancers and other problems), document creation (to your point), speech recognition and translation (huge for the differently-abled community and for globe-trotters alike), and education (drawing on huge public research data sets, public domain books and novels, etc.) are still definitely feasible for consumer-grade usage and operation. There are also some really neat options like federated TensorFlow and distributed TensorFlow, which allow for, perhaps obviously, distributed computation - opening the door to stronger models run by anyone willing to serve them (rough sketch below).
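To make the distributed bit concrete, here is a toy sketch of TensorFlow’s built-in data-parallel strategy. The two-layer model and random data are placeholders I’ve invented; real multi-machine training would swap in MultiWorkerMirroredStrategy with a TF_CONFIG cluster spec, and federated learning proper lives in the separate tensorflow_federated package:

```python
# Toy data-parallel training sketch with TensorFlow distribution
# strategies. MirroredStrategy splits each batch across the local
# GPUs it finds; MultiWorkerMirroredStrategy extends the same code
# across machines once a cluster spec is configured.
import numpy as np
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()

# The model and its variables must be created inside the strategy
# scope so replicas stay in sync.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(10,)),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")

# Placeholder data standing in for a real (public!) data set.
x = np.random.rand(256, 10).astype("float32")
y = np.random.rand(256, 1).astype("float32")

model.fit(x, y, batch_size=32, epochs=1)
```

The appeal is that the training loop itself doesn’t change as you add hardware - the strategy object owns the parallelism.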
I just do not see the point in admitting total defeat/failure for AI because some of the asshole greedy little pigs in the world are also monetizing and misusing the technology. The cat is out of the bag, in my opinion; the best (not only) option forward is to bolster consumer-grade implementations: encouraging things like self-hosting and local operation/execution, and creating minimally viable guidelines to protect consumers from each other. Seatbelts. Brakes. Legal recourse for those who harm others with said technology.