What will it take until people get it through their thick skulls that ChatGPT isn’t intelligent, doesn’t learn, and is a tool that can only generate plausible gibberish?
Using the same tools to detect such gibberish will give you more gibberish.
Garbage in, garbage out has been true since the Difference Engine; it’s just that today the garbage smells like English words. Still garbage, but not knowledge, intelligence, or anything like it.
The machine learning approach used to build so-called large language models like ChatGPT is also used to create weather forecasting models that are bigger, better, and orders of magnitude faster than anything available until now.
These tools have changed lives, but I’m unconvinced this is a suitable, sustainable, or realistic way to create artificial intelligence, despite claims to the contrary.
People are so insistent that it’s AI that it all reminds me of blockchain. It’s new! It’ll change everything!
It’ll change some things. What we are seeing now is businesses forcing it into everything when really, right now, there are only a handful of things it makes sense to use it for.
It’s really great at giving you a starting point, a very rough outline of something. That is the easy part. The hard part is turning that into something new and coherent, and for that I think modern AI is nowhere close. That needs a human.
I think it’s definitely a bubble that will burst eventually.
At the same time, I don’t think there’s any way to put the toothpaste back in the tube. This technology is out there, and even once the hype has died down, we’re going to be dealing with it forever.
In the sense that AI is an extremely general term that covers many different technologies, yes. Generative AI/LLMs are not true AGI, which is what people think they are. They cannot think, they cannot learn; they can only predict the next token.
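To make “predict” concrete: the sketch below is a toy bigram model, nothing like a real transformer in architecture or scale, but it runs on the same underlying principle, which is emitting a statistically plausible next token with no understanding attached.

```python
import random
from collections import defaultdict

# Toy bigram "language model": count which word follows which,
# then generate text by repeatedly predicting a plausible next token.
# Real LLMs use neural networks over vastly more data, but the core
# operation is still next-token prediction, not reasoning.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def generate(start, length=8):
    word, out = start, [start]
    for _ in range(length):
        if word not in following:
            break
        word = random.choice(following[word])  # predict the next token
        out.append(word)
    return " ".join(out)

print(generate("the"))  # plausible-sounding output, zero understanding
```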
People think the “intelligence” in AI is like the “hover” in hovercraft, as in the word is taken literally, when it’s actually like the “hover” in hoverboard: a marketing name for something the product doesn’t actually do.
Nobody who’s not an engineer seems to give a shit about - or, indeed, even understand - the nuances of LLM technology, the technical reasons behind its limitations, or the implications thereof. Hell, I know a lot of engineers who don’t care about or understand it at a meaningful level.
I manage computing for a large university. One of my recently graduated students told me that he thought technology just worked until he worked for me and saw the problems that come up. He was already a very tech-aware person and is going for a PhD in Informatics, so if even he didn’t understand this, what can we expect from the general public?
What happens when, because it’s so quick and easy to churn out, 50% or more of the web is AI-generated slush, which is then scraped and incorporated into the next generation of LLMs, which increases that percentage and is in turn scraped, and so on, and so on?
How low can the quality of your training data drop before the results become intolerably bad? How do you raise the quality of that data without a massive investment of human labor? How much glue will we be told to put on our pizza two years from now?
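For what it’s worth, you can watch that feedback loop in miniature. The sketch below is my own cartoon of the effect (often called “model collapse” in the literature), not anyone’s actual training pipeline: each generation fits a simple distribution to a small sample drawn from the previous generation’s output, then the next generation trains on that. Diversity shrinks round after round.

```python
import numpy as np

# Cartoon of recursive training on scraped model output:
# generation N+1 is fit to a small sample of generation N.
# Because a finite sample systematically under-represents the tails,
# the distribution's spread (its "diversity") decays over generations.
rng = np.random.default_rng(42)

mean, std = 0.0, 1.0  # generation 0: the original human-written data
for gen in range(1, 51):
    samples = rng.normal(mean, std, size=10)         # scrape last gen's output
    mean, std = samples.mean(), samples.std(ddof=1)  # "retrain" on it
    if gen % 10 == 0:
        print(f"generation {gen:2d}: std={std:.3f}")
# std trends toward zero: later generations know a narrower and
# narrower slice of what the original data contained.
```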
Generative AI could be a powerful tool, but even ignoring ethical considerations, this seems like a profoundly bad way to implement it.