BranBucket

What happens when, because it’s so quick and easy to churn out, 50% or more of the web is AI generated slush, which is then scraped and incorporated into the next generation of LLMs, which increases that percentage and in turn is then scraped, and so on, and so on?

How low can the quality of your training data drop before the results become intolerably bad? How do you raise the quality of that data without a massive investment of human labor? How much glue will be told to put on our pizza two years from now?

Generative AI could be a powerful tool, but even ignoring ethical considerations, this seems like a profoundly bad way to implement it.
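The feedback loop described above can be sketched as a toy recursion. All the numbers here are illustrative assumptions, not measurements: suppose half of newly scraped text is AI-generated, and AI output only retains some fraction of the quality of whatever it was trained on.

```python
# Toy model of the scrape -> generate -> scrape feedback loop.
# q = fraction of original human signal surviving in the training corpus.
# SYNTH_SHARE and RETENTION are made-up illustrative parameters.

SYNTH_SHARE = 0.5   # assumption: half of new web text is AI-generated
RETENTION = 0.9     # assumption: AI output keeps 90% of its training data's quality

def corpus_quality(generations, s=SYNTH_SHARE, r=RETENTION):
    q = 1.0              # generation 0: all-human corpus
    history = [q]
    for _ in range(generations):
        # new corpus = fresh human text (quality 1) + degraded AI text (quality r*q)
        q = (1 - s) * 1.0 + s * r * q
        history.append(q)
    return history

for gen, q in enumerate(corpus_quality(10)):
    print(f"gen {gen}: quality {q:.3f}")
```

In this toy version quality falls monotonically toward a fixed point of (1 − s) / (1 − s·r) rather than to zero, but that's an artifact of assuming a steady supply of fresh human text; shrink that supply and the floor drops with it.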


but ai has electrolytes!

M0oP0o

It’s what plants crave!

Blaster_M

I don’t just want AI news to fail, I want it to take the web-scraping trending post news bots down with it.

Bring investigative journalists back to news media.

ValueSubtracted



Been very impressed by the quality of reporting from 404 Media, and they seem to be making it work financially, so I'm feeling cautiously optimistic!


noo, this one dude on twitter said the state of contemporary journalism is great

vk6flab

What will it take for people to get it through their thick skulls that ChatGPT isn’t intelligent, doesn’t learn, and is a tool that can only generate plausible gibberish?

Using the same tools to detect such gibberish will give you more gibberish.

Garbage in, garbage out has been true since the difference engine; it’s just that today the garbage smells like English words. Still garbage, but not knowledge, intelligence or anything like it.

The machine learning approach used to build so-called large language models like ChatGPT is also used to create weather forecasting models that are bigger, better and orders of magnitude faster than anything available until now.

The tools have changed life, but I’m unconvinced that it’s a suitable, sustainable or realistic way to create artificial intelligence, despite claims to the contrary.


Nothing. It seems close enough to human for most people that they can’t think about it any other way.

scrubbles

People are so insistent that it’s AI that it all reminds me of blockchain. It’s new! It’ll change everything!

It’ll change some things. What we are seeing now is business forcing it into everything when really, right now, there are only a handful of things it makes sense to use.

It’s really great at giving you a starting point: a very rough outline of something. That’s the easy part. The hard part is turning that into something new and coherent, and for that I think modern AI is nowhere close. That needs a human.

ValueSubtracted

I think it’s definitely a bubble that will burst eventually.

At the same time, I don’t think there’s any way to put the toothpaste back in the tube. This technology is out there, and even once the hype has died down, we’re going to be dealing with it forever.


It is by definition AI

scrubbles

In the sense that AI is an extremely general term covering many different technologies, yes. But generative AI/LLMs are not true AGI, which is what people think they are. It cannot think, it cannot learn, it can only predict.

snooggums

People think the “intelligence” in AI is like the “hover” in hovercraft, as in the word is meant literally, but it’s actually like the “hover” in hoverboard.

scrubbles

That’s actually pretty good… the techbro equivalent of “We did it!”


It cannot think, it cannot learn, it can only predict.

That’s a distinction without a difference. If it can predict what an AGI would do in a given situation, then it is an AGI.

I’m not saying that it is an AGI, but the reason it isn’t is more than “it can only predict”.


Nobody who’s not an engineer seems to give a shit about - or, indeed, even understand - the nuances of LLM technology, or the technical reasons behind its limitations and the implications thereof. Hell, I know a lot of engineers who don’t care about or understand it at a meaningful level.


And some of the engineering types are busy kissing the feet of people like Altman and Musk so they don’t get a chance to even notice.


I manage computing for a large university. One of my recently graduated students told me that he thought technology just worked until he worked for me and saw the problems that come up. He was already a very tech-aware person and is going for a PhD in Informatics, so if even he didn’t understand this, what can we expect from the general public?
