It's becoming much harder, on a daily basis, to determine which content is original, thought out by a person, and trustworthy. Ironically, verifiably old content is easier to trust now. Examples from recent personal experience:
1) Some time ago I was searching for growing information about a specific and uncommonly grown plant, and was led to a top-ranked website with long pages containing everything about it, including other plants. Surprised at how prolific the writing was, I spent more than an hour on the website, taking notes, etc. Every few paragraphs it would include an Amazon affiliate link to something topical, which I thought was fair, until I realized that the links near the bottom of the page were looking increasingly random. Then it hit me: the website is all AI-generated, and the affiliate links themselves are also AI-chosen. And everything new I "learned" from that site was now useless, because I had no way to know what was grounded in actual agricultural experience and what was hallucinated.
2) Recently I did a YouTube search for a book I had just finished reading, looking for some reviews. I came across a channel that was reading the book as new audio (i.e., not the original published audiobook). I thought it was fan-made. The voice was beautiful, soothing, and natural, with all kinds of relevant emotions correctly included. I started listening to the book again, until I noticed a consistent word-ordering error every few lines. Then it hit me! The channel even included one upload with a video recording of a seemingly real person reading in that voice. Both the audio and the video are AI-generated, but it's very hard to tell.
3) Next to those videos, YT recommended many strange new channels. One had the photo and the exact voice of a famous (and now very old) physicist, with dozens of clickbaity titles about controversial topics in the field. The only tell was that the voice was too vigorous and consistently energetic; if you've listened to that physicist before, you know his cadence is slower. At first I thought maybe the channel was reading one of his books; no, the content itself was AI-generated, perhaps based on his books. There was a lot of engagement, with many comments like "mind blown" and "learned so much today".
Both #1 and #3 are harmful, because you think you're learning from a reliable source but end up learning hallucinated nothings. #2 I didn't mind much; I still enjoyed the new voice, and even preferred it over my original Audible version.
Something I've recently started seeing, maybe even an emerging #4, is AI-generated translations. On one end, you could have someone very intelligent writing well-crafted subject-matter expertise, or someone with valid thoughts they wish to express to the world in a language more widely spoken than their own.
On the other end, you could have someone who wrote a sentence or two in their own language and had some combination of AI generation and translation bloat it out.
In both cases you will get something that can look right and well thought out or explained, but will probably have at least some of the AI-slop signs present. I don't know what the solution is for this type, given claims that Google Translate has started doing this kind of translation for people. An AI translation is probably just as prone to hallucinations as any other AI output, but it will probably look more natural to readers than a direct translation.
You're making the classic mistake of looking for a trustworthy information source and then trusting it, instead of focusing on whether the information itself is trustworthy regardless of source. It's literally the same as my grandma saying "they said so on TV, therefore it must be true" while completely dismissing anything I've read on the internet because reasons.
If you develop the skill of judging information by its merit rather than source, you won't mind AI-generated content as long as it's helpful.
I talk to LLMs a lot. It's fucking great. Do I take everything they say at face value? No. But neither do I take at face value things that biological intelligence outputs.
Information itself cannot be trustworthy. It can be right, it can be wrong, or it can be somewhere in between. Only a source can have trustworthiness, as it's a mixed measure of reputation and provable accuracy.
You filter out known untrustworthy sources so you don't waste your time verifying false information 100x more than you need to. I know The Onion is a satire publication; I do not need to verify its claims. It's an intentionally untrustworthy source. I know that LLMs can hallucinate information, so I verify with a more trustworthy source. I cross-reference things random people say on the internet, because random people on the internet are not, individually, trustworthy sources of information.
If a rocket engineer explains to me why Rocket A isn't flight ready, I'm more inclined to believe them than if a random commenter on the internet explains it to me. Because the one source is more trustworthy than another, and if I wanted to verify the claim myself I'd have to spend a lot of time studying rocket science.
No, it's not the same as your grandma. The point is that it's now more expensive to find the correct information to learn from. You don't know it's an LLM ahead of time, and you may spend hours until you figure out something is off. That's why reputable sources will become more valuable.
> If you develop the skill of judging information by its merit rather than source...
Did you read example #1? I'm not talking about some piece of code from an LLM that you can verify or some political opinion that you can take with a grain of salt, but information that you can only gain and/or judge through expertise:
If you're not a physicist yourself, you can't judge "information by its merit" on specific physics topics, because you don't have a solid baseline.
Similarly, in growing plants, each plant has its own peculiarities, and only people experienced in growing it can tell you anything useful - it's knowledge accumulated by trial and error. Not knowledge that your "great discerning mind" can assess on its own. Even a botanist can't tell you the ideal growing conditions of a plant that they've never studied before.
What if your physics book is wrong because knowledge has advanced since it was released - you can still find lots of publications and people with degrees blissfully unaware of Hawking Radiation. What if your botanical book is wrong because facts have changed since then - climate is changing and so does flora. What if your book is wrong because it's state-funded propaganda mixed with petty fights of a bunch of people with suits and strong opinions disguised as academia - a huge chunk of linguistics is dealing with exactly this issue.
Again, you seem to miss the point that the idea of questioning new information, which was already useful for navigating life before LLMs, before television, before newspapers, before print, before clay tablets, even before speech itself, is just as applicable to LLMs as to any other form of communication. You just need to upgrade your strategies a little, and that's it. Don't blow this out of proportion: "somebody (gasp) lied to me on the internet!"
There are a lot of things where this just doesn't work. I was wrong about a lot of business-strategy questions when I was younger, to the point where I rejected what I now see were correct arguments against my view. How could I have gotten out of that trap without the ability to find trustworthy sources?
Well, if it's not disclosed, you could assume that somebody did due diligence for you and included sources. When I need reliable information, I don't trust an LLM even if all the information is included in the context window. Trying to make money on slop is really bad manners. It's a scam; you can't call it anything else.
Btw, I like AI; it has produced a ton of value for me. We just need to find a way to live with it without drowning in misinformation.
You do ultimately need to trust some sources to some degree. You can try to cross-correlate multiple sources (and this is in general a good habit!), but that depends on some level of trustworthiness in the sources you are looking at; you're not at all immune to misinformation by doing this (especially if multiple sources are, undisclosed, being generated by the same LLM; you could also get citogenesis even pre-LLMs). And of course for some things it's possible to try to verify directly yourself, but this is infeasible to do for everything you depend on.
I feel for you. I was looking for some wildlife videos on YouTube, only to find that all of them were AI-generated, trying to get views. I can only find somewhat reliable content if I filter for content from before the AI era.
Humans are also unreliable; we are competing for scarce attention, platforms decide what gets visibility, and we cater to their algorithms. You could say humans are prompted by the feed-ranking AI on what and how to publish.
Sure! Who knows what's on that concrete curb! If nothing else, I keep my kitchenware clean. Just pass it over my natural gas stove's burner. Treat it the same as you do your pocket knife before/after you remove a bullet or shot from a wound. Sterile is best!
> "We know that richer communities and schools will be able to afford more advanced AI models," Winthrop says, "and we know those more advanced AI models are more accurate. Which means that this is the first time in ed-tech history that schools will have to pay more for more accurate information. And that really hurts schools without a lot of resources."
Maybe AI should be a public service provided by the state, like education is? This will at least partially solve the issue of AI-access inequality. Personally, I wouldn't mind if EU provided such a service for its residents and citizens. It can also be more aligned with core EU values than offerings from grifter megacorps. Of course this would also require the usual checks and balances, as any concentrated power does.
What's up with the TeamYouTube account advising him to delete his X post for security reasons because the post contains a channel ID? As if a channel ID were not public information but some secret private key or something?
In a discussion of an article about encouraging fact-checking in writing, I wish you had made your quotes informative by replacing "many wise people" with the actual names of who said them.
For everyone else: the first paragraph appears to be a quote of C.S. Lewis around 1945 [0], and the second, of Thomas Jefferson in 1807 [1].
> Its landing module, which weighs 495 kilograms (1,091 lb), is highly likely to reach the surface of Earth in one piece as it was designed to withstand 300 G's of acceleration and 100 atmospheres of pressure.
Awesome! I don't know how you can design for 300 G's of acceleration!
Overbuild everything. For things that might be fragile-ish like surface mounted electronics, cast the whole thing in resin. As a sibling poster has mentioned, we shoot things out of artillery tubes these days that have way harsher accelerations than 300g.
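For a sense of scale, here's a rough back-of-the-envelope sketch in Python. The 495 kg and 300g figures come from the article quoted above; everything else is just standard constants.

```python
g = 9.81                # m/s^2, standard gravity
mass = 495.0            # kg, landing module mass (figure quoted above)
peak_accel = 300 * g    # ~2,943 m/s^2 at the rated 300g

# The structure has to carry roughly this force at peak load:
peak_force = mass * peak_accel
print(f"peak force ~ {peak_force / 1e6:.2f} MN")  # ~1.46 MN
```

About 1.46 meganewtons on a half-tonne object, which gives some intuition for why "overbuild everything" is the design philosophy.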
300g is nuts. Electronics in a shell is one thing; this is a landing craft. In a prior life, my designs had to survive 12g aerial-drop loads, and we had to make things pretty robust.
It also blew my mind that a human being, John Stapp, survived >40g acceleration and 26g deceleration in a rocket sled. I believe it was the deceleration that hurt him the most.
Gun scopes are rated for a minimum of 500g. Apparently that's the ballpark for recoil (the reaction force from the barrel briefly acting like a rocket engine, and/or the bolt/carrier bottoming out).
Acceleration is a vector. So if you apply the "deceleration" long enough, you'll eventually be accelerating in the opposite direction. Without a frame of reference it's all the same. Even with a frame of reference, you're still accelerating; it's just in the opposite direction of the current velocity.
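To make that concrete, here's a minimal Python sketch (all numbers made up): "decelerating" just means the acceleration vector has a component opposing the current velocity, i.e. their dot product is negative, and the same constant acceleration stops being a "deceleration" once the velocity flips sign.

```python
import numpy as np

def is_decelerating(velocity, acceleration):
    # "Deceleration" = acceleration with a component opposing velocity,
    # i.e. a negative dot product. Same vector, frame-dependent label.
    return np.dot(velocity, acceleration) < 0

v = np.array([10.0, 0.0, 0.0])   # moving along +x at 10 m/s
a = np.array([-3.0, 0.0, 0.0])   # constant 3 m/s^2 braking

print(is_decelerating(v, a))        # True: slowing down...
v_later = v + a * 5.0               # ...but after ~3.3 s velocity flips sign
print(is_decelerating(v_later, a))  # False: now speeding up in -x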
I get thrown through a tram in completely different directions depending on whether it accelerates or decelerates. So for sure a system's design must consider more than just the magnitude of acceleration.
When you go around a tight corner and are thrown to one side, what term would you use for the tram's change in motion then?
Deceleration is a useful but non-technical term, like "vegetable". A tomato is a fruit, which is a tightly defined concept, but it also sits in this loose category of things called vegetables. It's still useful to be able to call it a vegetable.
From a physics perspective, all changes in motion (direction and magnitude) are acceleration, and it's correct to say the designers had to consider acceleration in most (all?) directions when designing the tram. That includes gravity's, which is why they tend to give you seats to sit on rather than velcro panels and straps like on spaceships.
It is useful to say to your friend in the pub that you got thrown out of your seat by the tram's heavy deceleration, rather than give a precise vector.
Without looking out the window, how would you tell the difference between acceleration and deceleration? You can't.
And if you say, "well, one way I fly to the back of the tram and the other way to the front," you're arbitrarily associating "front" with deceleration and "back" with acceleration.
300g is 300g regardless of the direction of the acceleration vector.
> So for sure a system's design must consider more than just the magnitude of acceleration.
What else would you need to consider? Acceleration up? Down? Left? 20%x, 30%y, 40%z? There are an infinite number of directions.
Well, to be fair, the person you're replying to has a point. There's a continuous range of directions, but even though I'm no spaceship engineer, I suspect they're probably engineered to withstand acceleration better in some directions than others, given that pretty much only their thrust method, plus gravity at the source and destination, will actually be able to apply any significant acceleration.
Acceleration, deceleration, the point is: something is going to apply 300g in a certain direction, and you have to design for it.
It's not like you can tell whether you're going slow or fast, in one direction, the other direction, or even just standing still, if you close your eyes.
There's no need for the "/s" at the end there. Deceleration, especially in this case with a natural frame of reference, is just negative acceleration.
The magnitude of the velocity vector is dependent on the frame of reference.
If you measure the same object's velocity from a spaceship traveling through the solar system, you'll get a different answer from what we measure from Earth.
That's why physics doesn't distinguish between acceleration and deceleration. What looks like acceleration in one frame looks like deceleration in a different frame.
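As a quick illustration, here's a hypothetical Python sketch under Galilean relativity (all numbers made up): both observers measure the same acceleration vector, but the Earth observer sees the object slowing down while the ship observer sees it speeding up.

```python
import numpy as np

a = np.array([-2.0, 0.0, 0.0])        # same acceleration in both frames
v_earth = np.array([10.0, 0.0, 0.0])  # object's velocity measured from Earth
u_ship = np.array([25.0, 0.0, 0.0])   # ship's velocity relative to Earth
v_ship = v_earth - u_ship             # Galilean transform: [-15, 0, 0]

# The sign of v.a says whether speed is rising or falling in that frame:
print(np.dot(v_earth, a) < 0)  # True:  Earth sees the object decelerating
print(np.dot(v_ship, a) < 0)   # False: the ship sees it accelerating
```

Same object, same physics, opposite labels, which is exactly why the acceleration/deceleration distinction isn't fundamental.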