klft's comments | Hacker News

... some time later

Eve: Will Bob ever propose?

ChatGPT: Based on the search results, Bob will propose during his trip with Alice to Scotland next year!

Eve: wtf?


Whisper is used for speech-to-text conversion, not to generate the text.


It's still AI generated text that is not in any way guaranteed to be correct or accurate.


Its accuracy can be and is quantified.


Would a SQL optimizer use a generic solver as described here or are there special algorithms for such problems?


Now you can add cloud resiliency to YOUR resume.

Win-win.


I did list it as cloud optimization, but at least I have receipts for that (and I've done more notable work in that area as well).


ChatGPT NT


> Also, I read a lot of people here saying this is about engineers vs scientists…I believe that people don’t understand that Data Scientists are full stack engineers

It is about scientists as in "let's publish a paper" vs. engineers as in "let's ship a product".


The Century of the Self is a BBC documentary about "... how those in power have used Freud's theories to try and control the dangerous crowd in an age of mass democracy." (1)

(1) https://en.wikipedia.org/wiki/The_Century_of_the_Self


That's a must watch. Adam Curtis' work is brilliant.


Pope's infallibility vs. Python interpreter.


Well, you need infallibility if you're going to code in Python.


from the linked article:

> people and organisations alike tend to be judged by the worst thing they do


GPT-4 (I haven't really tested other models) is surprisingly adept at "learning" from examples provided as part of the prompt. This could be due to the same underlying mechanism.


I’ve found the opposite in trying to get it to play Wordle. It’ll repeatedly forget things it’s seemingly learned within the same session, all the while confident in its correctness.


LLMs are trained on 'tokens' derived from words and text. Even though some tokens are just one letter, the bulk roughly approximate syllables, as though you were building a dictionary for data compression.

It might be more effective to try to play 'tokendle' before trying to play 'wordle'.
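To make the point concrete, here's a toy greedy longest-match tokenizer over a made-up vocabulary (the vocabulary and splits are illustrative, not any real model's token table), showing why "wordle" may never reach the model as individual letters:

```python
# Hypothetical mini-vocabulary; real BPE vocabularies have ~50k-100k entries.
VOCAB = {"word", "le", "wor", "dle", "w", "o", "r", "d", "l", "e"}

def tokenize(text, vocab=VOCAB):
    """Greedily match the longest vocabulary entry at each position."""
    tokens = []
    i = 0
    while i < len(text):
        for j in range(len(text), i, -1):  # try longest match first
            if text[i:j] in vocab:
                tokens.append(text[i:j])
                i = j
                break
        else:
            raise ValueError(f"no token covers {text[i]!r}")
    return tokens

print(tokenize("wordle"))  # ['word', 'le'] -- not w, o, r, d, l, e
```

So a Wordle-style constraint like "contains an 'r' in position 3" lands awkwardly on a model that sees two opaque chunks rather than six letters.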


Do you know whether LLMs grasp the equivalence of a word expressed as one whole-word token and as a series of single character tokens that spell out the same word? I'm curious if modifying the way some input words are split into tokens could be useful for letter-by-letter reasoning like in Wordle.

Or would an LLM get confused if we were to alter the way the tokenization of the input text is done, since it probably never encountered other token-"spellings" of the same word?


From what I understand, anything goes: a token could be single letters, a whole word, or even a sentence fragment or a concept ('The United States of America'). Think of it as the dictionary for a compression algorithm and you wouldn't be too far off.

https://www.geeksforgeeks.org/lzw-lempel-ziv-welch-compressi...

For 'code table' substitute 'token table'.
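The analogy can be sketched in a few lines. This is a minimal LZW encoder (a simplification of the algorithm in the linked article): it starts from single characters and grows longer table entries as it scans, much like a tokenizer's vocabulary grows from letters toward words:

```python
def lzw_encode(data: str):
    """Minimal LZW: grow a code table (cf. a token table) while scanning."""
    table = {chr(i): i for i in range(256)}  # seed with single characters
    next_code = 256
    current, out = "", []
    for ch in data:
        if current + ch in table:
            current += ch                     # extend the current phrase
        else:
            out.append(table[current])        # emit the longest known phrase
            table[current + ch] = next_code   # learn a new, longer entry
            next_code += 1
            current = ch
    if current:
        out.append(table[current])
    return out

print(lzw_encode("ababab"))  # [97, 98, 256, 256]
```

By the third repetition, "ab" is a single code, just as frequent letter sequences become single tokens.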


What approach are you using to get the LLM to split words into individual letters?


Not really. That's called few-shot (in-context) learning.

It's basically unrelated to what happens during training, which uses gradient updates.
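The distinction is easy to see in code: a few-shot prompt just concatenates examples into the context window, and no weights change (the task and format below are made up for illustration):

```python
# Few-shot prompting: the "learning" lives entirely in the prompt text.
examples = [
    ("great movie!", "positive"),
    ("total waste of time", "negative"),
]
query = "I loved every minute"

prompt = "\n".join(f"Review: {text}\nSentiment: {label}"
                   for text, label in examples)
prompt += f"\nReview: {query}\nSentiment:"
print(prompt)  # the model completes the final "Sentiment:" line
```

Gradient training, by contrast, would backpropagate a loss through the model and permanently alter its parameters; here the examples vanish as soon as the context does.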

