Mind-controlling a computer is definitely required to make the metaverse a complete experience. But surgically implanting something into the brain? I don't think that's the right way.
Unfortunately, because of the thickness of the skull, I'm not sure there will ever be another way. Any signal we get from outside is heavily attenuated compared to what we get by going in.
It could be a generational schism. You can imagine some parents starting to do this procedure if it's similar to cochlear implants, but most adults would refrain from doing it, and after a while all the youngsters communicate and think at their own super speed.
Everything built on top of ChatGPT seems to me like bullshit, created simply to generate clickbait, with no future whatsoever.
The next big thing in AI will be ChatGPT 5 or a competing model with lower memory requirements.
IBM had the best-in-class system back then, but they themselves didn't know what to do with it. They weren't able to create relevant use cases, only demos with no business value.
Isn't it curve fitting at the end of the day? A multi-parameter curve fit?
Why do people say they don't know how it works? Yeah, I get that the cocktail is fairly complex after training on a huge dataset (almost all possible logical scenarios). But saying we don't know how it works seems like adding mysticism to it, which attracts clicks but isn't an honest description.
"Curve fitting" is the objective; the function encoded in the weights is the solution, and that function is not actually well understood. See work from Anthropic[1] and Google[2] that explores this.
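To make the objective/solution distinction concrete, here is a minimal sketch (all names and hyperparameters here are my own illustration, assuming NumPy): the training objective below is literally curve fitting, i.e. minimizing mean squared error by gradient descent, yet the trained weights that end up solving XOR are an opaque pile of numbers that don't explain *how* the network solves it.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: a small function a single linear fit cannot capture.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer with tanh activation.
W1 = rng.normal(size=(2, 8))
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1))
b2 = np.zeros(1)

lr = 0.5
for _ in range(20000):
    # Forward pass.
    h = np.tanh(X @ W1 + b1)
    out = h @ W2 + b2
    err = out - y                      # gradient of MSE w.r.t. out (up to a constant)

    # Backward pass: plain gradient descent on the curve-fitting objective.
    gW2 = h.T @ err / len(X)
    gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)
    gW1 = X.T @ dh / len(X)
    gb1 = dh.mean(axis=0)

    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

pred = np.tanh(X @ W1 + b1) @ W2 + b2
print(np.round(pred.ravel(), 2))   # the fit: close to [0, 1, 1, 0]
print(W1)                          # the "solution": weights that explain little on inspection
```

The objective (the loss) is trivially simple; everything interesting lives in the learned weights, which is exactly the part the interpretability work above tries to reverse-engineer.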
As an analogy, consider applying the same argument to the AlphaGo value function. It's "just" fitting curves to the statistics of millions of self-played games. However, to capture those statistics effectively, the network needed to develop a bunch of heuristics. Needless to say, these heuristics are not understood (otherwise we'd already know the principles needed to play at AlphaGo's level), and they are not just exhaustive lists of statistical trends but something more like strategies[3].
Recent work[4] strongly suggests that "grokking" (a striking but not unnatural[5] form of generalization) involves networks transitioning from memorized statistics/solutions to a general solution. The curve fitting perspective would totally miss all this for a comfortable but misleading story: "the objective is curve fitting so it's just interpolating data points".
Would "it's curve fitting by building an internal representation to better describe all the curves seen so far" be a better layperson-ish analogy in your opinion?
Depending on how the model is set up, we'd say 'set of basis functions', 'language', or 'strategy'.
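For what it's worth, the 'set of basis functions' framing has a concrete classical analogue. A minimal sketch (assuming NumPy; note the basis here is *fixed*, whereas a network learns its basis, and that learned part is exactly what's hard to interpret): fit sin(x) as a linear combination of polynomial basis functions.

```python
import numpy as np

x = np.linspace(0, np.pi, 50)
y = np.sin(x)

# Design matrix: columns are the basis functions 1, x, x^2, x^3, x^4.
Phi = np.vander(x, N=5, increasing=True)

# Least squares picks the combination of basis functions that best fits y.
coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)
residual = float(np.max(np.abs(Phi @ coef - y)))

print(coef)      # the representation: weights over the basis
print(residual)  # small: this basis describes the curve well
```

In the layperson analogy above, training a network is like doing this while simultaneously inventing the columns of `Phi`.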
Even assuming that curve fitting is in any way a meaningful description (let's say I give you that), it tells us absolutely nothing about the mechanisms that evolved in the neural pathways, how it encodes memory of the current game distinctly from memory of past games, the reasons for the strengths or weaknesses of one trained instance against another, or ways we could optimise the architecture to better complement the way it functions. It doesn't help us engineer the system, or reason about its possible limitations or failure modes. In other words, it doesn't tell us anything actually useful about it.
Well, for me they have already replaced my first preferences. I used to Google or go to Stack Overflow to resolve issues and would need hours to reach conclusions. Now with ChatGPT I simply start my research there.
If it swims like a duck, and it quacks like a duck, then it is good enough to be a duck.
Well, you can argue ducks lay eggs too; then we'd need to solve and code that too.
It's always better than the previous one. Nobody is creating life here, but it's an attempt to derive intelligence, and seeing it come to this point, it seems we are, so far, right on track and quite far from where it started.