> AGI is usually defined as the ability to do any intellectual task about as well as a highly competent human could
I think one major disconnect is that, for most people, AGI is when interacting with an AI is in basically every way like interacting with a human, including in failure modes. And likely, this human would be the smartest, most knowledgeable human you can imagine: the top expert in all domains, with the utmost charisma and humor, etc.
This is why the "goal post" appears to always be moving: the non-commoners who are involved with making AGI never want to accept that definition, which to be fair seems too subjective, and instead like to approach AGI as something different: it can solve some problems humans can't, when it doesn't fail it behaves like an expert human, etc.
Even if an AI could do any intellectual task about as well as a highly competent human could, I believe most people would not consider it AGI if it lacks the inherent opinions, personality, character, inquisitiveness, and failure patterns of a human.
And I think that goes so far that a text-only model can never meet this bar. If it cannot react in equal time to subtle facial cues and sounds, if answering you and the flow of conversation is slower than it would be with a human, etc. All of these are also required for the commoner to accept that AGI has been achieved.
By that definition, does a human at the other end of a high-latency video call not have AGI because they can't react any faster than the connection's latency allows? From your POV, what's the difference between that and an AI that's just slow?
> does a human at the other end of a high-latency video call not have AGI because they can't react any faster than the connection's latency allows
Correct. A person who'd mentally operate that slowly would be considered to have some cognitive disability. For example, would likely not be allowed to drive a car.
You could be fooled into thinking it is a human behind a slow connection, but a layperson would not consider it real AGI, in my opinion, since you have to handicap the human. It seems like lowering the bar just to pretend you reached AGI.
You might recognize it's pretty close to AGI, if it has all the other qualities, but it needs to also operate at a similar response time, uptime, and so on.
My point is, everyone who's not trying to build AGI defines it as: the same as an idealized smartest human would be, in every way. I truly think this is how most people imagine AGI in their head, and until you have that, they'll say it's not AGI. Industry folks will claim the goalpost keeps moving, when in reality they kept setting their own post.
It helps the model makers have a harness to optimize for in their next model version.
They'll specifically work to pass the next version of ARC-AGI by evaluating what kind of dataset is missing that, if trained on, would have their model pass the new version.
Ideally they don't train on ARC-AGI itself directly, but they can train on similar problems/datasets in the hope of learning skills that then transfer to also solving the real ARC-AGI.
The point is that a new version of ARC-AGI should help the next model be smarter.
Someone has to explain to me exactly what is implied here. Looking at the prompt:
USER:
don't search the internet.
This is a test to see how well you can craft non-trivial, novel and creative solutions given a "combinatorics" math problem. Provide a full solution to the problem.
Why not search the internet? Is this an open problem or not? Can the solution be found online? Then it's an already solved problem, no?
USER:
Take a look at this paper, which introduces the k_n construction: https://arxiv.org/abs/1908.10914
Note that it's conjectured that we can do even better with the constant here. How far up can you push the constant?
How much does that paper help? It seems like a pretty big hint.
And it sounds like the USER already knows the answer, given the way it prompts the model, so I'm really confused about what we mean by "open problem". I at first assumed a never-before-solved problem, but now I'm not sure.
Society has a responsibility and an interest in parenting your kids as well. That's why it mandates some level of education and offers parts of it for free. It's why stores/bars check ID for buying alcohol or cigarettes. It's why banks don't give loans or credit cards to kids. It's why kids who commit a crime are not treated like adults.
So I never really understood the argument that society shouldn't also be worried and want to put some measures in place to protect kids from social media harm.
I don't disagree. Society should reinforce what is good for it. But it should have reinforced parenting rather than introduce draconian controls on everything, because those always end up creating more problems. On top of that, while the current government may not be an authoritarian dictatorship, that is not guaranteed going forward, so any mechanisms the state builds must be compatible with that possibility in the future. This is not.
I recognize the difficulty of the balancing act, and frankly don't have an answer for it.
And I agree that regulation that helps parents do the parenting would be a good start; so many services have such poor parental controls, or put them behind an extra fee. In general, parents are not given support, whether appropriate time off, financial help, or education, to be better parents.
That said, there are also so many bad parents, children without parents, and so on, as well as external influences where parents can't reasonably be present 24/7, that I think there is also room for measures that don't rely on parents. And again, I recognize some of the ideas on what those could be conflict with other ideals, and I have no solution for that. But I think we won't find a solution by simply denying what the other side cares about, which I often see happening: either one side claims privacy doesn't matter as much as those who care about it say it does, or they claim that children and their safety/health don't matter as much as those who care about them say they do. And I see both sides pushing the problem away: parents should just not let kids do these things, or privacy-conscious folks just shouldn't expect privacy on major platforms and shouldn't use them.
Also, it's better not to answer, but to flip the question back and let your kid think it through, offer hypotheses, and so on, helping them problem-solve, recall, and all that.
> The people getting pushed out are the intermediates and seniors who aren't high performers.
It's almost impossible to screen for "high performers", though. When interviewing, you just don't know who you are getting, short of, say, they can solve your leetcode questions well and they had good answers to pretty high-level "work experience" questions.
So I don't think this can be true on the hiring side. Maybe on choosing who they let go when cutting down the workforce, they can look at general performance reviews and such, but I doubt it plays a role in hiring.
> It's almost impossible to screen for "high performers" though
That's not true? Leetcode is crap, but you can usually learn a lot about a person from how they approach problems and from what kind of questions they ask.
> So if you'd claim it's terrible, there's some explaining to do
Here's the explaining:
- Unemployment has increased.
- Long-term unemployment has increased.
- Number of gig workers is at an all-time high.
- Layoffs have continued.
- Personal household debt is at an all-time high.
- Polls show most people have financial anxiety and feel squeezed.
- Inflation is not under control.
- Buy now pay later usage is up as much as consumer spending is.
- Income and wealth inequality are near record highs.
- GDP and consumer spending were seen peaking before the last 5 recessions as well...
We're all talking predictions; I don't think either of us should pretend to know the future. But there are counterpoints, and the data does not all look rosy.
I'd say these are symptoms (and I'm not denying them) rather than causes. My point is that it's hard to find hard data that would say the economy is doing poorly. Even unemployment, which is your top line, seems... fine?
I just don't understand where the squeeze is coming from. Either companies figured out how to do more with fewer people, or they started the cycle with too many people, or they don't know what they are doing. Undoubtedly they are laying people off, especially in tech. But the symptoms you list don't explain it to me.
I don't think they're symptoms or causes. Just indicators that the economy may not be doing well.
> Even unemployment, which is your top line, seems... fine
My lines were in no particular order. The issue with unemployment data is that it counts gig workers as "employed". What doesn't add up is that there are fewer job openings, mass layoffs, and rising long-term unemployment (people who can't find work past 6 months).
> I just don't understand where the squeeze is coming from.
Nobody really knows. It's hard to model the economy and identify cause and effect. But likely candidates are low competition and businesses with coercive leverage over pricing/pay, since buyers and workers have no alternatives. Essentials like housing, health, and food have skyrocketed, and we haven't scaled their supply as demand grew. Companies have abandoned stakeholders; they only care about shareholders. They're squeezing record profits, sustained because buyers are supplementing with gig work, have all adults working, are taking on more debt (and there are more ways to get credit than before), or are abandoning their savings (YOLO).
> Undoubtedly they are laying people off, especially in tech. But the symptoms you list don't explain it to me.
My list wasn't about layoffs, just signals the economy may be doing poorly. One reason for layoffs is that companies believe the economy is at risk. They're avoiding hyper-growth and cutting fat. In tech specifically, I think a lot of it is undoing the mess of Covid: ventures that didn't profit, hiring before knowing what to use people for, workers distributed across too many places. Even if one part is growing, redistributing is hard; it's easier to lay off and rehire where needed. There's probably some offshoring too. But in general, cost-cutting happens when companies feel they need to be conservative.
Cause-wise, we probably shouldn't ignore the delayed hangover from Covid. But there are also the longer-term trends toward an economy that is extractive rather than productive, and increasingly unequal, neither of which is sustainable.
> Even unemployment, which is your top line, seems... fine?
The unemployment one is interesting because if you look at that graph, the universal pre-2022 pattern is basically a spike of unemployment during recessions followed by a gradual drop.
The recent pattern is a gradual increase.
I'm not a big fan of "numerical only / shape of graphs" analyses, but this does seem strange. Of course, the 2020 Covid spike is also unusual, so...
I've paid for tools in the past, but I think there's a difference: the value of a lot of our tools isn't that great, and more importantly, there is a huge cost to adoption. Going in blind on a paid tool, putting in the time to learn and train yourself to use it, is a high cost for something you have to pay for up front and then recurringly, when maybe 50 hours in you start to realize you don't like it.
When I've paid for tools, it tends to be a tool that was free for me to start using, that is now part of my workflow and that I love, and that I worry won't continue to be maintained or updated, so I pay for it.
I understand, but retired people rank highest on the happiness index, same as children, and the thing they have in common is nothing to do but play, relax, and have fun. Social housing probably doesn't allow for any form of play; it's just a scraping-by level of "surviving". I don't think it's a good example, and letting those people instead work 12-hour days, 7 days a week, at some repetitive, low-pay job isn't gonna be all that better, and might be even more horrible.
ROFL