Hacker News | new | past | comments | ask | show | jobs | submit | norvig's comments

Larry Wall is Brat.


Thank you for archiving!


I had "Danny Dunn and the Homework Machine" as a child. I don't think it had a direct influence on my career in AI, but when I had a college job as a computer help desk consultant, I would keep it on my desk, and when a student would ask "This is a really strange question, but ..." and I had heard the same thing multiple times before, I would respond "Hmm, let me consult a reference manual", flip through "Danny" for a bit, and then give the answer that I already knew.


"Joins Stanford HAI" is correct; "Leaves Google" is not right–I'm keeping my Google badge, but will spend most of my time at Stanford.


Hey Dr. Norvig, I really enjoyed your podcast episode with Lex Fridman a couple of years ago. Once you get settled in, it would be great to hear an update on how it's going, some color around the background and objectives of the program, and maybe just riff on the subject for a bit. Thanks!


I think sadly Fridman has since left the path of conducting interesting interviews with accomplished AI researchers and now caters to a kind of vapid pseudo-philosophical TED-talk crowd.


I mean, there's no question that he chose to branch out from strictly hard-core AI researchers. And I don't watch every episode, so I may have missed some of the "bad" ones, but from what I've seen, most of his interviewees are still respected / credible scientists, with a smattering of "other" mixed in here and there. In the past month or so he's had Jeffrey Shainline[1], Travis Oliphant[2], Jay McClelland[3], Douglas Lenat[4], Donald Knuth[5] and Joscha Bach[6] as guests. That's a pretty impressive group, IMO.

[1]: https://www.nist.gov/people/jeff-shainline

[2]: https://en.wikipedia.org/wiki/Travis_Oliphant

[3]: https://stanford.edu/~jlmcc/

[4]: https://en.wikipedia.org/wiki/Douglas_Lenat

[5]: https://www-cs-faculty.stanford.edu/~knuth/

[6]: http://bach.ai/


Yeah, I think I was overly harsh in my comment; see the sibling reply for why this 'other' category gets me so riled up. If you ask me, Joscha Bach is also somewhat in the category I mentioned, but I see your point.


Interesting. When I look at Joscha's background and work[1][2][3], he seems pretty credible to me. Is there anything specific he's said/done that puts him in your "other" category?

[1]: https://en.wikipedia.org/wiki/Joscha_Bach

[2]: https://scholar.google.com/citations?user=Q_yeuCUAAAAJ&hl=en...

[3]: https://www.amazon.com/Principles-Synthetic-Intelligence-Arc...


The only thing I know him from is his CCC talk, which is definitely interesting, even dazzling for all the ideas it ties together, but in the end it isn't really presenting anything new, so to me it is a bit of intellectual popcorn, like a TED-talk. I don't know anything about his actual research (which I think is unrelated to most of what he talks about) even though I did my PhD in an adjacent field. I believe as a researcher/teacher he is not in the same category as Peter Norvig and some of the other people you mentioned.


Gotcha. Sounds like we may have a difference of perspective. I am not familiar with the CCC talk you speak of, and am mostly familiar with Joscha's work (to the extent that I am, which is not "deeply" so) from his work on MicroPsi[1].

[1]: http://www.cognitive-ai.com/page2/page2.html


I knew about Psi theory, from Dietrich Dörner, but not this.


I'm totally not an expert on this, but my understanding is that Bach's work on MicroPsi is a follow-on/extension of Dörner's Psi theory.


Joscha is actually very accessible on Twitter (at least in replies to his tweets). Might be worth picking apart one that sits funny with you, to test your understanding of his positions.


What interviews make you think that?


There were several that gave me this impression but Eric Weinstein and one about UFOs (forgot the name of the interviewee) come to mind.


I'm guessing you are referring to David Fravor[1] when you say "the one about UFOs".

I dunno. I'm a skeptic by nature (not just of UFOs, etc., but of almost everything) and I watched that episode and thought it was good. Fravor seemed like a sharp, knowledgeable, down-to-earth guy who was simply stating what he experienced... and went to great lengths to be clear that he wasn't necessarily positing that what he saw was caused by Little Green Men from Mars.

FWIW, I don't believe that intelligent aliens are visiting Earth, although I do believe that it's likely that there (is|was|will be) intelligent life elsewhere in our universe at some point in time. Given that bias, I didn't find anything particularly objectionable in the Fravor episode. But perspectives vary, of course...

[1]: https://lexfridman.com/david-fravor/


It's ok man, it's not a religion. You can skip the ones you don't like. He's also had James Gosling, Don Knuth (x2), Jim Keller (x2), Brian Kernighan, and a bunch of other legends.


You are right, of course, but this whole phenomenon really rubs me the wrong way. There is a whole host of podcasts now that provide a large audience to borderline crackpots. I believe the large audience that can now be reached with such stuff actually pushes otherwise quite intelligent people toward adopting such theories. Another good example is Avi Loeb and the alien-ʻOumuamua theory, explained quite well here: https://www.reddit.com/r/slatestarcodex/comments/o1dhlf/comm...


One great reason to come to HN is that you can find a thread on any tech personality, and semi often they'll just show up in the comments.


I almost answered "who are you to talk about P.N." to his own comment :cough:


We're quirky like that.


After reading and running his code etc, it is a shock to see him turn up here.


Why, do you think he doesn't use the internet?


> norvig.com is protected by Imunify360

> We have noticed an unusual activity from your IP and blocked access to this website.


A huge fan of those string algorithms and those notebooks. Real craftsmanship.


Big fan of your spellchecker


Nice Peter.


I ran Search for 5 years or so; then ran all of Research for the next 5; then had an increasingly smaller portion of a huge, growing Research initiative. This past year I enjoyed mentoring startups in ML through the Google for Startups program. But m0gz got it pretty much right.


Sorry for a snarky comment, but doesn't that cover the time when HN started noticing that Search stopped returning results for the query that was requested, and instead got "too clever" about it with no way to override?


The time period he's talking about is 2001-2006. HN didn't even exist then. That was when Google was basically like magic and let you find stuff you never knew existed.

HN started complaining about Google being too spammy around 2008, and then about results getting too clever around 2012 or so.


This was not clear, because one of the slices did not mention the span it covered.


Did your job require you to code?

How did you manage to keep sharp at coding?


I use "Ali" and "Bo", because they are (a) gender-neutral, (b) less Western, and (c) prefixes of "Alice" and "Bob".


If you are using the English language and names chosen to have unique starting letters in the Latin alphabet, starting with the first letter of that alphabet and moving forward, you have not meaningfully avoided Western (and more specifically Anglosphere) cultural bias, and avoiding domain-specific conventional names for that reason is a particularly meaningless gesture.


…and retain the important property of being in alphabetical order.


to gain... absolutely nothing.

I would even go as far as to claim you are devaluing your work by making your examples more confusing to the reader.

Why not call A and B, 漢 and الْحُرُو

That would surely be better


I agree, tuvistavie! But my default machine has 3.7 installed, and I didn't want to get too far ahead of readers. Sometime next year I'll go to 3.8, and look forward to using ":="
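For readers who haven't used it: `:=` (the "walrus operator", added in Python 3.8 via PEP 572) binds a value inside an expression, which saves a line in the common match-then-test pattern. A minimal sketch (the regex and string here are made-up examples, not from any of the notebooks):

```python
import re

line = "error: code 404"

# Python 3.7: search, assign, then test in separate statements.
# Python 3.8+: bind the match object inside the condition itself.
if (m := re.search(r"code (\d+)", line)):
    print(m.group(1))  # prints "404"
```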


madhadron, I'm a Common Lisper, so I'm good with `first` and `rest`.


Good point, dmurray. It was a subtle point here, and probably I should have commented on it.


I regret causing confusion here. It turns out that this correlation was true on the initial small data set, but after gathering more data, the correlation went away. So the real lesson should be: "if you gather data on a lot of low-frequency events, some of them will display a spurious correlation, about which you can make up a story."
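The lesson generalizes, and a throwaway simulation (entirely made-up numbers, not Google's data) shows the mechanism: track enough rare, independent events across a small group and some event is nearly guaranteed to look strongly correlated with performance purely by chance.

```python
import random

random.seed(0)

n_hires = 30      # hypothetical: a small pool of hires
n_events = 1000   # many low-frequency signals tracked per hire

def pearson(xs, ys):
    """Plain Pearson correlation; returns 0.0 if either side is constant."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5 if vx and vy else 0.0

# Performance scores, and events that fire rarely and are
# independent of performance by construction.
performance = [random.gauss(0, 1) for _ in range(n_hires)]
events = [[1 if random.random() < 0.1 else 0 for _ in range(n_hires)]
          for _ in range(n_events)]

best = max(abs(pearson(e, performance)) for e in events)
print(f"strongest (purely spurious) correlation: r = {best:.2f}")
```

With 1000 independent events and only 30 people, the largest observed |r| is substantial even though every true correlation is zero; and any such event comes with a ready-made story.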


So can you now say with some confidence that competition performance doesn't correlate with job performance? That's still kind of interesting, although less so than the original conclusion.

The explanation of the effect did seem a bit too convenient.


You and/or Google HR were also prominently quoted as saying, IIRC, that GPA, standardized test scores (and interview ratings?) had no observable correlation with job performance either. I always wrote that off as just range restriction/Berkson's paradox, but did those also go away?


This made me reread the article[0] again (if we were talking about the same one) and I don't see any mention of interview ratings and job performance.

[0] - https://www.nytimes.com/2013/06/20/business/in-head-hunting-...


Hm? It's right there at the start:

"Years ago, we did a study to determine whether anyone at Google is particularly good at hiring. We looked at tens of thousands of interviews, and everyone who had done the interviews and what they scored the candidate, and how that person ultimately performed in their job. We found zero relationship.

...One of the things we’ve seen from all our data crunching is that G.P.A.’s are worthless as a criteria for hiring, and test scores are worthless — no correlation at all except for brand-new college grads, where there’s a slight correlation. Google famously used to ask everyone for a transcript and G.P.A.’s and test scores, but we don’t anymore, unless you’re just a few years out of school. We found that they don’t predict anything."


Seems counterintuitive. Naively, high GPA = High work ethic + IQ, which surely plays a role in job performance, no?


Of course, but that's where selection processes and psychometric considerations start to play havoc with naive correlations.


Interview ratings would be a surprising one, hard to imagine not overhauling the system after discovering that.


I remember reading something about Google interview ratings being uncorrelated to job performance, but that's clearly a conditional correlation based on the person being hired. For all we know, the ratings might be highly correlated with job performance up until the hire/no-hire cutoff. After all, their primary purpose is to make that binary hire/no-hire decision. Hopefully, the scoring system is hyper-optimized to be a good signal right around the hire/no-hire boundary, as the scores themselves aren't that useful for obvious hires and obvious no-hires: the scores are a decision-making tool.

In order to really get a good assessment of whether the interview ratings were effective, they'd need to also hire a random, unbiased sample of those who fail the interview process. There are alternative ways of slicing the data to help give insight, such as looking only at those who barely passed the interview process, or looking only at the bottom 10% of performers. However, when you're looking at such a highly biased sample (only the small-ish percentage of people hired), it's hard to say what the correlation is across the entire interview population.

At the risk of repeating myself: we don't particularly care about the predictive power of the scores across the whole range, only about their predictive power across candidates who are neither obvious hires nor obvious no-hires. That's the range where the power of the interview scores as a decision-making tool matters most.

Also, if two metrics disagree, it's not clear which one is problematic. It's possible that a poor correlation indicates that there's a problem with the performance rating system.


> I remember reading something about Google interview ratings being uncorrelated to job performance

You haven't; Google's interviews are correlated with job performance. They have data on it internally; people who work there can look it up. What you probably read was that brain teasers like "why are manhole covers round?" don't correlate with job performance.


It was while I worked there that I read something briefly about them being uncorrelated, but it was probably just some popular-press mischaracterization. I'd done over 100 interviews for Google SWEs, and wasn't aware of where to look up the data internally. In any case, the article I read wasn't interesting enough for me to do more digging.

I guess I should have been more critical at the time. Thanks for the clarification. Is it widely known where to look up this data internally now? I left Google over a decade ago.


When I was there, I just searched for it on Moma, and they had a paper on it showing the correlation coefficients.


Also, for any given title, people on HN will come up with anecdata to support its assertion.


Thanks for clearing that up. Was job performance still positively correlated with higher GPA on that larger dataset?


Notice this comment is from Peter Norvig. ^^^^^


Ah, I remember hearing this from you in person in 2011 and have repeated it occasionally since. Thanks for the update!

