
Last time I dove into the research on Math Blaster, I found it had no impact on student learning.

The Witness was fine, but The Looker was much better.

I think The Looker might appeal both to those who loved The Witness and those who didn't. It has several laugh-out-loud moments and some fun puzzles of its own - a wonderfully silly send-up, and it's free, too.

https://store.steampowered.com/app/1985690/The_Looker/


Tunic is such an incredible experience. If you ever enjoyed the original Zelda and its manual, you simply must play Tunic. It captures something incredible. And it has some amazing twists.

But it's hardly what I would consider a puzzle game.

You could consider it a jigsaw puzzle game.

I'd say it's either an action-adventure game with some mystery, or a mystery game with some action-adventure, depending on the player.

TV has the “puzzle box” genre, and that's a much better fit for what Tunic is.

Where are you seeing CS classes with increasing enrollment? Everyone I know is saying they're seeing smaller classes. Maybe some upper-division courses from the last swell, but we're all definitely declining this year and last year, from what I'm seeing.

I agree; I think many people who rail against exams underestimate how important memory is to more complicated skills. How can you debug a complex application if you have to keep looking up every operator and keyword in the language you're using? It'd be like trying to interpret poetry in a foreign language while having to look up every single noun. I'm not saying people can't do it, but it's tedious and slow, and you probably wouldn't think of them as a "professional worth paying for their service". Some amount of memorization is key.

It still doesn't feel to me that those things are similar. A sit-down exam is a time-limited, high-pressure situation where you're expected to demonstrate proficiency in the things you've learned over the past several months. Sure, much of that learning builds on stuff you've learned previously, but the focus is on the prior semester (or half-semester, for midterms).

When I sit down to debug a complex application, I'm drawing on my prior 25+ years of experience. While I certainly would rather fix the problem faster rather than slower, I don't have a time limit, and usually taking my time (or even leaving the problem alone for hours or days) is more effective than trying to work quickly and get everything done immediately.

The last time I sat for an exam was in 2003, and I honestly have not experienced anything in life since then that feels like that. Even job interviews have not felt similar enough to me to evoke that same feeling. (Frankly, I've enjoyed most job interviews; I don't think I've ever enjoyed an exam.) That's just my experience, of course, but I don't feel like an outlier.


Exams happen all the time in real life. Or rather, situations where you can't just look up fundamental knowledge: job interviews, presentations, even mundane work tasks - all of these require you to know the basics quickly. "The basics" are relative, of course, but I often point out to my students: "you don't care if your doctor needs to look up the specific interactions of your various meds. You do care if you see them googling 'what is an appendix'."

Proctored, in-person exams are the only reliable mechanism we have for ascertaining whether a specific individual has mastered key fundamentals and can answer relevant questions about them in a relatively timely fashion. Everything else is details and thresholds - how fast you need to be able to recall, how deep, which details are fundamental. From there, I think it's fine to hate poorly made exams, and it's a given that many folks making exams have no idea what they're doing (or don't have the resources to do it right). But the premise of an exam is not completely divorced from reality.

I think many of us would agree that job interviews (in tech at least) are horribly broken, because they don't do a good job of testing candidates' ability to do the actual work they'll be doing day-to-day. So saying exams are like job interviews is not a positive for exams. And, for most people, the ideal is to find a job and stick with it for years, so it's not like job interviews are common, everyday occurrences.

For presentations, usually you spend a lot of time preparing for them (similar to exams), building a slide deck or pages of notes that you refer to while giving the talk (not similar to exams). Sure, you do have to be able to think on your feet, but I don't think the comparison to a sit-down exam is all that apt.

For mundane work tasks, you have the internet and whatever reference materials you want (including LLMs, these days); this sort of thing is so different from a sit-down exam that it's almost comical that you'd try to equate the two.

I'm not saying I know of a better way to evaluate learning than proctored, in-person exams, but suggesting that sort of situation is particularly relevant to real life... no, no way.


Having been both a data analyst and a software engineer, I agree. The data analyst interview? Here's 50K rows of Excel with all kinds of weirdness in it - you're a data analyst, right? You have 4 hours to analyze this data. Go!

The software engineer one: here's a take-home assignment. One week later: finished!

To be fair, they both represented pretty well what work I was going to do. The data analyst one didn't show that well how much I'd also be doing data engineering, but whatever - I was a SWE before my DA stint. Back to SWE again now, though.


>you don't care if your doctor needs to look up the specific interactions of your various meds. You do care if you see them googling 'what is an appendix'.

Sounds like you're saying that it's acceptable to be a little foggy about the limits of your knowledge, as long as you remember the core foundations. For a first-year medical student, the edge of their knowledge will include things that are core foundations for a practicing doctor. Why should such a student be tested as if he already had several years of familiarity with the subject, when this is all relatively new material to him?


I don’t completely disagree with this, and I do appreciate the well-thought-out response. I should say that most of my recent relevant experience was as a grad student teaching assistant, so I’m neither just somebody grouching that I didn’t like taking exams, nor somebody with deep experience setting these things up. Some respectful quibbles, then, since I gather you have more direct experience than me.

> Proctored, in-person exams are the only reliable mechanism we have for ascertaining if a specific individual has mastered key fundamentals and can answer relevant questions about them in a relatively timely fashion. Everything else is details and thresholds - how fast do you need to be able to recall, how deep, what details are fundamental.

I don’t think this is how people actually engage with exams. I had a lot of folks in office hours who treated the exam as the ceiling of their competence rather than the floor, and did things like cram, or tried to figure out exactly which topics would be on the test so they could study just those. If the goal is to establish a 100% solid foundation in the things you have to know to be a professional (which I think is a great goal), I prefer something like Mastery Learning to the conventional exam process. (Maybe we could call Mastery Learning conventional exams with a different set of thresholds - unusual thresholds - if we want to look at it that way.)

> From there, I think it's fine to hate poorly made exams, and it's a given that many folks making exams have no idea what they're doing (or don't have the resources to do it right). But the premise of an exam is not completely divorced from reality.

I worked with some professors who I thought gave good exams and some who gave less good ones, so I don’t think the premise is completely divorced from reality. But the format seems more like something the good instructors overcame than a construct that is genuinely helpful.


So at 50%, someone who uses AI to get 100% of the homework grade will earn a D (sometimes passing) if they can get at least 20% on your quizzes, and a C (always passing) if they get at least 40%. Did you make your exam so difficult that students who truly didn't learn the material earn less than 20-40%? Because if it were, say, multiple-choice questions with four possible answers, you could expect them to earn at least 25% just by chance.
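To make the arithmetic concrete, here's a quick sketch (Python; the 50/50 weighting and the usual 60 = D / 70 = C cutoffs are my assumptions, read off the numbers above):

    # Final grade as a weighted average of homework and quiz percentages.
    def final_grade(homework_pct, quiz_pct, hw_weight=0.5):
        return hw_weight * homework_pct + (1 - hw_weight) * quiz_pct

    # A student who aces the homework (say, with AI) needs only modest
    # quiz scores to clear typical letter-grade cutoffs:
    print(final_grade(100, 20))  # 60.0 -> a D on a standard scale
    print(final_grade(100, 40))  # 70.0 -> a C on a standard scale
    print(final_grade(100, 25))  # 62.5 -> pure chance on 4-option MC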

My quizzes are written responses, pseudocode, and annotating code.

While that answers their direct question, they do bring up a good point - how often are you handing out sub-25% scores on exams? I'd imagine any professor who did that would get some severe criticism, the kind that would make even a cheater pretty livid.

https://scholar.google.com/citations?user=hpxl9PEAAAAJ&hl=en

The researcher seems to be real, at least? Perhaps the quote has not previously been written down?


It makes it easier to make sure the code runs right. Code that is easy to make sure of is quality code; code that is hard to make sure of is not.
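A toy illustration of what I mean, in Python (my own sketch, not anything from the parent): a small pure function is easy to make sure of, because you can check it in isolation.

    # Easy to make sure of: pure and deterministic, so one assert verifies it.
    def total_with_tax(prices, tax_rate):
        return round(sum(prices) * (1 + tax_rate), 2)

    assert total_with_tax([10.0, 20.0], 0.1) == 33.0

    # A version that read globals, hit the network, or mutated shared
    # state couldn't be checked without standing up a whole environment -
    # and that difficulty is itself a signal about the code's quality.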


Yes, exactly. I'm having a frustrating time reminding senior teachers of this, people with authority who should really know better. There seems to be some delusion that this technology will somehow change how people learn in a fundamental way.

