Hacker News | lacunary's comments

probably dominated by the cup acting as the ambient temperature initially, and then by the air/countertop as the ambient temperature on the longer time scale, once the cup and the liquid near equilibrium
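The two-time-scale picture above can be sketched as a toy two-compartment model: the liquid exchanges heat with the cup quickly, and the cup leaks heat to the air slowly. All names, rate constants, and temperatures here are invented for illustration, not taken from any actual measurement.

```python
def simulate(t_liq=90.0, t_cup=20.0, t_air=20.0,
             k_lc=0.05, k_ca=0.005, dt=1.0, steps=3600):
    """Euler-integrate liquid<->cup and cup<->air heat exchange.

    k_lc (fast) couples liquid to cup; k_ca (slow) couples cup to air.
    Returns a list of (t_liq, t_cup) samples over time.
    """
    history = []
    for _ in range(steps):
        d_lc = t_liq - t_cup      # liquid-to-cup temperature gradient
        d_ca = t_cup - t_air      # cup-to-air temperature gradient
        t_liq -= k_lc * d_lc * dt
        t_cup += (k_lc * d_lc - k_ca * d_ca) * dt
        history.append((t_liq, t_cup))
    return history

hist = simulate()
```

With these made-up constants, the liquid first relaxes toward the cup temperature on the fast k_lc time scale; once the two are close, the pair drifts together toward the air temperature on the much slower k_ca time scale.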

"In this phase 2b, multicenter, randomized, double-blind, placebo-controlled study of 4 dose levels of MM120 that included 198 adults with generalized anxiety disorder, the primary outcome of a dose-response relationship for change in Hamilton Anxiety Rating Scale score at week 4 was statistically significant."

Great to see someone else who loves those three. The first two I learned from my dad, although he only listened to one album from each of them, on repeat! Tool I learned from friends. That was the real recommendation system back in the day - close friends and family who you shared car rides with.

same thing happened to me last year, except at a brush drop-off rather than a library; analog binder and all!


what does "secure environment" mean?


Not OP, but I guess it’s where the threat model includes worrying about foreign government actors. Like US infrastructure, government contracting, or some major tech companies.


does your workplace allow recording coworkers without their permission?


In the office? No. But lunch, or anything outside the office, isn't controlled by workplace policy.


but that takes more tokens and time. if you just save the raw log, you can always do that later if you want to consume it. plus, having the full log allows asking many different questions later.


What's the difference between comprehending and understanding in this context?


"Understanding" is a metaphor, used to describe an upper bound on model capability without excess verbiage. "Comprehending" includes the ability to appropriately manipulate the concepts when they're taken out of their ordinary framing context, which in principle a transformer model should be able to mimic a lot better than the systems we have; but in practice the training processes we're using do not teach the models to do this.


I've heard that spammers create large numbers of fake accounts and then sit on them for years, just to bypass these types of age-based schemes. I guess you could augment that by checking for some level of human-like activity before the cutoff date.
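The augmented check described above could look something like this. Everything here is a hypothetical sketch: the `Account` shape, the `is_trusted` name, and the thresholds are all invented to illustrate the idea of requiring organic-looking activity *before* the trust cutoff, not just account age.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class Account:
    created: datetime
    # timestamps of actions (posts, votes, logins) by this account
    activity: list = field(default_factory=list)

def is_trusted(acct, now, min_age_days=365, min_prior_actions=10):
    """Age alone is gameable by pre-registering accounts, so also
    require a minimum amount of activity recorded before the cutoff."""
    cutoff = now - timedelta(days=min_age_days)
    if acct.created > cutoff:
        return False                          # account too young
    prior = [t for t in acct.activity if t <= cutoff]
    return len(prior) >= min_prior_actions    # must have looked alive back then

now = datetime(2025, 1, 1)
dormant = Account(created=datetime(2020, 1, 1))            # aged, but idle
active = Account(created=datetime(2020, 1, 1),
                 activity=[datetime(2021, m, 1) for m in range(1, 13)])
```

A warehoused account that merely aged fails the check, while one with a plausible activity trail before the cutoff passes.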


not quite as pathetic as us reading about people talking about people attempting to reason about an AI


No, I disagree.

Reasoning with AI achieves at most changing that one agent's behavior.

Talking about people reasoning with AI might dissuade many people from doing it.

So the latter might have way more impact than the former.


> Reasoning with AI achieves at most changing that one agent's behavior.

Wrong. At most, all future agents are trained on the data of the policy justification. Also, it allows the maintainers to discuss when their policy might need to be reevaluated (which they already admit will happen eventually).


> Reasoning with AI achieves at most changing that one agent's behavior.

Does it?


As long as it remains in the context window.


Hopefully


You can be fairly sure that it does change its behavior, you just don't know how ;)

