Hacker News | hokkos's comments

Why is everyone mad if they have better guarantees than Anthropic used to have?

Because they don't.

The blog post states that they do, then proceeds to describe much less restrictive terms.


Most CVEs now are pure spam with no value. All I get is dev dependencies affected by regexes that could take too long (ReDoS); scanners should do a better job of differentiating between runtime dependencies and dev dependencies.


This AI is too heavily "aligned" to return anything of value, considering the content it has to look into and the questions it needs to answer.


Bring your own key and use Claude. We found that it's most willing to run deep research queries here.


Which model is being used? Is there an unrestricted open weight model that could be used?


What do you mean?

As in, OpenAI, Anthropic, and Google's models won't follow instructions regarding forensics for this?


Who here is naive enough to think that this little loophole hasn't been nicely tied off?


They should use Grok; it feels like the most open of the big four.


If AI is so productive, why do they even sell it instead of hoarding it for themselves to build a competing offering to everything?


No one is claiming that level of productivity.


Oh yes they are. People are claiming 100x improvements, which is completely insane. But they do claim it.


OK sure, there are always lunatics on the fringe, but OP is casting that argument out as if they were attacking a mainstream opinion.


Looking at the code examples, I don't see the point of JSX; it seems to decrease type safety and typing completion.


I use https://typespec.io to generate OpenAPI; writing OpenAPI YAML quickly became horrible past a few APIs.
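
To illustrate the terseness, here is a minimal sketch of a TypeSpec service definition (names like `Widget` are illustrative; syntax follows the typespec.io tutorial and decorator details may vary between versions):

```typespec
import "@typespec/http";

using TypeSpec.Http;

@service
namespace WidgetService;

model Widget {
  id: string;
  weight: int32;
  color: "red" | "blue";
}

@route("/widgets")
interface Widgets {
  @get list(): Widget[];
  @get read(@path id: string): Widget;
  @post create(@body widget: Widget): Widget;
}
```

Compiling this with `tsp compile . --emit @typespec/openapi3` emits the equivalent OpenAPI 3 YAML, which is many times longer by hand.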


Ha, yes, see one of my other comments to another reply.

I never got to use it when I last worked with OpenAPI, but it seemed like the antidote to the verbosity. Glad to hear someone had a positive experience with it. I'll definitely try it next time I get the chance.


It's because you only do it once per project.


Some libs literally publish a new package for every merged PR, so multiple times a day.


It reminds me of EXI compression for XML, which can be heavily optimized given an XSD schema: schema-aware compression uses the schema grammar to achieve optimal encoding: https://www.w3.org/TR/exi-primer/


I also have an elegant proof, but it doesn't quite fit in an HN comment.


No support for symbols, amirite?

