Hacker News | datacynic's comments

I also suspect there are many "slow-moving", Microsoft-heavy enterprises with in-house devs who can't get anything but Copilot approved, and Microsoft trusts this will remain so.

It's not turning consumption-based, because a ton of these licenses are just sitting idle.


As a single data point, this is absolutely true. At my current "Big Corp", Copilot was immediately approved while Claude is entering month 2 or 3 of trying to get approval.

Additionally, we got Copilot for every user, including those who never write code or use AI tools.


I like this Tufte quote from https://www.edwardtufte.com/notebook/book-design-advice-and-...:

It is also notable that the Feynman lectures (3 volumes) write about all of physics in 1800 pages, using only 2 levels of hierarchical headings: chapters and A-level heads in the text. It also uses the methodology of sentences which then cumulate sequentially into paragraphs, rather than the grunts of bullet points. Undergraduate Caltech physics is very complicated material, but it didn’t require an elaborate hierarchy to organize.

I think about it a lot when reading markdown-feature-driven writing, or when catching myself doing it.


Writing documentation for LLMs is strangely pleasing because you get very linear returns for every bit of effort spent improving its quality, and the feedback loop is very tight. When writing for humans, especially internal documentation, I’ve found that these returns quickly diminish or even turn negative, as it’s difficult to know whether people even read it, whether they understood it, or whether it was incomplete.


DuckLake is more comparable to Iceberg and Delta than to raw Parquet files. Iceberg requires a catalog layer too, a file-system-based one at its simplest. For DuckLake, any RDBMS will do, including file-based ones like DuckDB and SQLite. The difference is that DuckLake uses that database, with all its ACID goodness, for all metadata operations, so there is no need to implement transactional semantics over a REST or object-storage API.
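To make the "ACID metadata catalog" idea concrete, here is a minimal sketch of the pattern, not DuckLake's actual schema: the catalog is just tables in any transactional database (SQLite here as a stand-in), while the data lives in immutable Parquet files elsewhere. The `snapshots`/`data_files` tables, the `commit_snapshot` helper, and the S3 paths are all hypothetical illustrations.

```python
import sqlite3

# Hypothetical mini-catalog: metadata lives in an ACID database,
# data stays in immutable Parquet files on object storage.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE snapshots (id INTEGER PRIMARY KEY, committed_at TEXT);
    CREATE TABLE data_files (snapshot_id INTEGER, path TEXT);
""")

def commit_snapshot(con, paths):
    # One database transaction covers the whole metadata change: either
    # the new snapshot and all its file entries appear, or none do.
    # No transactional semantics needed on the object store itself.
    with con:  # sqlite3 context manager commits or rolls back atomically
        cur = con.execute(
            "INSERT INTO snapshots (committed_at) VALUES (datetime('now'))")
        snap_id = cur.lastrowid
        con.executemany(
            "INSERT INTO data_files (snapshot_id, path) VALUES (?, ?)",
            [(snap_id, p) for p in paths])
    return snap_id

snap = commit_snapshot(con, ["s3://bucket/part-0.parquet",
                             "s3://bucket/part-1.parquet"])
```

Readers see a consistent snapshot by joining `data_files` against a single `snapshot_id`, which is the same trick Iceberg plays with manifest files, just done in SQL.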


https://www.sirlin.net/articles/designing-defensively-guilty...

This is probably it. As the article says, it’s also a cleverly layered mechanic in that it rewards a player who correctly predicts when their opponent will use it.


Yup, that’s the one. Idk why I always forget about Sirlin, but it’s good stuff.


I was also nerd-sniped into trying this and found that, after extracting the features array into a newline-delimited JSON file, DuckDB finishes the example query in 500 ms (M1 Mac), querying the 1.3 GB JSON file directly with read_json!
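The extraction step above can be sketched like this. The tiny inline document and the `features.ndjson` filename are illustrative stand-ins; for a genuinely large file you would stream-parse rather than `json.load` the whole thing into memory.

```python
import json

# Hypothetical input: a GeoJSON-style document whose "features" array
# we want as newline-delimited JSON, one object per line, so DuckDB's
# read_json can scan it directly.
doc = {
    "type": "FeatureCollection",
    "features": [{"id": 1, "name": "a"}, {"id": 2, "name": "b"}],
}

with open("features.ndjson", "w") as out:
    for feature in doc["features"]:
        out.write(json.dumps(feature) + "\n")

# Then, in DuckDB:
#   SELECT * FROM read_json('features.ndjson');
```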

