Hacker News | new | past | comments | ask | show | jobs | submit | org3's comments


I wasn't sure how to do this, so I looked into it:

If you (setq org-pretty-entities t), the following will render correctly in org buffers, and it exports correctly too.

The logs come to about \tilde{}300 lines once you start the server with ~systemctl start fnord~

Here's a star: \ast


Some people say we're near the end of pre-training scaling, and that RLHF and similar techniques are going to be more important in the future. I'm interested in trying out systems like https://github.com/OpenPipe/ART to train agents to work on a particular codebase and learn from my development logs and previous interactions with agents.


Can’t Pulumi be used to bring up infra and workloads as well?


I do!




I couldn't get "Designing Data-Intensive Applications" to explain to me how to design a graph database (from scratch, without using existing graph frameworks or technologies); it only suggested reasons why graph databases are useful and the properties I have to keep in mind while designing one. I want to know how I can build one in practice.

Using a prompt like "Tell me how to build a graph database from scratch. Specifically, how to design the data model, implement the data storage layer, and design the query language." only gives a very vague answer. Sometimes it suggests using existing technologies.
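For concreteness, here's the kind of answer I was hoping for: a toy sketch of the three pieces the prompt asks about (data model, storage layer, one-hop traversal as the query primitive). All names are illustrative, and a real engine would of course add indexes, persistence, and a proper query planner.

```python
# Toy property graph: a node table plus adjacency lists for edges.
from collections import defaultdict

class TinyGraphDB:
    def __init__(self):
        self.nodes = {}                      # data model: node_id -> property dict
        self.out_edges = defaultdict(list)   # storage layer: node_id -> [(label, dst_id)]

    def add_node(self, node_id, **props):
        self.nodes[node_id] = props

    def add_edge(self, src, label, dst):
        self.out_edges[src].append((label, dst))

    def neighbors(self, node_id, label=None):
        """One hop of a traversal -- the core 'query language' primitive
        that richer query languages compose into multi-hop patterns."""
        return [dst for (lbl, dst) in self.out_edges[node_id]
                if label is None or lbl == label]

db = TinyGraphDB()
db.add_node("alice", kind="person")
db.add_node("acme", kind="company")
db.add_edge("alice", "works_at", "acme")
print(db.neighbors("alice", "works_at"))  # ['alice' -> 'acme' hop]
```

The design choice that makes this "graph-shaped" rather than relational is that edges are stored adjacent to their source node, so a traversal is a dict lookup instead of a join.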

Anyone know what I'm missing?


I don't really think that book is about building a graph database from scratch


You're probably right.

One of my initial prompts mentioned graph databases as an example of a scalable system, so I wanted to ask it about the design properties that make it so. I figured that because it was a book about designing systems, it could give me an outline of how a graph database works in practice.

It's pretty annoying how the site erases your prompt once you receive your output. By the time it finishes loading I've half forgotten what my original question was.


Incredible results for my questions. Do these work by finding similar pieces of text in a vector DB and then embedding those pieces in the prompt? The answers I'm getting seem comprehensive, as if it has considered large amounts of the book's text, and I'm curious how this works given the OpenAI token limit. I've heard this is what tools like LangChain can help with, so maybe I should play around with that, since this all seems like a mystery to me.
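If it is retrieval-augmented, the loop might look roughly like this toy sketch: embed the book chunks, retrieve the ones most similar to the question, and put only those in the prompt so it fits under the token limit. The bag-of-words "embedding" here is just a stand-in for a real embedding model, and all names are illustrative.

```python
# Minimal retrieval-augmented prompting: rank stored chunks by similarity
# to the question, then include only the top-k in the prompt.
import math
from collections import Counter

def embed(text):
    """Stand-in embedding: a bag-of-words vector. Real systems would call
    a learned embedding model and store vectors in a vector DB."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

book_chunks = [
    "Graph databases store nodes and edges for connected data.",
    "Column stores lay out values by column for analytics.",
    "Replication copies records across machines for fault tolerance.",
]

def build_prompt(question, k=1):
    q = embed(question)
    ranked = sorted(book_chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    context = "\n".join(ranked[:k])  # only the top-k chunks reach the model
    return f"Context:\n{context}\n\nQuestion: {question}"

print(build_prompt("How do graph databases handle connected data?"))
```

The token-limit trick is entirely in `k`: the model never sees the whole book, only the few chunks that scored highest against the question.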


For more context, hn post by maker here: https://news.ycombinator.com/item?id=34635338


Wow! I guess some of the answers to the questions I tried were pretty generic, but I can already see value in it, and it's only the beginning.


Some of the responses I've had so far to this are remarkable. Kind of scary.


How legal is something like this?


Genuinely unknown at this time. At some point this will be litigated in court, and if the parties don't end up settling, we'll then have some precedent that can answer your question.


Fascinating, thanks

