Hacker News | jeeybee's comments

No config step, the tools discover everything from pg_catalog at call time. list_schemas → list_tables → describe_table is the typical agent workflow, and there's a query_guide prompt baked in that suggests that progression.

On query guardrails: every query runs in a readonly transaction and results are capped at 500 rows via a wrapping `SELECT * FROM (...) AS sub LIMIT 500`. There's also explain_query, which returns the plan without executing, so agents can check before running something expensive. That said, there's no cost-based gate that blocks a bad plan automatically; that's an interesting idea worth exploring.
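The wrapping trick is easy to sketch. Here's a minimal standalone version of the idea (`cap_query` is a hypothetical helper for illustration, not pglens's actual code):

```python
def cap_query(sql: str, limit: int = 500) -> str:
    """Wrap an arbitrary SELECT so it can never return more than `limit` rows."""
    return f"SELECT * FROM ({sql}) AS sub LIMIT {limit}"

# With asyncpg, the read-only guarantee comes from the transaction itself,
# so even a sneaky write statement is rejected by Postgres:
#
#   async with conn.transaction(readonly=True):
#       rows = await conn.fetch(cap_query(user_sql))
```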


Thanks! Read-only felt like the obvious constraint; agents shouldn't need write access to understand a database.

Most Postgres MCP servers expose query and list_tables. Agents end up guessing column values, enum casing, and join paths - then retrying until something works.

pglens gives agents the context to get it right the first time: column_values shows real distinct values with counts, find_join_path does BFS over the FK graph and returns join conditions through intermediate tables, describe_table gives columns/PKs/FKs/indexes in one call. Plus production health tools like bloat_stats, blocking_locks, and sequence_health.
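The join-path tool boils down to a breadth-first search over the foreign-key graph. A simplified, self-contained sketch (the adjacency-map format and table names are illustrative, not pglens internals):

```python
from collections import deque

def find_join_path(fk_graph, start, goal):
    """BFS over a {table: [fk-linked table, ...]} adjacency map.

    Returns the shortest chain of tables linking start to goal,
    or None if no FK path exists.
    """
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in fk_graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

# Toy schema: customers and products are only linked through two hops.
graph = {
    "customers": ["orders"],
    "orders": ["customers", "order_items"],
    "order_items": ["orders", "products"],
    "products": ["order_items"],
}
```

On the toy graph above, `find_join_path(graph, "customers", "products")` walks customers → orders → order_items → products, which is the chain of join conditions an agent would otherwise have to guess.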

Everything runs in readonly transactions, identifiers escaped via Postgres's quote_ident(), no extensions required. Works on any Postgres 12+ (self-hosted, RDS, Aurora, etc.). Two dependencies: asyncpg and mcp.

https://github.com/janbjorge/pglens

pip install pglens


I maintain a small Postgres-native job queue for Python called PGQueuer: https://github.com/janbjorge/pgqueuer

It uses the same core primitives people are discussing here (FOR UPDATE SKIP LOCKED for claiming work; LISTEN/NOTIFY to wake workers), plus priorities, scheduled jobs, retries, heartbeats/visibility timeouts, and SQL-friendly observability. If you’re already on Postgres and want a pragmatic “just use Postgres” queue, it might be a useful reference / drop-in.


If you like the “use Postgres until it breaks” approach, there’s a middle ground between hand-rolling and running Kafka/Redis/Rabbit: PGQueuer.

PGQueuer is a small Python library that turns Postgres into a durable job queue using the same primitives discussed here — `FOR UPDATE SKIP LOCKED` for safe concurrent dequeue and `LISTEN/NOTIFY` to wake workers without tight polling. It’s for background jobs (not a Kafka replacement), and it shines when your app already depends on Postgres.
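The claiming pattern is compact enough to show inline. This is a generic sketch of the primitive (not PGQueuer's actual schema or API; the `jobs` table and columns are hypothetical): the subquery locks one queued row, `SKIP LOCKED` makes concurrent workers skip rows another worker already holds, and `RETURNING` hands the claimed job back atomically.

```python
# Generic FOR UPDATE SKIP LOCKED dequeue, as a single atomic statement.
DEQUEUE_SQL = """
UPDATE jobs
   SET status = 'picked'
 WHERE id = (
         SELECT id
           FROM jobs
          WHERE status = 'queued'
          ORDER BY priority DESC, id
            FOR UPDATE SKIP LOCKED
          LIMIT 1
       )
RETURNING id, entrypoint, payload;
"""

async def dequeue(conn):
    # With asyncpg, fetchrow returns None when every queued row
    # is locked by another worker -- i.e. nothing to do right now.
    return await conn.fetchrow(DEQUEUE_SQL)
```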

Nice-to-haves without extra infra: per-entrypoint concurrency limits, retries/backoff, scheduling (cron-like), graceful shutdown, simple CLI install/migrations. If/when you truly outgrow it, you can move to Kafka with a clearer picture of your needs.

Repo: https://github.com/janbjorge/pgqueuer

Disclosure: I maintain PGQueuer.


I’ve always loved Slack. It’s been core to how we work, and I’ve recommended it to countless others.

But seeing how they just treated Hack Club — sudden 40x price hike, almost no notice, threatening to cut off access and delete 11 years of history — makes me wonder if we should rethink where we build our work.

I don’t want to leave Slack. But I also don’t want to wake up one day with our team’s history held hostage.


pgqueuer turns vanilla PostgreSQL into a horizontally scalable job queue with zero extra infrastructure. If you're running Postgres and want to ditch extra queue clusters, give it a spin and let me know how it holds up.

https://github.com/janbjorge/pgqueuer


I've observed something interesting: my toddler can quickly and somewhat effortlessly solve a simple physical puzzle, yet when I try to prompt large language models (LLMs) to guide me clearly through the solution, they struggle.

Even with clear photographs and detailed instructions, LLMs tend to give general advice or indirect methods rather than precise, actionable steps that clearly demonstrate correctness.

Have you tried something similar? Have you successfully "prompt-engineered" an LLM into giving clear, precise, step-by-step solutions for physical puzzles? If yes, what's your approach?


Kudos to the author for diving in and uncovering the real story here. The Python 3.14 tail-call interpreter is still a nice improvement (any few-percent gain in a language runtime is hard-won), just not a magic 15% free lunch.

More importantly, this incident gave us valuable lessons about benchmarking rigor and the importance of testing across environments. It even helped surface a compiler bug that can now be fixed for everyone’s benefit. It’s the kind of deep-dive that makes you double-check the next big performance claim. Perhaps the most thought-provoking question is: how many other “X% faster” results out there are actually due to benchmarking artifacts or unknown regressions? And how can we better guard against these pitfalls in the future?


I guess the bigger question for me is, how was a 10% drop in Python performance not detected when that faulty compiler feature was pushed? Do we not benchmark the compilers themselves? Do the existing benchmarks on the compiler or python side not use that specific compiler?


The author makes this point, too, and I agree it’s the most surprising thing about the entire scenario.

LLVM introduced a major CPython performance regression, and nobody noticed for six months?


As far as I am aware, the official CPython binaries on Linux have always been built with GCC, so you would have to build your own CPython with both Clang 18 and 19 to notice the difference. I think that's partly why no one has noticed the regression until now.


This post is a perfect reminder of Gandhi’s advice: "Be the change you want to see in the world." We can’t expect kids to trust or wait for better outcomes unless we first model consistency and patience ourselves. Our everyday actions—keeping promises, showing reliability—are the real lessons that shape their future.


That's the best and simplest way to put it. All these how-to-teach-kids-X methods seem like quite a bit of overanalysis.

