Hacker News: austin-cheney's comments

1. AI is the new low barrier to entry. Before it was frameworks. Unskilled people wanting to participate in the software profession will still find a way to include themselves.

2. Politics is not a problem here as other comments claim. The problem is more religion-like obsessions. The most intense I remember seeing was anything about Julian Assange. The current one is defending Israel, or related topics of racism/genocide/nationalism.

3. I stopped looking at YC startups once 90% of them became AI startups.

4. It’s not just you. Occasionally people will post search tools that provide alternate UI for HN submissions with everything AI removed. While some AI submissions are valid most are just noise.


Some people prefer evidence before investing large amounts of money and labor. That is not an indication of irrational behavior, even if it challenges an opinion or result you are emotionally invested in.

I have noticed metrics make some people happy and others miserable. It’s dependent on personality and divergence.

That sounds like poor signaling in that you think you are doing all the correct things but all evidence points to the contrary.

Instead of focusing on the trends, you might look at qualifications like education, certifications, security clearance, skill expertise, open source contributions, and so forth. Trends are a gravity. I recommend distancing yourself from the crowd to uniquely stand out. Then, as edge-case opportunities open, recruiters come to you.


When I was writing JavaScript full time I wanted out. I was willing to do anything else for a fraction of the pay. JavaScript work is extremely low barrier to entry which means there are a great many people doing that work that probably shouldn’t be remotely close to any software profession.

Now I am managing a team of developers doing enterprise API management. I am still a developer with development assignments. I spend very little time writing code. Most of my time is spent working with people to gather requirements, handle operational concerns, and troubleshoot problems spanning multiple teams. I also have to spend a regular amount of time on administrative tasks as a manager.

This new profession is so much better. Jumping between the high-level concerns of managing a large team and getting into the weeds as an individual contributor gets exhausting. I really think the primary reason this work is so much better is that the barrier to entry is substantially higher, so you are surrounded by better people.


Two qualifiers.

Copyright applies to the work artifact and not the execution context, which is why source code has enforceable copyright protection while copyright isn't really enforceable against binaries. It's also why binaries are licensed separately from the source code, such as an EULA versus a code license.

Secondly, existing US case law says bots and AI cannot receive copyright protection as code authors. In the US all source code is protected by copyright by default, as is any original written work, but if it can be proven software was written by AI then the work becomes indefensible, which is a void in the law. It's not the same as public domain. The only distinction is in regard to second- and third-order consequences, which are clear for public-domain works but not so clear for indefensible works.


My browser CTRL+F could not find "naysays" in the linked thread, and yes I did expand the flagged comments.

The name of the game is framing. You don't talk about simplicity, because most people don't really understand what simplicity is. They falsely equate it to easy.

Instead you talk about how you complete all your tasks and have so much bandwidth remaining compared to all your peers: the beneficial results of simplicity. Being severely underused while demonstrating the ability to do 2x-10x more work than everybody else is what gets you promoted.

In this vein simplicity is like hard work. Nobody gives a shit about hard work either. Actually, if all you have to show is how hard you work, you are a liability. Instead it's all about how little you work, provided you accomplish the same, or more, than everybody else.


Exactly, simplicity is a subjective term. Some think of it as in Clean Code, where codebases end up as one-liner functions or overly formal lasagna code with many clean-feeling layers, but they can't see the resulting complexity in the overarching architecture.

Except you spend extra time making your code simple, rather than slapping together something that requires extra maintenance from the eventual owners.

Ideally we need metrics saying, "my projects require 30% less support or 50% less brainpower than comparable projects". Things like "average cyclomatic complexity", etc.


Simplicity is like any form of automation in that there is always an expensive upfront cost. The automation pays for itself by reducing time per interval, so it's only a matter of when break-even occurs, derived from the savings per interval multiplied by the number of intervals in a given period.
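The break-even arithmetic can be sketched in a few lines (the numbers below are made up for illustration):

```javascript
// Break-even for an upfront simplification cost: the investment pays for
// itself after upfrontCost / savingsPerInterval intervals.
// All figures are hypothetical.
const upfrontCost = 40;        // hours spent simplifying the code
const savingsPerInterval = 2;  // maintenance hours saved per week
const intervalsPerYear = 52;   // weekly intervals

const breakEvenIntervals = Math.ceil(upfrontCost / savingsPerInterval);
const firstYearSavings = savingsPerInterval * intervalsPerYear - upfrontCost;

console.log(breakEvenIntervals); // 20 weeks to break even
console.log(firstYearSavings);   // 64 hours net saved in year one
```

After the break-even point every further interval is pure savings, which is why the upfront cost only looks expensive early on.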

I am going to write an original in-memory database in JavaScript. I hate SQL and believe I can write something that executes faster than existing solutions while also feeling natural to JavaScript: storage and search via objects and arrays.

Interesting project. A few questions that came to mind:

How do you handle GC pressure at scale? V8's hidden classes make homogeneous object arrays fast, but the per-object overhead adds up: 100K entries is already 6-8 MB of metadata alone, and major GC pauses become unpredictable.

What's the persistence story? The moment you serialize to IndexedDB or OPFS, the "native structures" advantage disappears. Have you looked at columnar formats to keep it fast?

How do you handle compound queries without a planner? Something like "age > 30 AND city = 'Paris' ORDER BY name" needs an index selection strategy, otherwise you're back to full scans.

The part I find most compelling is reactive queries: define a filter, then as objects land in the store (from DOM extraction, a WebSocket, whatever), results update incrementally via Proxy interception. No re-scan. That's not really a database, it's a live dataflow layer.

Concrete example: a browser extension that extracts product data from whatever page you're on. Each page dumps heterogeneous objects into the store. A reactive query like "items where price < 50 and source contains 'amazon'" updates in real time as you browse. No server, no SQL, just JS objects flowing through live filters. That would be genuinely useful and hard to do well with existing tools.

I have not gotten far enough for that kind of load testing. I am working on this thing, but it's still incomplete. My experience with GC-related issues is that the frequency of calls is more of a concern than the size of the calls. So I would have to monitor for memory spikes, which I can do from Node but not so much from the browser.

Quick question before going further: is this an exercise in language internals, or do you have a concrete use case in mind?

Asking because the answer changes the architecture significantly. If you're targeting live in-page data — extracting objects from the DOM as you browse, filtering them reactively — you may not need storage at all.

A Proxy-based observation layer gives you reactive queries without allocating anything new: the objects already exist in the tab's heap, you're just watching them mutate. No GC pressure, no persistence headaches, no query planner needed. That covers most of what you described: "items where price < 50 updates as you browse" is an event subscription with pattern matching, not a database problem.
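A minimal sketch of what such a Proxy-based live query could look like. The `liveQuery` helper and its API are hypothetical, not an existing library; the point is that membership in the result set is re-evaluated inside the `set` trap, so no re-scan is ever needed:

```javascript
// Hypothetical live-query helper: wrap tracked objects in a Proxy and
// re-check the predicate whenever a property is written.
function liveQuery(predicate) {
  const results = new Set();
  function track(obj) {
    const proxy = new Proxy(obj, {
      set(target, prop, value) {
        target[prop] = value;
        // Re-evaluate membership on every mutation, no full scan.
        if (predicate(target)) results.add(proxy);
        else results.delete(proxy);
        return true;
      }
    });
    if (predicate(obj)) results.add(proxy);
    return proxy;
  }
  return { track, results };
}

const { track, results } = liveQuery(item => item.price < 50);
const item = track({ name: "gadget", price: 80 });
console.log(results.size); // 0: price too high, not in the result set
item.price = 35;           // mutation flows through the Proxy set trap
console.log(results.size); // 1: item now matches the filter
```

This covers the "results update as objects mutate" case; reacting to new objects arriving is just calling `track` on each one as it lands.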

The cases where you actually need storage — and therefore need to think about heap budgets, GC, serialization, query planning — are narrower:

* Cross-session persistence (you want the data after the tab closes)

* Cross-tab aggregation (comparing prices across multiple open tabs simultaneously)

* Queries over historical data (not just what's on screen now, but what you saw across 20 pages of browsing)

Those are real storage problems.

But they're also the cases where you're competing with IndexedDB, OPFS, and SQLite WASM — and "I hate SQL" stops being enough of a reason to rebuild from scratch. What's the actual workflow you're trying to support?


It is mostly experimental, but there is a very tiny valid use case.

I have a strictly personal application at: https://github.com/prettydiff/aphorio

In that project I have various data artifacts stored in memory that I am constantly having to query in various ways:

* sockets per server

* servers and their sockets

* ports in use and by what

* docker containers and their active state

* various hardware and OS data lists

Currently all this data is just objects attached to a big object, all defined by TypeScript interfaces. I am storing the information just fine, but getting the information I need for a particular task, and in that task's format, requires a variety of different logic and object definitions in the form of internal services.

Instead it would be nice to have information stores that contain all I need, the way a SQL database does with tables. Except I hate the SQL language, and it's not as fast as you would think. Last I saw, the fastest SQL-based database is SQLite, and it's really only 3x faster than using primitive read/write streams to the file system. I can go much faster by not dicking around with language constructs.

My proposal is to store the database in something that vaguely resembles a database table but is really just JavaScript objects/arrays in memory as part of the application's current runtime and that can return artifacts in either object or array format. Query statements would be a format of JavaScript objects. I could have a table for server data, socket data, port data, and each record links back to records in other tables as necessary, kind of like SQL foreign keys. So in short: stores, functions to do all the work, and storage format that can be either objects or arrays and take both objects and arrays as queries.

The reason I want to store the data as both objects and arrays is a performance hack described by Paul Heckel in 1978. The stores would actually be a collection of objects against a unique primary key that can be referenced as though it were an array.
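That dual representation can be sketched in a few lines. The `createStore` helper below is hypothetical; the trick is that both views hold references to the same record objects, so keyed lookup and ordered scanning coexist without duplicating data:

```javascript
// Hypothetical dual-view store: the same records reachable by primary key
// (object lookup) and by position (array scan/filter).
function createStore(primaryKey) {
  const byKey = Object.create(null); // O(1) lookup by key
  const list = [];                   // ordered, scan-and-filter friendly
  return {
    insert(record) {
      const key = record[primaryKey];
      if (key in byKey) throw new Error(`duplicate key: ${key}`);
      byKey[key] = record; // same reference in both views
      list.push(record);
      return record;
    },
    get: key => byKey[key],
    filter: fn => list.filter(fn)
  };
}

const servers = createStore("id");
servers.insert({ id: "a1", port: 8080 });
servers.insert({ id: "b2", port: 9090 });
console.log(servers.get("a1").port);                    // 8080, keyed access
console.log(servers.filter(s => s.port > 8500).length); // 1, array-style scan
```

Cross-table references then reduce to storing the other table's primary key on the record, much like a foreign key.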


Three real risks in your current approach before anything else:

* Shared references mutate silently: in JS, objects passed between your "tables" are aliases, not copies, so a mutation in one place propagates everywhere with no transaction and no rollback.

* No atomicity: Node is single-threaded, but async I/O means two callbacks can interleave writes on the same structure with no guarantee a multi-step update lands cleanly.

* Everything disappears on crash: for socket/port/container state that's probably fine since it's observable from the system anyway, but you have no history.
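The aliasing risk is easy to demonstrate. In this sketch (the two-table layout is illustrative), both "tables" hold the same object, so a write through one is visible through the other:

```javascript
// One socket record aliased by two "tables": a per-server list and an
// id index. Both Maps hold the same object reference.
const socket = { id: "s1", state: "open" };
const socketsByServer = new Map([["srv1", [socket]]]);
const socketsById = new Map([["s1", socket]]);

socketsById.get("s1").state = "closed";
// The other table sees the change: no copy, no transaction, no rollback.
console.log(socketsByServer.get("srv1")[0].state); // "closed"

// structuredClone (Node 17+) gives an isolated snapshot when wanted:
const snapshot = structuredClone(socket);
socket.state = "open";
console.log(snapshot.state); // still "closed", detached from the original
```

Whether that aliasing is a bug or a feature depends on the table: for live socket state it may be exactly what you want, but it rules out multi-step updates that must land atomically.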

That said, you may not need to leave your stack at all. V8's native Map is already a key-value store — O(1) reads, no overhead, typed in TypeScript. Your "tables" are just Maps and cross-referencing is composite string keys:

sockets.set(`${serverId}:${socketId}`, socketData). No library, no dependency, no SQL. This covers your use case as described.
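A sketch of that pattern with hypothetical helper names, where a prefix scan over composite keys stands in for a "sockets per server" query:

```javascript
// Tables as Maps, cross-references as composite string keys.
// The `${serverId}:${socketId}` key format is an illustrative convention.
const sockets = new Map();

function addSocket(serverId, socketId, socketData) {
  sockets.set(`${serverId}:${socketId}`, socketData);
}

// "All sockets for a server" becomes a prefix scan over the key space.
function socketsForServer(serverId) {
  const prefix = `${serverId}:`;
  const out = [];
  for (const [key, value] of sockets) {
    if (key.startsWith(prefix)) out.push(value);
  }
  return out;
}

addSocket("web1", "a", { state: "open" });
addSocket("web1", "b", { state: "closed" });
addSocket("web2", "c", { state: "open" });
console.log(socketsForServer("web1").length); // 2
console.log(sockets.get("web2:c").state);     // "open", direct keyed lookup
```

The prefix scan is O(n) over the Map, which is fine at the scale of sockets and ports; a secondary Map per server would make it O(1) if that ever mattered.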

If you want ACID transactions and persistence without SQL, look at lmdb-js — a Node binding on LMDB, the fastest embedded KV store in existence, zero-copy reads, used in production for 20 years. Your tables become named databases, your records are typed values, your cross-references are composite keys. Same mental model you're building, with 20 years of correctness guarantees underneath.

What's the actual reason for building from scratch rather than using native Map for the in-memory case?


Just start building things. Every once in a while you will build something no one else can. It’s not that you are the most brilliant person ever but that you have found a niche that isn’t trendy. When that niche also solves real problems my interest grows really high.
