Hacker News | rav's comments

Given a commit that both refactors (A) and adds a feature (B), you can go into the codebase and remove the new feature by hand (B^-1), commit the feature removal, and immediately revert the feature removal. This leads to three commits: (A B), B^-1, and B. Squash the first two commits to obtain a commit that only refactors, and another commit that only adds the new feature. I've written more about this technique ("the Hammer") here: https://github.com/Mortal/gittalk
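The steps above, on a toy repo (all names are illustrative), might look like:

```shell
# Toy repo demonstrating the Hammer.
set -e
cd "$(mktemp -d)" && git init -q
git config user.email you@example.com && git config user.name you
printf 'old_helper()\n' > lib.txt
git add lib.txt && git commit -qm 'base'

# One mixed commit (A B): a refactor (rename) plus a new feature line.
printf 'new_helper()\nfeature()\n' > lib.txt
git commit -qam 'refactor + feature (A B)'

# Step 1: remove the feature by hand and commit the removal (B^-1).
printf 'new_helper()\n' > lib.txt
git commit -qam 'temporarily remove feature (B^-1)'

# Step 2: immediately revert the removal; this re-adds the feature (B).
git revert --no-edit HEAD

# Step 3: squash (A B) and B^-1. Automated here; normally you'd mark the
# B^-1 line as "fixup" in the rebase UI and reword the squashed message.
GIT_SEQUENCE_EDITOR='sed -i 2s/^pick/fixup/' git rebase -i HEAD~3
```

After the rebase, history is: base, a refactor-only commit, and a feature-only revert commit.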

In TypeScript it's called "bivariance", which sounds very programming-language-theory-like, but it is not a term typically used in academic contexts, since it is unsound almost by default. It's described here: https://www.typescriptlang.org/docs/handbook/type-compatibil...

Key sentence in their justification seems to be:

"allowing this enables many common JavaScript patterns"
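A minimal sketch of the unsoundness this enables (all names are made up; method-syntax parameters stay bivariant even under strictFunctionTypes):

```typescript
// Sketch of bivariant method parameters; names are illustrative.
interface Animal { name: string }
interface Dog extends Animal { bark(): void }

interface Handler { handle(a: Animal): void }  // method syntax => bivariant

const dogHandler = {
  handle(d: Dog) { d.bark(); },
};

// Accepted by the compiler even though Dog is narrower than Animal:
const h: Handler = dogHandler;

// h.handle({ name: "cat" });  // compiles, but crashes: bark is undefined
```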

Honestly, at this point they should add a new strict "no JS" mode, as the ecosystem has likely reached a tipping point where you can get by without mixing typed and untyped JS at compile time. I wonder if targeting wasm directly would help enforce those boundaries...


https://www.assemblyscript.org/ is pretty darn close, but it would probably be better to converge and not split communities

How is it "50-90%" savings? If a given application doesn't repeat its queries, surely there's nothing to save by caching the responses?


Hey, thanks for the great feedback! You're raising a valid point.

This package actually started from a hackathon project, a RAG (internal documentation) + MCP app, where I was burning through Anthropic API credits.

Some questions were repeated several times, and the 50%+ figure comes from that experience. Based on it, I had use cases like these in mind:

Multi-user support/FAQ systems: - How do I reset my password? - Reset password steps? - Forgot my password help - Password reset procedure

RAG-based: - How to configure a VM? - How to deploy? - How to create a network?

Educational/training apps, developer testing scenarios, etc.

You're absolutely right that apps with unique queries won't see these benefits. It won't help with personalized content, real-time data, user-specific queries, creative generation, and similar scenarios.

I think I should clarify this in the docs. Thanks for the great feedback. This is my first open-source package and my first conversation on Hacker News. Great to interact with and learn from all of you.
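To make the repeated-question case concrete, here is a toy normalize-then-cache wrapper (illustrative only; the actual package presumably matches queries more cleverly than simple text normalization):

```typescript
// Illustrative normalize-then-cache wrapper for repeated LLM queries.
const cache = new Map<string, string>();

function normalize(q: string): string {
  return q.toLowerCase().replace(/[^a-z0-9 ]/g, "").replace(/\s+/g, " ").trim();
}

async function cachedAsk(
  q: string,
  llm: (q: string) => Promise<string>,
): Promise<string> {
  const key = normalize(q);
  const hit = cache.get(key);
  if (hit !== undefined) return hit; // repeated question: no API call
  const answer = await llm(q);
  cache.set(key, answer);
  return answer;
}
```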


You can rewrite it using let-else to get rid of the unwrap, which some would find more idiomatic.

    let value = Some(param.as_ref()) else {
      // Handle: continue, return an error, etc.
    };
    // Use `value` as normal.


Small nit: the Some() pattern should go on the left side of the assignment:

    let Some(value) = param.as_ref() else {
      // Handle: continue, return an error, etc.
    };
    // Use `value` as normal.
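A complete, compiling sketch of the corrected form (function and names are made up):

```rust
// Minimal let-else example (stabilized in Rust 1.65).
fn first_char(param: Option<&str>) -> char {
    let Some(value) = param else {
        // The else block must diverge: return, continue, panic, etc.
        return '?';
    };
    // Use `value` as normal.
    value.chars().next().unwrap_or('?')
}

fn main() {
    assert_eq!(first_char(Some("hello")), 'h');
    assert_eq!(first_char(None), '?');
}
```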


That part of the article intrigued me: I hadn't heard of that syntax! Will try.


At the moment it seems like you can avoid AI search results by either including swear words, or by using 'minus' to exclude terms (e.g. append -elephant to your search query).


I have done that on occasion, but it's easier to just scroll a bit.


I often run rm -rf as part of operations (for reasons), and my habit is to first run "sudo du -csh" on the paths to be deleted, check that the total size makes sense, and then up-arrow and replace "du -csh" with "rm -rf".
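A sketch of that habit on a throwaway directory (paths are illustrative, and sudo is omitted here):

```shell
# Check-before-delete: run du on the exact paths first, then swap the command.
set -e
target=$(mktemp -d)/cache
mkdir -p "$target"
dd if=/dev/zero of="$target/blob" bs=1024 count=64 2>/dev/null
du -csh "$target"    # check that the "total" line makes sense...
rm -rf "$target"     # ...then up-arrow and replace "du -csh" with "rm -rf"
```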


Use trash-cli and additionally git commit the target if you’re nervous before deletion.


My suggestion: change the color of visited links! A distinct "visited" color will make it easier for visitors to see which posts they have already read.
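E.g., a one-line stylesheet rule (the color is just an example):

```css
/* Distinguish visited links so returning readers can skim past them. */
a:visited { color: #551a8b; } /* the classic browser-default visited purple */
```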


On a small team I usually already know who wrote the code I'm reading, but it's nice to see if a block of code is all from the same point in time, or if some of the lines are the result of later bugfixing. It's also useful to find the associated pull request for a block of code, to see what issues were considered in code review, to know whether something that seems odd was discussed or glossed over when the code was merged in the first place.


I find the GitHub blame view indispensable for this kind of code archeology, as it also gives you an easy way to traverse the history of lines of code. In the blame view, you can go back to the previous revision that changed a line, see the blame for that, and so on.

I really want to find or build a tool that can automatically traverse history this way, like git-evolve-log.
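Plain git gets part of the way there already: `git log -L` follows the history of a line range across revisions (toy repo below just to show the shape):

```shell
# Toy repo showing line-range history traversal with `git log -L`.
set -e
cd "$(mktemp -d)" && git init -q
git config user.email you@example.com && git config user.name you
printf 'line one\nline two\n' > f.txt
git add f.txt && git commit -qm 'initial'
printf 'line one\nline two, edited\n' > f.txt
git commit -qam 'edit line two'

# Every commit that touched line 2 of f.txt, newest first:
git log -L 2,2:f.txt --oneline
```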


I've been carrying around a copy of "git blameall" for years - looks like https://github.com/gnddev/git-blameall is the same one - which basically does this, but keeps it all interleaved in one output (which works pretty well for archeology, especially if you're looking at "work hardened" code).

(Work hardening is a metalworking term: metal bent back and forth, or hammered, too much becomes brittle. An analogous effect shows up in code, where a piece of code that has been bugfixed a couple of times will probably need more fixes; there was a published result a decade or so back about using this to focus QA efforts...)


The "Cregit" tool might be of interest to you; it generates token-based (rather than line-based) git "blame" annotation views: https://github.com/cregit/cregit

Example output based on Linux kernel @ "Cregit-Linux: how code gets into the kernel": https://cregit.linuxsources.org/

I learned of Cregit recently--just submitted it to HN after seeing multiple recent HN comments discussing issues related to line-based "blame" annotation granularity:

"Cregit-Linux: how code gets into the kernel": https://news.ycombinator.com/item?id=43451654


There is https://github.com/emacsmirror/git-timemachine, which is really nice if you use Emacs.


Ooh, core.hooksPath is quite nifty. I usually use something like

         ln -sf ../../scripts/git-pre-commit-hook .git/hooks/pre-commit
which simply adds a pre-commit symlink to a script in the repo's scripts/ dir. But hooksPath seems better.


I don't understand what a dedicated "completely open source offering" provides or what your "five nines feature flag system" provides. If you're running on a simple system architecture, then you can sync some text files around, and if you have a more scalable distributed architecture, then you're probably already handling some kind of slowly-changing, centrally-managed system state at runtime (e.g. authentication/authorization, or in-app news updates, ...) where you can easily add another slowly-changing, centrally-managed bit of data to be synchronised. How do you measure the nines on a feature flag system, if you're not just listing the nines on your system as a whole?


> If you're running on a simple system architecture,

His point was that even a feature flag system in a complex environment with substantial functional and system requirements is worth building vs buying. If your needs are even simpler, then this statement is even more true!

I'm having a hard time making sense out of the rest of your comment, but in larger businesses the kinds of things you're dealing with are:

- low latency / staleness: You flip a flag, and you'll want to see the results "immediately" across all of the services in all of your datacenters. Think on the order of one second vs., say, 60s.

- scalability: Every service in your entire business will want to check many feature flags on every single request. For a naive architecture this would trivially turn into ungodly QPS. Even if you took a simple caching approach (say cache and flush on the staleness window), you could be talking hundreds of thousands of QPS across all of your services. You'll probably want some combination of pull and push. You'll also need the service to be able to opt into the specific sets of flags that it cares about. Some services will need to be more promiscuous and won't know exactly which flags they need to know in advance.

- high availability: You want to use these flags everywhere, including your highest availability services. The best architecture for this is that there's not a hard dependency on a live service.

- supports complex rules: Many flags will have fairly complicated rules requiring local context from the currently executing service call. Something like: "If this customer's preferred language code is ja-JP, and they're using one of the following devices (Samsung Android blah, iPhone blargh), and they're running versions 1.1-1.4 of our app, then disable this feature". You don't want to duplicate this logic in every individual service, and you don't want to make an outgoing service call (remember, H/A), so you'll be shipping these rules down to the microservices, and you'll need a rules engine that they can execute locally.

- supports per-customer overrides: You'll often want to manually flip flags for specific customers regardless of the rules you have in place. These exclusion lists can get "large" when your customer base is very large, e.g. thousands of manual overrides for every single flag.

- access controls: You'll want to dictate who can modify these flags. For example, some eng teams will want to allow their PMs to flip certain flags, while others will want certain flags hands off.

- auditing: When something goes wrong, you'll want to know who changed which flags and why.

- tracking/reporting: You'll want to see which feature flags are being actively used so you can help teams track down "dead" feature flags.

This list isn't exhaustive (just what I could remember off the top of my head), but you can start to see why they're an endeavor in and of themselves and why products like LaunchDarkly exist.
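To make the "complex rules" and "per-customer overrides" points concrete, here is a hedged sketch of a local, no-network evaluator (all names, rule shapes, and customer IDs are made up):

```typescript
// Local flag evaluation: rules shipped to the service, no outgoing call.
type Ctx = { lang: string; device: string; appVersion: string; customerId: string };
type Rule = (ctx: Ctx) => boolean;

const flagRules: Record<string, { rule: Rule; overrides: Map<string, boolean> }> = {
  disableNewCheckout: {
    // "ja-JP, on one of these devices, running app versions 1.1-1.4"
    rule: (ctx) =>
      ctx.lang === "ja-JP" &&
      ["Samsung Android", "iPhone"].some((d) => ctx.device.startsWith(d)) &&
      /^1\.[1-4]/.test(ctx.appVersion),
    overrides: new Map([["customer-42", true]]), // manual per-customer override
  },
};

function isEnabled(flag: string, ctx: Ctx): boolean {
  const entry = flagRules[flag];
  if (!entry) return false;                    // sane default: fail closed
  const override = entry.overrides.get(ctx.customerId);
  if (override !== undefined) return override; // overrides beat rules
  return entry.rule(ctx);
}
```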


> if you're not just listing the nines on your system as a whole

At scale the nines of your feature flagging system become the nines of your company.

We have a massive distributed systems architecture handling billions in daily payment volume, and flags are critical infra.

Teams use flags for different things. Feature rollout, beta test groups, migration/backfill states, or even critical control plane gates. The more central a team's services are as common platform infrastructure, the more important it is that they handle their flags appropriately, as the blast radius of outages can spiral outwards.

Teams have to be able to competently handle their own flags. You can't be sure what downstream teams are doing: if they're being safe, practicing good flag hygiene, failing closed/open, keeping sane defaults up to date, etc.

Mistakes with flags can cause undefined downstream behavior: sometimes state corruption (e.g. with complicated multi-stage migrations), sometimes thundering herds that take down systems all at once. You hope that teams take measures to prevent this, but you also have to help protect them from themselves.

> slowly-changing, centrally-managed system state at runtime

With flags being so essential, we have to be able to service them with near-perfect uptime. We must be able to handle application / cluster restart and make sure that downstream services come back up with the correct flag states for every app that uses flags. In the case of rolling restarts with a feature flag outage, the entire infrastructure could go hard down if you can't do this robustly. You're never given the luxury of knowing when the need might arise, so you have to engineer for resiliency.

An app can't start serving traffic with the wrong flags, or things could go wrong. So it's a hard critical dependency to make sure you're always available.

Feature flags sit so closely to your overall infrastructure shape that it's really not a great idea to outsource it. When you have traffic routing and service discovery listening to flags, do you really want LaunchDarkly managing that?
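The restart-resiliency point can be sketched as a last-known-good snapshot loader (file path and flag names are illustrative):

```typescript
// Come back up on the last-known-good flag snapshot, not a live service.
import * as fs from "fs";

function loadFlags(
  snapshotPath: string,
  defaults: Record<string, boolean>,
): Record<string, boolean> {
  try {
    const snapshot = JSON.parse(fs.readFileSync(snapshotPath, "utf8"));
    return { ...defaults, ...snapshot };
  } catch {
    // Flag service and snapshot both unavailable: start on sane defaults
    // rather than refusing to come up at all.
    return defaults;
  }
}
```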

