The first time I saw a post of hers here I found it odd, but it made me think. Now I know that's one of the things that makes HN, HN, and I appreciate it: being made to think.
Thanks, glad you like the tool! That's exactly the plan.
The goal for v0.1 was to build the evaluation harness first (the scoring part). Now that it's in place, adding more strategies like HierarchicalChunker to the 'test bench' is the perfect next step.
I've added it to the roadmap!
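To illustrate what a pluggable "test bench" for chunking strategies could look like: this is a hypothetical sketch only. Every name in it (register, STRATEGIES, fixed_window, paragraph, run_bench, and the placeholder metric) is made up for illustration and not taken from the project.

```python
# Hypothetical chunking test bench: strategies register themselves, and the
# harness scores each one against the same text. All names are illustrative.
from statistics import mean
from typing import Callable

STRATEGIES: dict[str, Callable[[str], list]] = {}

def register(name: str):
    """Decorator that adds a chunking strategy to the bench."""
    def deco(fn: Callable[[str], list]):
        STRATEGIES[name] = fn
        return fn
    return deco

@register("fixed_window")
def fixed_window(text: str, size: int = 80) -> list:
    # Naive baseline: cut the text into equal-sized windows.
    return [text[i:i + size] for i in range(0, len(text), size)]

@register("paragraph")
def paragraph(text: str) -> list:
    # Split on blank lines, keeping only non-empty paragraphs.
    return [p.strip() for p in text.split("\n\n") if p.strip()]

def run_bench(text: str) -> dict:
    # Placeholder metric: mean chunk length. A real harness would plug its
    # retrieval/answer-quality scoring in here instead.
    return {name: mean(len(c) for c in fn(text))
            for name, fn in STRATEGIES.items()}
```

A HierarchicalChunker (or any new strategy) would then just be one more `@register(...)` entry, scored by the same harness as everything else.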
A good addition would be some old-school observability and benchmarking. MLflow has been around for a long time; you could push parameters to it to track your scores, and you could use Meta's Ax optimization framework to fine-tune the settings (hyperparameters).
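A minimal sketch of the MLflow side of that suggestion. The `log_params`/`log_metrics` calls are the real MLflow tracking API; the parameter and metric names (`strategy`, `chunk_size`, `recall_at_5`) are made-up example values, and the snippet degrades to a no-op when MLflow isn't installed:

```python
# Sketch: push a run's settings and scores to MLflow if it's available.
try:
    import mlflow
except ImportError:
    mlflow = None  # fall through: still return the record for inspection

def track_run(params: dict, metrics: dict) -> dict:
    """Log one evaluation run's hyperparameters and scores."""
    if mlflow is not None:
        with mlflow.start_run():
            mlflow.log_params(params)    # e.g. chunking strategy, chunk size
            mlflow.log_metrics(metrics)  # e.g. retrieval scores
    # Return what was (or would have been) logged.
    return {"params": params, "metrics": metrics}

record = track_run(
    {"strategy": "hierarchical", "chunk_size": 512},  # example values
    {"recall_at_5": 0.81},                            # example score
)
```

From there, an Ax loop could propose the next `chunk_size` to try while MLflow keeps the history of every run.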
The biggest downside of Knockout is that it parses the template from the DOM, and the template is rendered as DOM until first execution; only then does it eval its bindings. I suppose TKO should help with those issues, but it seems kinda dead.
Knockout's reactivity primitives are also a lot more naive than modern signals implementations.
Not sure how to react. This is the second time in a month that someone thinks I used AI to write an HN post.
All I can say is that I didn't, and thank you for implying that it was so well written that it could only have been authored by a machine that has all of humanity's cultural output to hand.
I like it, but I always miss features or defaults like:
- internal network only, with edge nodes (i.e. Tailscale out of the box, plus some edge nodes)
- the option to deploy on multiple servers to scale, with a super simple non-k8s approach.
Like 10 nodes behind Tailscale/WireGuard in a private network, with only 2 nodes that have a port open on 80/443 and are exposed to the public network. The rest of the nodes (db, redis, etc.) are all private.
Check out https://github.com/psviderski/uncloud, which I'm building. Multi-machine deployments and a private WireGuard network spanning locations (even behind NAT) are its core capabilities.
I think a feature like this sees best use in short-lived programs (where startup time is a disproportionate share of total run time) and programs where really fast startup is essential. There are plenty of places where I could imagine taking advantage of this in my code at work immediately, but I share your concern about unpredictability when libraries we use are also making use of it. It wouldn't be fun to have to dive into dependencies to see what needs to be touched to trigger lazy imports at the most convenient time. Unless I am misunderstanding, and a normal import of a module means that all of its lazy imports also become non-lazy?
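For reference, today's stdlib already offers an opt-in approximation of this via `importlib.util.LazyLoader` (this mirrors the recipe in the importlib docs; the proposed language-level feature's semantics may well differ, and `decimal` here is just an arbitrary example module):

```python
import importlib.util
import sys

def lazy_import(name: str):
    """Return a module whose code only runs on first attribute access.

    Approximates (but is not identical to) proposed language-level
    lazy imports, using the stdlib LazyLoader recipe.
    """
    spec = importlib.util.find_spec(name)
    loader = importlib.util.LazyLoader(spec.loader)
    spec.loader = loader
    module = importlib.util.module_from_spec(spec)
    sys.modules[name] = module
    loader.exec_module(module)  # sets up laziness; does not run the module yet
    return module

dec = lazy_import("decimal")                     # module body has not run yet
total = dec.Decimal("1.5") + dec.Decimal("2.5")  # first attribute access loads it
```

The "touch an attribute to trigger loading" step is exactly the part the parent comment worries about having to chase through dependencies.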