> If it can do this in 13kb, it makes me wonder what it could do with more bytes.
Maybe I misunderstand, but is this not just the first baby steps of an LLM written in JS? "what it could do with more bytes" is surely "GPT2 in javascript"?
The author had an LLM help them build a tree of words; the algorithm chooses which node we're at and offers its children as completions. It's clever and cute, but not even close to an LLM.
It's very far off, like "not even wrong" in the Pauli sense of the phrase.
There are a lot of abstractions one can apply to this stuff; I think the one you're reaching for is "text predictor"?
If you roll with that, then you're in a position where you're saying GPT-2 class LLMs were very close in 1960, because at the end of the day, it's just a dictionary lookup with a string key and a value of list<string> completions. That confuses instead of illuminates.
The trouble with decision trees for language modeling is that they overfit really hard. They don't do the magical generalization that makes LLMs interesting.
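The "dictionary lookup with a string key and a value of list&lt;string&gt; completions" framing above can be sketched in a few lines. This is an illustrative toy, not the linked project's actual code; the names and sample data are made up:

```javascript
// Toy "text predictor" as a plain lookup table: context string -> completions.
// This is the pre-LLM abstraction described above, nothing more.
const model = new Map([
  ["the cat", ["sat", "ran", "slept"]],
  ["cat sat", ["on", "down"]],
]);

function predict(context) {
  // Exact-match lookup only: an unseen context returns nothing.
  // This is the overfitting / no-generalization problem noted above.
  return model.get(context) ?? [];
}

console.log(predict("the cat")); // ["sat", "ran", "slept"]
console.log(predict("a dog"));   // [] -- nothing learned transfers
```

The point of the sketch is the failure mode, not the lookup: any context not literally in the training data gets an empty answer, whereas an LLM interpolates.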
There are countless solutions that try to do this, both official and unofficial, at both the package and repository level. npm, from Node.js, ships a security audit tool, for example, and most code-hosting platforms nowadays have at least a SAST tool built in. But expecting more from free services is a bit of a pipe dream.
Obviously it's hard to make a one-size-fits-all solution. The bottom line is that if you use third-party code for anything serious, you have to do your due diligence from a security point of view: a vulnerability assessment at the bare minimum.
Lots of big companies are in fact maintaining their own versions of whole package ecosystems just to manually address any security concern, which is a crazy effort.
I guess I’d ask why you are going to a meeting (let alone a zoom meeting) where you don’t interact? If you’re not contributing, and you don’t need to listen, why go at all? Maybe I am lucky but I’ve never worked anywhere that people are expected to show up for meetings that are totally irrelevant to their work. I decline meetings all the time.
The thing is, given enough bureaucracy, there are always meetings that are _technically_ relevant but not really.
An all-hands where a director lays out a strategy that was already laid out in an email, will be reiterated in another email, and will also become the OKRs? Sure, that's relevant to my work, and I'm not going to say anything, but nobody can really claim they have better things to do than attend the director's meeting about the whole purpose of our job (even if it doesn't change my personal work).
Or even sometimes things like stand-ups where half the people are going to say something along the lines of "working on the same thing I've been working on all week with slight incremental progress"
We're not all lucky enough to work in places like that. Most of the meetings I go to are interactive ones where I'm contributing to the conversation, but every now and then we have these company-wide meetings that we all have to attend. They are a total waste of time and money: 300+ employees are lectured on sales performance stats and business goals, and our CEO gets angry when people have their cameras off.
There was a guy who applied to YC with this idea, and although I don't think he got in, the app was quite successful. I believe it was called Beulr, but looking at their website it seems they have pivoted.
I think this bit from the FAQ is the info everyone on here is looking for:
> Internally, the EXT platform is split into two parts. The first part is the frontend, which is built using Electron; the second is the control application, which is shipped as a separate executable file. When the frontend is launched, it runs the control application as a child process.
So basically EXT is an Electron app that wraps small web apps offered through ext.store.
There were days when I didn't get all 3 things (or any of the things) done. There were days when I achieved more. But crucially, I never committed myself to more than 3.
I never felt stressed by my todo list as it was very manageable.