PSA: npm/bun/pnpm/uv now all support setting a minimum release age for packages.
I also have `ignore-scripts=true` in my ~/.npmrc. Based on the analysis, that alone would have mitigated the vulnerability. bun and pnpm do not execute lifecycle scripts by default.
Here's how to set global configs to set min release age to 7 days:
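Here's a sketch of what those configs can look like (treat the exact option names, units, and minimum versions as assumptions to verify against each tool's current docs; they have changed across releases):

```
# pnpm: pnpm-workspace.yaml, value in MINUTES (7 days = 10080)
minimumReleaseAge: 10080

# bun: bunfig.toml, value in SECONDS (7 days = 604800)
[install]
minimumReleaseAge = 604800

# uv: uv.toml, recent versions accept a duration string
exclude-newer = "7 days"

# npm: check `npm help config` on your version for the
# equivalent key and its unit before relying on it.
```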
(Side note, it's wild that npm, bun, and pnpm have all decided to use different time units for this configuration.)
If you're developing with LLM agents, you should also update your AGENTS.md/CLAUDE.md file with some guidance on how to handle failures stemming from this config as they will cause the agent to unproductively spin its wheels.
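For instance, a note along these lines (the wording and file layout here are purely illustrative):

```markdown
## Dependency installs

- This repo enforces a minimum release age (cooldown) for new packages.
- If an install fails because a recently published version "does not exist"
  or is rejected, do NOT delete the cooldown config or downgrade tooling.
- Instead, pin the newest version that is older than the cooldown window,
  or stop and ask a human before bypassing the policy.
```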
Probably went with the simplest implementation, if starting from the current “seconds since epoch” value. Let the user do any calculations needed to translate three days into that measurement.
It also efficiently annoys the most people at once: those who want hours will complain if it's set in days, those who want days will complain if hours are used. By using minutes or seconds you can wind up both camps while not offending those who rightly don't care, because they can cope with a little arithmetic :)
Though doing what sleep(1) does would be my preference: default to seconds but allow m/h/d to be added to change that.
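A sketch of that sleep(1)-style convention (illustrative only, not any package manager's actual option format): a bare number means seconds, and an optional s/m/h/d suffix overrides the unit.

```javascript
// Parse "90", "3d", "1.5h", etc. into seconds.
// Bare numbers default to seconds, like sleep(1).
function parseSleepStyle(text) {
  const m = /^(\d+(?:\.\d+)?)\s*([smhd]?)$/.exec(text.trim());
  if (m === null) throw new Error("invalid duration: " + text);
  const perUnit = { "": 1, s: 1, m: 60, h: 3600, d: 86400 };
  return Number(m[1]) * perUnit[m[2]];
}

// parseSleepStyle("90")   -> 90
// parseSleepStyle("3d")   -> 259200
// parseSleepStyle("1.5h") -> 5400
```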
Hence the way I would do it (and have for other purposes), as stated in my final sentence. Have the human state the intent and convert to your own internally preferred units as needed.
No no no, see now we just say "computer! do tedious math!", and it will do some slightly different math for us and compliment us on having asked it to do so.
The one true unit of time is hexadecimal encoded nanoseconds since the unix epoch. (I'm only half joking because I actually have authored code that used that before.)
I actually think it is not too bad a design, because seconds are the SI base unit for time. Putting something like "x days" requires additional parsing steps and therefore complexity in the implementation. Either knowing or calculating how many seconds there are in a day can be expected of anyone touching a project or configuration at this level of detail.
Seconds are also unambiguous. Depending on your chosen definition, "X days" may or may not be influenced by leap seconds and DST changes.
I doubt anyone cares about an hour more or less in this context. But if you want multiple implementations to agree, talking about seconds on a monotonic timer is a lot simpler.
Could you explain what you mean re: ambiguity? I understand why “calendar units” like months are ambiguous, but minutes, hours, days, and weeks all have fixed durations (which is why APIs like Python’s `timedelta` allow them).
The minute between December 31, 2016 23:59 and January 1st 2017 is 61 seconds, not 60 seconds. The hour that contains that minute is 3601 seconds, the day that contains that hour is 86401 seconds, etc. If you assume a fixed duration and simply multiply by 86400, your math will be wrong compared to the rest of the world.
Daylight savings time makes a day take 23 hours or 25 hours. That makes the containing week take 601200 seconds or 608400 seconds. Etc.
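Working that arithmetic out in seconds:

```javascript
const HOUR = 3600, DAY = 24 * HOUR, WEEK = 7 * DAY; // fixed-duration units

// A DST transition makes one local day 23 or 25 hours long, so the
// calendar week containing it differs from the fixed 604800 seconds:
const springWeek = 6 * DAY + 23 * HOUR; // 601200
const fallWeek   = 6 * DAY + 25 * HOUR; // 608400

// A leap-second minute has 61 seconds, so the containing hour and day
// run 3601 and 86401 seconds respectively.
const leapDay = DAY + 1; // 86401
```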
That’s what I mean by calendar units. These aren’t issues if you don’t try to apply durations to the “real” calendar.
(This is all in the context of cooldowns, where I’m not convinced the there’s any real ambiguity risk by allowing the user to specify a duration in day or hour units rather than seconds. In that context a day is exactly 24 hours, regardless of what your local savings time rules are.)
"exactly 24 hours" could still be anywhere between 86399 and 86401 seconds, depending on leap seconds. At least if by an hour you mean an interval of 60 minutes, because a minute that contains a leap second will have either 59 or 61 seconds.
You could specify that for the purposes of cooldowns you want "hour" to mean an interval of 3600 seconds. But that you have to specify that should illustrate how ambiguous the concept of an hour is. It's not a useless concept by any means and I far prefer to specify duration in hours and days, but you have to spend a sentence or two on defining which definition of hours and days you are using. Or you don't and just hope nobody cares enough about the exact cooldown duration
Leap seconds are their own nightmare. UNIX time ignores them, btw, so that UNIX time is 86400 × the number of days since 1/1/1970 + the number of seconds since midnight. The behavior at the instant of a leap second is undefined.
In the UK last Sunday was 23 hours long because we switched to BST, and occasionally leap seconds will result in a minute being something other than 60 seconds.
No it wasn't. The country instantaneously changed timezones from UTC+0 to UTC+1 (called something else locally), it was no different to any other timezone change from e.g. physically moving into another timezone.
I came here to argue the opposite. Expressing it in seconds takes away questions about time zones and DST.
I think you're incorrect to say that seconds are also ambiguous. Maybe what you mean is that days are more practical, but that seems very much a personal preference.
I understand the [flawed] reasoning behind "x seconds from now is going to be roughly now() + x on this particular system", but how does defining the cooldown from an external timestamp save you from dealing with DST and other time shenanigans? In the end you are comparing two timestamps and that comparison is erroneous without considering time shenanigans
that kind of complexity is always worth it. Every single time. It's user time that you're saving and it also makes config clearer for readers and cuts out on "too many/little zeroes on accident" errors
It's just a library for handling time, which 98% of the time your app will be using for something else.
Workdays! Think about it, if you set the delay in regular days/seconds the updated dependency can get pulled in on a weekend with only someone maybe on-call.
(Hope your timezones and tzdata correctly identify Easter bank holidays as non-workdays)
In JavaScript something entirely new would be invented, to solve a problem that has long been solved and is documented in 20+ year old books on common design patterns. So we can all copy-paste `{ or: [{ days: 42, months: 2, hours: "DEFAULT", minutes: "IGNORE", seconds: null, timezone: "defer-by-ip" }, { timestamp: 17749453211*1000, unit: "ms"}]` without any clue as to what we are defining.
In Java, a 6000LoC+ ecosystem of classes, abstractions, dependency-injectables and probably a new DSL would be invented so we can all say "over 4 Malaysian workdays"
But you know that Java solution will continue working even after we no longer use the Gregorian Calendar, the collapse and annexation of Malaysia to some foreign power, and then us finally switching to a 4-day work week; so it'd be worth it.
... and since it was architectured to allow runtime injection-patching of events before they hit the enterprise-service-bus, everyone using this library must first set fourteen ENV vars in their profile, and provide a /etc/java/springtime/enterprise-workday-handling/parse-event-mismatch.jar.patch. Which should fix the bug for you.
You can find the patch files for your OSs by registering at Oracle with a J3EE8.4-PatchLibID (note, the older J3EE16-PatchLib-ids aren't compatible), attainable from your regional Oracle account-manager.
And at least one of those environment variables can contain template strings that are expanded with arguments from request headers when run under popular enterprise Java frameworks, and by way of the injection patching could hot-load arbitrary code at runtime.
A joke should be funny though, not just a dry description of real life, so let's leave it at that. We've already taken it too far.
In before someone thinks it's a joke: the most commonly used logging library in Java had LDAP support in format strings enabled by default (which, of course, resulted in a CVE).
JavaScript Temporal. Not sure whether knowing what a "workday" is in each timezone is in its scope, but it's the much-needed, improved JS date API (granted, with limited support to date).
Don't forget about regional holidays, which might follow arbitrary borders that don't match any of the official subdivisions of the country. Or may even depend on the chosen faith of the worker
…now imagine a list of instruments, some of which have durations specified in days/weeks/months (problems already with the latter) and some in workdays, and the user just told your app to display it sorted by duration.
Nah, use working hours, make global assumptions of 0900-1230/1330-1730, M-F, and have an overly convoluted way to specify what working hours actually are in the relevant location(s).
They made a movie to make money. I doubt anyone holding the purse strings cared one iota if that bit were corrected or not. It’s not really a retcon either because they didn’t change anything.
That had more or less been the explanation in the books for decades, and even in George Lucas' notes from 1977:
> It's a very simple ship, very economical ship, although the modifications he made to it are rather extensive – mostly to the navigation system to get through hyperspace in the shortest possible distance (parsecs).
For Star Wars, they retconned it to mean he found the shortest possible route through dangerous space, so even for Han Solo's quote, it's still distance.
To me it sounds safer to have different big infra providers with different delays, otherwise you still hit everyone at the same time when something does inevitably go undetected.
And the chances of staying undetected are higher if nobody is installing until the delay time elapses.
It's the same as not scheduling all cronjobs to midnight.
About the use of different units: next time you choose a property name in a config file, include the unit in the name. So not “timeout” but “timeoutMinutes”.
Yes!! This goes for any time you declare a time interval variable. The number of times I've seen code changes with a comment like "Turns out the delay arg to function foo is in milliseconds, not seconds".
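A minimal illustration (the field names here are made up):

```javascript
// Ambiguous: the reader (and the caller) has to guess the unit.
const risky = { timeout: 30 };

// Self-documenting: the unit travels with the name, so a
// "milliseconds vs seconds" mix-up surfaces at review time.
const safer = { timeoutSeconds: 30, releaseCooldownDays: 7 };

// Convert once, at the boundary, where the unit is explicit:
const timeoutMs = safer.timeoutSeconds * 1000; // 30000
```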
At that point, you're making all your configuration fields strings and adding another parsing step after the json/toml/yaml parser is done with it. That's not ideal either: either you write a bunch of parsing code (not terribly difficult, but not something I wanna do when I can just not), or you use some time library to parse a duration string, in which case the programming language and time library you happen to use suddenly become part of your config file specification, and you have to exactly re-implement your old time library's duration parser if you ever want to switch to a new one or re-implement the tool in another language.
I don't think there are great solutions here. Arguably, units should be supported by the config file format, but existing config file formats don't do that.
> adding another parsing step after the json/toml/yaml parser is done with it. That's not ideal either
I'd argue that it is ideal, in the sense that it's the sweet spot for a general config file format to limit itself to simple, widely reusable building blocks. Supporting more advanced types can get in the way of this.
Programs need their own validation and/or parsing anyway, since correctness depends on program-specific semantics and usually only a subset of the values of a more simply expressed type is valid. That same logic applies across inputs: config may come from files, CLI args, legacy formats, or databases, often in different shapes. A single normalization and validation path simplifies this.
General formats must also work across many languages with different type systems. More complex types introduce more possible representations and therefore trade-offs. Even if a file parser implements them correctly (and consistently with other such parsers), it must choose an internal form that may not match what a program needs, forcing extra, less standard transformation and adding complexity on both sides for little gain.
Because acceptable values are defined by the program, not the file, a general format cannot fully specify them and shouldn’t try. Its role is to be a medium and provide simple, human-usable (for textual formats), widely supported types, avoid forcing unnecessary choices, and get out of the way.
All in all, I think it can be more appropriate for a program to pick a parsing library for a more complex type, than to add one consistently to all parsers of a given file format.
Another parsing step is the common case. Few parameters represent untyped strings where all characters and values are valid. For numbers as well, you often have a limited admissible range that you have to validate for. In the present case, you wouldn’t allow negative numbers, and maybe wouldn’t allow fractional numbers. Checking for a valid number isn’t inherently different from checking for a regex match. A number plus unit suffix is a straightforward regex.
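As a sketch of that validation step (the function and accepted units are illustrative, not any real tool's syntax): a regex that accepts only a non-negative integer plus a unit suffix, so negative and fractional values are rejected by construction.

```javascript
const DURATION_RE = /^(\d+)\s*(s|m|h|d)$/;

function parseDurationSeconds(text) {
  const m = DURATION_RE.exec(text.trim());
  if (m === null) throw new Error("invalid duration: " + text);
  const perUnit = { s: 1, m: 60, h: 3600, d: 86400 };
  return Number(m[1]) * perUnit[m[2]];
}

// parseDurationSeconds("7d") -> 604800
// "-1d" and "1.5h" both throw, since the regex only admits
// whole non-negative numbers.
```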
> PSA: npm/bun/pnpm/uv now all support setting a minimum release age for packages.
The solution is not moar toolz. That's the problem—this crazy mindset that the problems endemic to bad tooling have a solution in the form of complementing them with another layer, rather than fewer.
Git and every sane SCM already allow you to manage your source tree without jumping through a bunch of hoops to go along with wacky overlay version control systems like the one that the npmjs.com crew designed, centering around package.json as a way to do an end-run around Git. You don't need to install and deploy anything containing never-before-seen updates just because the NodeJS influencer–developers say that lockfiles are the Right Way to do things. (It's not.)
Opting in to being vulnerable to supply chain attacks is a choice.
Is there a way to do that per repo for these tools? We all know how user-side configuration works for users (they usually clean it whenever it goes against what they want to do, instead of wondering why it blocks their changes :))
Fairly sure every single one has a repo-level config that you can add these settings to. Others have pointed out the pnpm and npm options, and I believe bunfig can also be repo-level.
Applying a min release age of 7 days to patch releases exposes you to the other side of the coin: you have an open 7-day window on zero-day exploits that might be fixed in a security release.
The packages that are actually compromised are yanked, but I assume you're talking about a scenario more like log4shell. In that case, you can just disable the config to install the update, then re-enable in 7 days. Given that compromised packages are uploaded all the time and zero-day vulnerabilities are comparatively less common, I'd say it's the right call.
At least with pnpm, you can specify minimumReleaseAgeExclude, temporarily until the time passes. I imagine the other package managers have similar options.
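Roughly like this in pnpm (treat the exact key spelling and units as something to verify against the pnpm docs for your version):

```
# pnpm-workspace.yaml
minimumReleaseAge: 10080          # minutes (7 days)
minimumReleaseAgeExclude:
  - some-patched-package          # hypothetical urgent security fix
```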
Urgent fix, patch released, invisible to dev team cause they put in a 7 day wait. Now our app is vulnerable for up to 7 days longer than needed (assuming daily deploys. If less often, pad accordingly). Not a great excuse as to why the company shipped an "updated" version of the app with a standing CVE in it. "Sorry we were blinded to the critical fix because we set an arbitrary local setting to ignore updates until they are 7 days old". I wouldn't fire people over that, but we'd definitely be doing some internal training.
Because everyone got updates immediately. If the default was 7 days, almost no one would get updates immediately but only after 7 days, and the issue would only be found after 7 days. Unless there is a poor soul checking packages as they are published who can alert the registry before 7 days pass, though I imagine very few do that, and hence a dedicated attacker could influence them to not look too hard.
If I remember correctly, in all the recent cases it was picked up by automated scanning tools in a few hours, not because someone updated the dependency, checked the code and found the issue.
So it looks like even if no one actually updates, the vast majority of the cases will be caught by automated tools. You just need to give them a bit of time.
If everyone or a majority of people sets these options, then I think issues will simply be discovered later. So if other people run into them first, better for us, because then the issues have a chance of being fixed once our acceptable package/version age is reached.
And when you actually need a super hot fix for a 0-day, you will need to revert this and keep it that way for some time to then go back to minimum age.
While this works, we still need a permanent solution, which requires a sort of vetting process, rather than blindly letting everything through.
I think my vetting would settle for a repo diff against the previous version, confirming the only difference was the security fix (though that doesn't cover all the bases).
Their analysis was triggered by open source projects upgrading en-masse and revealing a new anomalous endpoint, so, it does require some pioneers to take the arrows. They didn't spot the problem entirely via static analysis, although with hindsight they could have done (missing GitHub attestation).
A security company could set up a honeypot machine that installs new releases of everything automatically and have a separate machine scan its network traffic for suspicious outbound connections.
The problem is what counts as suspicious. StepSecurity are quite clear in their post that they decide what counts as anomalous by comparing lots of open source runs against prior data, so they can't figure it out on their own.
Can you elaborate? Why do you believe that motivated threat hunters won’t continue to analyze and find threats in new versions of open source software in the first week after release?
Attackers going "low and slow" when they know they're being monitored is just standard practice.
> Why do you believe that motivated threat hunters won’t continue to analyze and find threats in new versions of open source software in the first week after release?
I'm sure they will, but attackers will adapt. And I'm really unconvinced that these delays are really going to help in the real world. Imagine you rely on `popular-dependency` and it gets compromised. You have a cooldown, but I, the attacker, issue "CVE-1234" for `popular-dependency`. If you're at a company you now likely have a compliance obligation to patch that CVE within a strict timeline. I can very, very easily pressure you into this sort of thing.
I'm just unconvinced by the whole idea. It's fine, more time is nice, but it's not a good solution imo.
There are many options. Here's a post just briefly listing a few of the ones that would be handled by package managers and registries, but there are also many things that would be best done in CI pipelines as well.
Worth noting this attack was caught because people noticed anomalous network traffic to a new endpoint. The 7-day delay doesn't just give scanners time, it gives the community time to notice weird behavior from early adopters who didn't have the delay set.
It's herd immunity, not personal protection. You benefit from the people who DO install immediately and raise the alarm
But wouldn't the type of people that notice anomalous network activity be exactly the type of people who add a 7-day delay because they're security conscious?
And I’ll bet a chunk of already-compromised vibe coders are feeling really on-top-of-shit because they just put that in their config, locking in that compromised version for a week.
I suspect most packages will keep a mix of people at 7 days and those with no limit. That being said, adding jitter by default would be a good addition to these features.
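A sketch of what default jitter could look like (function name and distribution are illustrative): spread effective cooldowns over, say, 7 to 9 days so the whole ecosystem doesn't adopt a release at the same instant.

```javascript
// Returns an effective cooldown in seconds: the configured base
// plus up to maxJitterDays of random extra delay per machine.
function cooldownSeconds(baseDays, maxJitterDays = 2) {
  const jitter = Math.random() * maxJitterDays;
  return Math.round((baseDays + jitter) * 86400);
}

const s = cooldownSeconds(7); // somewhere in [604800, 777600)
```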
This became evident, what, perhaps a few years ago? Probably since childhood for some users here but just wondering what the holdup is. Lots of bad press could be avoided, or at least a little.
While this explicitly calls out "postinstall", I'm pretty sure it affects other such lifecycle scripts like preinstall in dependencies.
The --ignore-scripts option will ignore lifecycle scripts in the project itself, not just dependencies. And it will ignore scripts that you have previously allowed (using the "allowBuilds" feature).
The config for uv won't work. uv only supports a full timestamp for this config, and no rolling window day option afaik. Am I crazy or is this llm slop?
> Define a dependency cooldown by specifying a duration instead of an absolute value. Either a "friendly" duration (e.g., 24 hours, 1 week, 30 days) or an ISO 8601 duration (e.g., PT24H, P7D, P30D) can be used.
This is what tripped me up. I added that config and then got this error:
error: Failed to parse: `.config/uv/uv.toml`
Caused by: TOML parse error at line 1, column 17
|
1 | exclude-newer = "7 days"
| ^^^^^^^^
failed to parse year in date "7 days": failed to parse "7 da" as year (a four digit integer): invalid digit, expected 0-9 but got
I was on version 0.7.20, so I removed that line, ran "uv self update" and upgraded to 0.11.2 and then re-added the config and it works fine now.
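So on a recent uv, the working config is just the duration string from the docs quoted above (path shown is the user-level config; project-level `uv.toml` or `pyproject.toml` should work too):

```
# ~/.config/uv/uv.toml
exclude-newer = "7 days"
```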
Yeah, that error message isn’t ideal on older versions, but unfortunately there’s no way to really address that. But I’m glad it’s working for you on newer versions.
I think it should work at the user config level too:
> If project-, user-, and system-level configuration files are found, the settings will be merged, with project-level configuration taking precedence over the user-level configuration, and user-level configuration taking precedence over the system-level configuration.
Do you know if there is a way to override this specifically when I want to install a security patch? uv just claims the package doesn't exist if I ask for the new version.
Wouldn't this just be a case of the bear catching one guy and then catching the other guy (especially if the issue was unnoticed altogether after the set number of days)?
The minimum-release-age heuristic is certainly helpful as it theoretically gives the community a chance to identify the issue. Of course, in practice, these things aren't scanned or analyzed the way they should ideally be, which is a deeper issue. Pinning has definitely saved me on more than one occasion, but it doesn't strike at the root of the issue.
Not just as a gateway in a lot cases, but CrewAI and DSPy use it directly. DSPy uses it as its only way to call upstream LLM providers and CrewAI falls back to it if the OpenAI, Anthropic, etc. SDKs aren't available.
Do you feel as if people will update litellm without looking at this discussion (or have it update automatically), which would then lead to loss of crypto wallets, and especially AI API keys?
Now, I am not worried about the AI API keys causing much damage, but I am thinking one step further, and I am not sure how many of these corporations follow their privacy policies. Perhaps someone more experienced can tell me: wouldn't these applications keep logs for legal purposes, and couldn't those logs contain sensitive information, both of businesses and perhaps of private individuals too?
The thing is, this post is hitting a straw man. ngmi culture was deeply toxic and pervasive in crypto. I think the people who are really into LLMs are having a blast.
I'm definitely having a blast, but I agree with the author. You're not going to get left behind, the "getting left behind" rhetoric was just cryptocurrency pump-and-dumpers. It's fine to wait and not engage if you don't want to.
I had a heavy AI user on my team say that “those who learn how to use the tools won't get fired, those who don't are gone”. I used it to generate a bunch of CFN and it worked fine from an example and a couple-line prompt; doesn't seem that hard to learn to me.
Now, reviewing the 1k lines it generated and making sure it's secure, that's going to take me longer than writing it by hand.
Yeah, I think this is it. If you don't learn to use them, you'll be much slower than people who do, but also they're not really that hard to learn, so it's not super urgent.
I'm still confused about the things I'll be slower in, though, and I'm being sincere with that confusion. If it's "boilerplate", then I haven't done enough research, or I picked a library which has little to none of that, or I'm not using the template(s) built into whatever framework I am using.
For example, in one of the projects I'm working on, I'm using the VSA pattern. I have the list of 50 to 75 features I need to implement and what "categories" they slot into, I have all of the frameworks and libraries picked out, and I have built out "feature templates" with all of the boilerplate setup (I'm reusing these over multiple projects going forward). For each of the features all I need to do is
'ftr new {FEATURE_TYPE} {FEATURE_NAME} {OUTPUT_FOLDER}'
and then plug in the domain-specific business logic.
I'll most likely use Claude/Codex/Whatever to write out some of my tests, but the majority of the 'boilerplate' is already done and I'm just sorting out the pieces that matter / can't be automated.
Am I missing something huge with these tools?
Don't get me wrong, for doing reverse engineering they're great helpers and I've made a tonne of progress on projects that had been languishing.
I find that I can write features 5-10x faster with these tools than by hand, at a comparable level of quality (though it hasn't been long enough for me to judge what'll happen in a year).
Would you be able to give an example of a feature? For my example, I need to query an ancient undocumented database, pull back a pile of data, do some validations on it, and then show it to the user or pass it along to another processing step. The human part is researching the database and the data living in it, and implementing the validation(s) while talking to a business user; everything else can be templated.
Oh yes, this is what LLMs excel at. Introspecting a database, either the schema or the live data, running a few checks to see whether all the data had the same shape (or how many different shapes it has), writing validations to catch edge cases, they do this extremely quickly and pretty accurately, whereas it would have taken me hours of trawling.
Then I can look at the output and say things like "what if the data is lowercase?" or anything else I suspect they may have missed. A few rounds of these and I have a pretty good feel for the quality of the resulting checks, while taking a few minutes of my attention/tens of minutes of wallclock time to do.
You're not actually doing engineering if you're just vibe-coding, reviewing, and testing all the way down. What the hell is that? Just a weird simulacrum of software development that will break apart in unpredictable ways. Security consultants are going to have very lucrative careers in the coming years.
If I don't have experience with the underlying framework/language/thing being modified, it makes it quite difficult to trust the actual review. In this example, I haven't worked heavily with Cloudformation, so I can't call b.s if it leaves a database instance exposed to the wider public internet rather than in my company's private VPC.
You can ask the agent to check that it doesn't leave a database instance exposed to the public, and present you with proof for you to check (references to the code and the relevant Cloudformation documentation). Then repeat this for all the things you'd normally want to check for in a code review.
In that case I'm just moving the reading of the documentation from reading it as I'm writing the yaml to when I'm doing a code review. Not saying it isn't helpful to have a pair researcher, it just seems like I'm moving things around.
The llm person having a blast is compelled to push everyone to see what they see. If they have a leadership role at their company, then the getting-left-behind drum does get banged in the form of "ai native company transformation" initiatives.
Lots, and not just online. I run into them regularly in my office, and so do my friends and family in tech. One of my coworkers is now spending all his time writing SKILLs, he's convinced that we'll never need to solve operational issues again if we have the right SKILLs.
I'm not worried about being left behind technologically, but I am worried about being left behind after every company on the planet decides we need N years experience in AI to be employable.
I love the Afroman story so much. Everything about it.
It does more to expose just how incompetent, entitled and corrupt the average cop really is, something I wish was better known. The cops who brought this suit are basically the biggest crybabies, are too dumb to realize it and too entitled to realize that others wouldn't see it that way. It's fantastic.
Compare this to policing in Japan [1]:
> Koban cops go to extraordinary lengths to learn their beats. They're required to regularly visit every business and household in their districts twice a year, ostensibly to hand out anti-crime flyers or ask about their security cameras. The owner of a coffee shop told Craft, "With Officer Sota, we can say what's on our mind. He's really like a neighbor. Instead of dialing emergency when we need help, we just call him."
American cops are a gang, by and large.
Cops have absolutely massive budgets, from small towns to big cities. Let's not forget Uvalde, where the police department budget was ~40% of the city budget and it resulted in 19 cops standing outside scared while one shooter kept shooting literal children for an hour. Because they were scared.
> Let's not forget Uvalde, where the police department budget was ~40% of the city budget and it resulted in 19 cops standing outside scared while one shooter kept shooting literal children for an hour. Because they were scared.
Not only did they not stop the shooter, but they actively prevented parents—who were willing to risk their lives—from intervening. They didn't just not help, they proactively ran interference for the shooter.
Uvalde should’ve been the nail in the coffin for the thin blue liners. Turns out, not only do they not have a problem with cops murdering black people, they also have no problem with them standing by while their own kids are shot inside a school.
Hell of a death cult this country has turned into.
Uvalde was the moment my hopes for the future of this country died. The parents there voted to reelect the same people who stood around and let a solitary madman murder their children just a little while later.
To be fair, OK is rock bottom in things like literacy rankings and likely income as well. Something like that would not have gone over well in New Jersey. The US is a patchwork quilt of very different places.
Even if this were true, are we really so far gone in the political discourse that we can't appreciate a win as Americans because of the perceived political orientation of the recipient?
I think the answer is yes, but I still naively hold out hope that we can eventually move beyond this.
I didn't say I don't appreciate a win. I was saying that Afroman isn't a crusader for freedom or anything of the sort. We can appreciate the win without glorifying the victim of the case beyond the scope of the case.
I think he tried to be the Libertarian presidential candidate and made a song about Hunter Biden, so people just assume he's full MAGA and doesn't deserve 1st amendment rights
I mean, both him and Trump have similar approach to opponents or those who wronged them. In this case, the opponents are deeply unsympathetic to most, so it is harder to see.
I do see how someone whose reaction to being wronged is "I fucked his wife doggy style" could be attracted to the Donald Trump personality.
He’s willing to publicly criticize government corruption and child abuse, so there’s no way MAGA would accept him. (Both these stances came up in the defamation lawsuit and in the music video.)
When he ran for president in 2024, he registered as an independent, “citing inflation, the housing market, law enforcement corruption, and legalizing marijuana as key campaign issues”.
Even if he is ultra right wing on secondary issues (I have no idea) those are all anti-MAGA or bipartisan stances.
The problem isn't finding people who will publicly criticize child abuse and government corruption. The problem is finding people who actually will do something about those specific problems and not find some way to enrich themselves (or their supporters) in their name instead.
It's under the purview of the executive branch to determine drug scheduling
> The term "list I chemical" means a chemical specified by regulation of the Attorney General ... until otherwise specified by regulation of the Attorney General
That's good to know. The second sentence you gave was superfluous because the first one told us that -- I add this not to dismiss your efforts but to highlight them.
Buried under a lot of complaints about fairness, Kagi mentions they are still using Google search results, just without authorization.
> Because direct licensing isn't available to us on compatible terms, we - like many others - use third-party API providers for SERP-style results (SERP meaning search engine results page).
So in one sense, yes, Kagi isn't quite Google, but in another sense, it very much still is.
Multiple layers of curation works really well. Specifically, using HN as a curation layer for kagi's small web list. I implemented this on https://hcker.news. People who have small web blogs should post them on HN, a lot of people follow that list!