jj describe gives a name to a commit. In jj, nearly every command rewrites history, so there's no real point in calling that out in the command name; it's just the default behavior.
It's not true, in that sense. Commits in jj are basically the same as commits in git as far as mutability is concerned. But in jj you normally work with changes, rather than commits, and open changes are mutable (by altering which immutable commit they point to in the backing store). And there is effectively an append-only audit trail of these alterations (which is what makes `jj undo`/`jj redo` simple).
Some comments here are confusing the issue by saying ‘commit’ when they mean ‘change’ in the jj sense.
Re the grandparent comment, `jj describe` provides a change description, analogous to `git commit --amend --edit` in git terms.
It is true: some history is marked immutable by default. In git, everything is mutable by default, and you have to add branch protection on the server side. (Granted, you can change what is immutable in jj relatively easily, so you shouldn't ignore branch protection if you're using jj exclusively with a git repo, either.)
Europe has a major distro in the form of SUSE, so that’s not too worrying.
Even if upstream Linux banned European contributors, there are enough European contributors that a fork would just emerge. So I'm really not too worried about that happening.
1. Do a request to `chrome-extension://<extension_id>/<file>`. It's unclear to me why this is allowed.
2. Scan the DOM, looking for nodes containing "chrome-extension://" within them (for instance, because they link to an internal resource).
It's pretty obvious why the second one works, and that "feels alright" - if an extension modifies the DOM, then it's going to leave traces behind that the page might be able to pick up on.
The first one is super problematic to me though, as it means that even extensions that don't interact with the page at all can be detected. It's unclear to me whether an extension can protect itself against it.
> 1. Do a request to `chrome-extension://<extension_id>/<file>`. It's unclear to me why this is allowed.
Big +1 to that.
The charitable interpretation is that this behavior is simply an oversight by Google, a pretty massive one at that, which they have been slow to correct.
The less-charitable interpretation is that it has served Google's interests to maintain this (mis)feature of its browser. Likely, Google or its partners use techniques similar to what LinkedIn/Microsoft use.
This would be in the same vein as Google Chrome replacing ManifestV2 with ManifestV3, ostensibly for performance- and security-related purposes, when it just so happens that ManifestV3 limits the ability to block ads in Chrome… the major source of revenue for Google.
The more-fully-open-source Mozilla Firefox browser seems to have had no difficulty in recognizing the issues with static extension IDs and randomizing them since forever (https://harshityadav.in/posts/Linkedins-Fingerprinting), just as Firefox continues to support ManifestV2 and more effective ad-blocking, with no issues.
> This would be in the same vein as Google Chrome replacing ManifestV2 with ManifestV3, ostensibly for performance- and security-related purposes, when it just so happens that ManifestV3 limits the ability to block ads in Chrome… the major source of revenue for Google.
uBlock Origin Lite (compatible w/ ManifestV3) works quite well for me, I do not see any ads wherever I browse.
The MV3 problem was never about "does it work now". It was about "can it keep up". Ad blocking is a cat-and-mouse game, and the mouse is kneecapped now. You're being slow-boiled.
Well said. I'm glad that ad blockers have managed to develop effective approaches under MV3, but it took a tremendous amount of engineering effort that was only necessary because Google was trying to impose these very large costs on them.
These are web-accessible resources, e.g. images and stylesheets you can reference in generated HTML. Since content scripts operate directly on the same DOM, it's unclear how you can tell whether an <img> or <link> came from the modification of a content script or from a first-party script. You might argue it's possible to block these in fetch(), but then you also need to consider leaks in, say, Image's load event.
This behavior has been improved in MV3, with an option to make the extension ID dynamic to defeat detection:
> Note: In Chrome in Manifest V2, an extension's ID is fixed. When a resource is listed in web_accessible_resources, it is accessible as chrome-extension://<your-extension-id>/<path/to/resource>. In Manifest V3, Chrome can use a dynamic URL by setting use_dynamic_url to true.
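Concretely, that note corresponds to a manifest entry along these lines — a minimal sketch in which the resource path and match pattern are just illustrative:

```json
{
  "manifest_version": 3,
  "name": "Example extension",
  "version": "1.0",
  "web_accessible_resources": [
    {
      "resources": ["images/logo.png"],
      "matches": ["<all_urls>"],
      "use_dynamic_url": true
    }
  ]
}
```

With `use_dynamic_url` set, Chrome serves the resource from a per-session dynamic origin rather than the fixed `chrome-extension://<extension-id>/` one, which is what defeats the static-ID probe described above.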
For widget style services:
If your site needs the functionality of an extension in order to operate, you can check whether it's already installed so you don't ask the user to install it again.
This is better than forcing the extension to announce its presence on every web site.
> NSA most certainly has a backdoor there and such complete access to any Android phone.
Citation needed?
> This was common knowledge after the Snowden stuff.
Not to me, it isn't? As far as I'm aware, most of the Snowden stuff was centered around PRISM, which allowed wide-scale wiretapping of the internet backbone, as well as agreements with big cloud providers to allow tapping into their data.
I haven't seen anything indicating that there was widespread compromise of personal computing devices at such a deep level of the root of trust. I haven't seen any indication that the NSA has a backdoor in the earlyboot CPU of any device, whether that is the Qualcomm boot processor, the Intel Management Engine or the AMD Platform Security Processor (which all have similar capabilities and hidden firmware).
If I missed anything, or you have links to research into these backdoors, I'd like to see them!
There is _some amount_ of justification to ban TXT. There have been a few cases of C2 servers using DNS to send instructions to malware, so letting TXT slip through the cracks would still allow for that.
Now whether this downside justifies the massive problem it causes on false positives...
TXT can't be banned. There are several RFCs that require TXT records, such as DKIM configuration, DMARC configuration, and it is extensively used for verification by things like AWS SES, Microsoft Office, and all kinds of things. It's built into many standards and used by all kinds of other entities for all kinds of perfectly legitimate things.
Did you read my reply without reading the parent I was replying to? I’m talking about not allowing a blocked domain from being able to add new TXT entries as the parent was suggesting. Of course TXT shouldn’t be banned entirely…
> The best solution is skin-in-the-game, for-profit enterprise coupled with rigorous antitrust enforcement.
Don't we have enough examples showing that this simply cannot work long-term, because the for-profit enterprises will _inevitably_ grow larger than the government can handle through antitrust? And once they reach that size, they become impossible to rein in. Just look at all the stupidly large American corporations that can't be broken up anymore, because the corporation has the lobbying power and media budget to make any attempt to enforce antitrust a career killer for a politician.
I think it's very myopic to say that corporate structure is the "best solution".
It seems like you have an unfalsifiable belief. If one side raises more money and wins, it is because of the money. If one side raises more money and loses, it is still the money, because the other side spent it more effectively.
And the fact that a 3rd party supports an opponent does not kill any politician's career. Biden retired by himself, following his own party's pressure. And Harris is still around, I believe.
To be fair, that seems to be where some of the AI lawsuits are going. The argument goes that the models themselves aren't derivative works, but the output they produce can absolutely be, in much the same way that reproducing a book from memory could be copyright violation, trademark infringement, or generally run afoul of the various IP laws.
Says who? You can totally do code reuse using manually-written dynamic dispatch in "rust without traits". That's how C does it, and it works just fine (in fact, it's often faster than Rust's monomorphic approach that results in a huge amount of code bloat that is often very unfriendly to the icache).
Granted, a lot of safety features depend on traits today (Send/Sync, for instance), but traits are a much more powerful and complex feature than you need for all of this. It seems to me like it's absolutely possible to create a simpler language than Rust that retains its borrow checker and thread-safety capabilities.
Now whether that'd be a better language is up to individual taste. I personally much prefer Rust's expressiveness. But not all of it is necessary if your goal is only "get the same memory and thread safety guarantees".
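The C-style approach described above can be sketched in Rust itself, with a plain function pointer standing in for a manually-written vtable slot — no traits involved. The `Shape`/`area` names here are purely illustrative, not from any real codebase:

```rust
// Manual dynamic dispatch, C-style: the "vtable" is just a function
// pointer stored alongside the data, instead of a trait object.
struct Shape {
    data: f64,               // payload (e.g. a radius or a side length)
    area: fn(&Shape) -> f64, // hand-rolled dispatch slot
}

fn circle_area(s: &Shape) -> f64 {
    std::f64::consts::PI * s.data * s.data
}

fn square_area(s: &Shape) -> f64 {
    s.data * s.data
}

fn main() {
    let shapes = [
        Shape { data: 1.0, area: circle_area },
        Shape { data: 2.0, area: square_area },
    ];
    // One call site, no monomorphization: every call goes through the
    // same function-pointer indirection, and only one copy of this
    // loop exists in the binary.
    for s in &shapes {
        println!("{:.3}", (s.area)(s));
    }
}
```

This is essentially what C code bases (and pre-trait object-oriented C APIs) do by hand; it trades the pointer bounce for smaller code size.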
> Says who? You can totally do code reuse using manually-written dynamic dispatch in "rust without traits". That's how C does it, and it works just fine.
Rust can monomorphize functions when you pass in types that adhere to specific traits. This is super-handy, because it avoids a bounce through a pointer.
The C++ equivalent would be a templated function call with concept-enforced constraints, which was only well-supported as of C++20 (!!!) and requires you to move your code into a header or module.
Zig can monomorphize with comptime, but the lack of a trait-based constraint mechanism means you either write your own constraints by hand with reflection or rely on duck typing.
C doesn't monomorphize at all, unless you count preprocessor hacks.
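The Rust half of that comparison can be sketched as follows. The function names are hypothetical: `total_generic` is monomorphized once per concrete iterator type it's called with (fast calls, more code), while `total_dyn` is compiled exactly once and every call bounces through the trait object's vtable:

```rust
// Static dispatch: one copy of this function is generated per concrete
// type I, so each call is direct (and inlinable).
fn total_generic<I: Iterator<Item = u32>>(iter: I) -> u32 {
    iter.sum()
}

// Dynamic dispatch: a single copy handles any iterator of u32, at the
// cost of a vtable indirection on each `next()` call.
fn total_dyn(iter: &mut dyn Iterator<Item = u32>) -> u32 {
    iter.sum()
}

fn main() {
    let v = vec![1u32, 2, 3];
    // Two instantiations of total_generic: one for vec's IntoIter,
    // one for Range<u32>.
    println!("{}", total_generic(v.clone().into_iter())); // 6
    println!("{}", total_generic(0u32..4));               // 6
    // One copy of total_dyn serves both through indirection.
    println!("{}", total_dyn(&mut v.into_iter()));        // 6
}
```

The trait bound is what lets the compiler type-check the generic version once, up front, rather than at each instantiation (as C++ templates did before concepts).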
At what point did they make it _worse_? Tailwind didn't remove any existing functionality here. What they did was refuse to merge a PR while they're trying to figure out how to navigate a difficult financial problem, all while being fully transparent about what's going on, and saying that they're open to merging the PR if/when they manage to get things together.
This is very different from, say, the minio situation, where they were actively removing features before finally closing development down entirely. Whether Tailwind will end up going down this route, time will tell. But as of right now, I find this reading to be quite uncharitable.
It's not even functionality in the library code; it's a PR to their docs. If you just want optimized docs for your LLM to consume, isn't that what [Context7](https://context7.com/websites/tailwindcss) already has? Why force this new responsibility onto the maintainers?