
A warrant usually isn't a free pass to search everything. They are often narrow.

The warrant is the receipt. Even if you believe it's fine most of the time I'm pretty certain most people would feel uncomfortable if they went to the grocery store and weren't offered one. You throw it away most of the time, but have you never needed it? Mistakes happen.

The stakes are a lot higher here. The cost of mistakes is higher. The incentives for abuse are higher. The cost of abuse is lower.

And what's the downside of the person being searched having the warrant? Why does it need to be secret?


You used a conditional so I assume you also know how such a system can fail. It's not hard to figure out how that can be exploited, right? You can't rely on that conditional being executed perfectly every time, even without adversarial actors. But why ignore adversarial actors?

Honestly, I think we're just becoming more aware of this way of thinking. It's certainly exacerbated now that everyone has "an expert" in their pocket.

It's no different than with conspiracy theorists. We saw a lot more of them with the rise in access to the internet. Not because they didn't put in work to find answers to their questions, but because they don't know how to properly evaluate what they find, and because they treat being wrong as a (very) bad thing.

But the same thing happens with tons of topics, and it's way more socially acceptable. Look how everyone has strong opinions on topics like climate, rockets, nuclear, immigration, and all that. The problem isn't having opinions or thoughts, but the strength of them compared to the level of expertise. How many people think they're experts after a few YouTube videos or just reading the intro to the wiki page?

Your PM is no different. The only difference is the things they believed, not the way they formed those beliefs. They still had strong feelings about something they didn't know much about. It became "their expert" vs "your expert" rather than "oh, thanks for letting me know". And that's the underlying problem. It's terrifying to see how common it is. But I think it also points to a (partial) solution, or at least a first step. Then again, domain experts typically have strong self-doubt. It's a feature, not a bug, but I'm not sure how many people are willing to be comfortable with being uncomfortable.


I'm not Canadian, but it seems written similarly to the US laws that have been exploited to spy on Americans. And despite not being Canadian, as an American I have a horse in this race, as the OP notes...

  | many of these rules appear geared toward global information sharing
I see a lot of people arguing that these bounds are reasonable so I want to make an argument from a different perspective:

  Investigative work *should* be difficult.
There is a strong imbalance of power between the government and the people. My limited understanding of Canadian law suggests that Canada, like the US, was influenced by Blackstone[0]. You may have heard his ratio (or one of its many variations)

  | It is better that ten guilty persons escape than that one innocent suffer.
What Blackstone was making is the legal variant of an argument about "failure modes" in engineering. Or you can view it as weighing the impact of Type I (false positive) and Type II (false negative) errors. Most of us here are programmers, so this should be natural thinking: when your program fails, how do you want it to fail? Or think of a locked door. Do you want the lock to fail open or closed? In a bank you probably want your safe to fail closed: a failed safe requires breaking into before it can be accessed again. But in a public building you probably want doors to fail open (so people can escape from a fire or whatever other emergency is likely the reason for the failure).
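
The fail-open/fail-closed distinction translates directly to code. Here's a minimal sketch (the `check_access` helper and `broken_lookup` stub are made up for illustration, not from any real system):

```python
def check_access(user, lookup, fail_open):
    """Return whether `user` is allowed, per a `lookup` callable.
    If the lookup itself fails, fall back to the chosen failure mode."""
    try:
        return lookup(user)
    except Exception:
        # fail_open=True  -> the fire exit: grant access on failure
        # fail_open=False -> the bank safe: deny access on failure
        return fail_open

def broken_lookup(user):
    # Simulates the failure case: the ACL service is down.
    raise RuntimeError("ACL service unreachable")

print(check_access("alice", broken_lookup, fail_open=True))   # True
print(check_access("alice", broken_lookup, fail_open=False))  # False
```

The design question isn't in the happy path at all; both modes behave identically until something breaks.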

This frame of thinking is critical with laws too! When the law fails how do you want it to fail? So you need to think about that when evaluating this (or any other) law. When it is abused, how does it fail? Are you okay with that failure mode? How easy is it to be abused? Even if you believe your current government is unlikely to abuse it do you believe a future government might? (If you don't believe a future government might... look south...)

A lot of us strongly push against these types of measures not because we have anything to hide nor because we are on the side of the criminals. We generally have this philosophy because it is needed to keep a government in check. It doesn't matter if everyone involved has good intentions. We're programmers, this should be natural too! It doesn't matter if we have good intentions when designing a login page, you still have to think adversarially and about failure modes because good intentions are not enough to defend against those who wish to exploit it. Even if the number of exploiters is small the damage is usually large, right?

This framework of thinking is just as beneficial when thinking about laws as it is in the design of your programs. You can be in favor of the intent (spirit of the law), but you do have to question if the letter of the law is sufficient.

I wanted to explain this because I think it'll help facilitate these types of discussions. I think they often break down because people are interpreting from very different mental frameworks. Disagree with me if you want, but I hope making the mental framework explicit can at least improve your arguments :)

[0] https://en.wikipedia.org/wiki/Blackstone%27s_ratio


> A lot of us strongly push against these types of measures not because we have anything to hide nor because we are on the side of the criminals.

I had this view as well until I realized it’s predicated on living in a high trust society. At some point you reach a critical mass of crime that is so rampant, and the rule of law has so broken down that it’s basically Mad Max out there, and then these idealistic philosophies start to fall apart.

You can look to parts of SE Asia or the Middle East to see some examples where that happened, and where it was eventually reined in with extreme measures (usually broad and indiscriminate capital punishment).

I know your comment is about fixing failure modes in the legal system, and I’m not defending government surveillance, or the idea of considering someone innocent until proven guilty, but what happens when the entire system fails due to misplaced idealism? Much worse things are waiting on the other end of the spectrum when people don’t feel like the government is adequately protecting them.


I think a practical argument against what you're saying here is simply that solving the mad max stuff doesn't require anything at all like this. The type of crime that's scary and impactful (e.g. terrorism is scary, but so extremely rare that it can't really be considered impactful) is generally trivial to bust.

Are you of the opinion that peoples' default state is a Mad Max-like existence?

The question isn't about idealism or the realistic possibility of said idealism. The question, in my opinion, is whether we can only succeed as a species if a small number of people are entrusted with creating and enforcing laws by force when necessary.

That isn't to say we never need some level of hierarchy or that laws, social norms, etc. aren't important. It's to say that we need to keep a tight rein on it and only push authority and enforcement up the ladder when absolutely necessary.

It will end poorly if we continue down the road of larger and larger governments under the fear of Mad Max, and this idea many people have that "someone has to be in charge."


>I had this view as well until I realized it’s predicated on living in a high trust society.

The breakdown of these high-trust scenarios has been the consequence of active policies. You don't just miss trends and correlations like these. Not to this extent.


The Mad Max stuff is occurring at scale more because of unchecked governments, and governments that don't work for society, than because of insufficient surveillance.

>I had this view as well until I realized it’s predicated on living in a high trust society. At some point you reach a critical mass of crime that is so rampant, and the rule of law has so broken down that it’s basically Mad Max out there, and then these idealistic philosophies start to fall apart.

I mostly see "high trust society" used as a weird racist dogwhistle, but feel free to disabuse me of that notion.

I live in an extremely high crime area, at least on paper, because cops abuse the law to keep their numbers up. If someone checked, they would see that my local McDonald's car park is one of the biggest crime hotspots in the country, because of administrative detections made on minor drug deals there.

It just so happens that my area is also where the government dumps migrants, refugees, and poor people. It's also where they test welfare changes.

I haven't had a single incident here in 6 years. We often forget to lock our doors. My wife takes my toddler walking around the neighborhood at night. I wave hello to the guy across the road who I have like 99% certainty is dealing drugs (Or just has a lot of friends with nice cars who visit to see how long it has been since he trimmed his lawn).

That said, if you turn on the TV, two things are apparently happening: 1. we are under attack by hordes of immigrants tearing the country apart, and 2. we are under attack by kids on e-bikes mowing other kids down in a rampage of terror.

Politicians, in order to be seen to be doing things, bring laws in to counter these threats. People bash their chests and demand more be done.

But the issue is that it's just not happening. My suburb is great. The people are generally lovely, even those in meth-related occupations.

When you complain about the trustiness of the society, consider that your lack of trust might actually be the problem. Nothing is necessarily going to break down because you didn't make your neighbor's life worse by supporting another dumb-as-shit law. "Oh no, crime is so rampant": buddy, you need to get over yourself. Societies don't fail because of socially defined crime; they fail because people prioritise their perceived safety over everyone's freedom.

> I’m not defending government surveillance, or the idea of considering someone innocent until proven guilty

Exactly what you are defending.

>what happens when the entire system fails due to misplaced idealism?

It's threatened by the idealism that you can just pass one more law to fix society.

>don’t feel like the government is adequately protecting them.

They come up with a bunch of dumbshit laws like the OP. That's the result.


Re: "high trust society" generally means people are pointing to implicit, unwritten structures that stop something from happening.

Collective notions of shame, actual networks of friends and families that reinforce correct behaviour or issue corrections.

Simply think about how credit networks form and function. And why visiting a food truck or a medieval travelling doctor for your vial of ointment is different from buying special products from a brick-and-mortar establishment.

Basically, if you or the network has a harder time back-propagating defaults and bad credit in a way that prevents future bad outcomes, then that is a loss of high trust.

This isn't about race, really, unless you are claiming some biological or genetic connection to behaviour. But that is a pretty strange place to be, as there are a whole host of confounding factors that are much more obvious and believable, and I seriously doubt that even a motivated racist could credibly run empirical studies showing causal links between any given genetic population cluster and emergent societal behaviour. These are such high-dimensional systems that it seems insane to even think one could measure this effect.

The invisible substrate is the society, unfortunately, and we are all bad at writing it down and measuring it.


  > until I realized it’s predicated on living in a high trust society.
I don't think it's predicated on that. It's based on low trust of authority. Not necessarily even current authority. And low trust of authority is not equivalent to high trust in... honestly anything else.

  > You can look to parts of SE Asia or the Middle East to see some examples where that happened
These are regions known for high levels of authoritarianism, not democracy, not anarchy (I'm not advocating for anarchy btw). These regions often have both high levels of authoritarianism AND low levels of trust. Though places like China, Japan, Korea etc have high authoritarianism and high trust (China obviously much more than the other two).

  > but what happens when the entire system fails due to misplaced idealism?
It's a good question and you're right that the results aren't great. But I don't think it's as bad as the failure modes of high authoritarian countries.

High authority + low trust + abuse gives you situations like we've seen in Russia, Iran, North Korea. These are pretty bad. The people have no faith in their governments and the governments are centered around enriching a few.

High authority + high trust + abuse is probably even worse though. That's how you get countries like Nazi Germany (and cults). The government is still centered around enriching a few, but they create more stability by narrowing the targeting. Or rather, by having a clearer scale where everyone isn't abused equally. (See the famous quote from a famous US president about keeping poor whites in line by convincing them that at least they're not black.)

None of the outcomes are good but I think the authoritarian ones are much worse.

  > when people don’t feel like the government is adequately protecting them.
But this is also different from what I'm talking about. You can have my framework and still trust your government. If you read carefully, you'll find that the two are not mutually exclusive.

The road to hell is paved with good intentions, right? That implies that the road to hell isn't paved just by evil people. It can be paved even by good, well-intentioned ones. Just like I suggested about programming: we don't intend to create bugs or flaws (at least most of us don't), but they still exist. They still get created even when we're trying our hardest not to create them. But being aware that they happen unintentionally helps you make fewer of them, right? I'm suggesting something similar, but about governments.


This and the previous post are well thought out; thank you for the clarity.

"He who gives up a little freedom for security deserves neither"

I never understood this quote. I happily gave up the freedom of driving without a seatbelt for security, what does that say about me?

Exactly nothing because you can release the seat belt yourself.

It's about giving up freedoms you might never get back, because it's not your decision anymore after giving them up.


It's become shorthand for saying much more, though the original context differs from how it's used today (common with many idioms).

People do not generally believe a seat belt limits your liberty, but you're not exactly wrong either. Maybe in order to understand what they mean, it's better not to play devil's advocate. So try an example like the NSA's mass surveillance. It was instituted under the pretext of keeping Americans safe: a temporary liberty people were willing to sacrifice for safety. But not only did the pretext turn out to be wrong (no WMDs were found...), we never got that liberty back either, now did we?

That's the meaning. Or what people use it to mean. If you try to tear down any saying, it's not going to be hard to. Natural languages' utility isn't in their precision, it's in their flexibility. If you want precision, well, I for one am not going to take all the time necessary to write this in a formal language like math, and I doubt you'd have the patience for it either (who would?). So let's operate in good faith instead. It's far more convenient and far less taxing.


The quote refers to a Faustian bargain offered by the Penns. They'd bankroll securing a township, as long as the township gave up the ability to tax them. The quote points out that by giving up the liberty to tax for short-term protection, the township would ultimately end up with neither the freedom to tax to fund further defense nor long-term security, so it might as well hold onto the ability to tax and just figure out the security issue.

Moral: don't give up freedoms for temporary gains. It never balances out in the end.


You don't deserve either.

The issue I have with this quote is that it implies that some people deserve freedom and others do not.

I think a better way to phrase it would be:

> he who gives up a little freedom for a little security ends up with neither


People are let off all the time. Not because of the law, but because who needs the work of chasing and punishing every lawbreaker in the land? In your own workplace, family, and friend circle, count how many times you have seen someone do something dumb (forget illegal) that has caused a loss or pain to someone else. And then count how many times you have done something about it.

I use the speed chime in my Model 3 car to alert me if I'm more than 2 km/h over the posted speed limit, which it infers from its database with the autopilot camera providing overrides.

If I'm over that when passing a speed camera in Victoria, AUS, I'll be pinged with a decent fine to arrive shortly.

Imagine if instead of a chime I got fined every single time, everywhere? All this new monitoring makes it a bit like that, at an extreme. I don't want to live in such a society.


There were two commenters that responded 15 minutes prior to your comment. I'd suggest starting there if you want to understand. Then if you disagree with those, you can comment and actually contribute to the conversation ;)

  > I notice that your comment history is all rapid-fire three-paragraph LLM responses
I looked after you said this, and those are all from today, in the last hour. And it's a stark change from their (very short) prior comment history.

In particular, these two comments are extremely suspicious[0,1]. I think even if they're not LLM generated, it highlights something likely wrong, which paseante themselves states!

  >> a long, detailed response in Slack implied the person had spent time thinking
There's 2 minutes between these comments, on different threads (I also noticed they did similar things in a few other threads as I typed this out). While the timing is reasonable for the number of words written, it does not seem adequate for reading the article and/or other comments. Personally, I find that kind of behavior rude, as it enshittifies the social space the rest of us are in[2].

[0] https://news.ycombinator.com/item?id=47392999

[1] https://news.ycombinator.com/item?id=47393012

[2] https://news.ycombinator.com/item?id=47393465


The parent didn't say "there's no legitimate uses of eval", they said "using eval should make people pay more attention." A red flag is a warning. An alert. Not a signal saying "this is 100% no doubt malicious code."

Yes, it's a red flag. Yes, there are legitimate uses. Yes, you should always interrogate evals more closely. All of these are true.
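
The "red flag, not verdict" point is easy to demonstrate. A hedged Python sketch (the same idea applies to JS's `eval`): harmless and hostile inputs look identical to `eval`, which is exactly why it warrants scrutiny, while a deliberately narrow evaluator like `ast.literal_eval` rejects smuggled code instead of running it.

```python
import ast

# eval runs arbitrary code, so harmless and hostile inputs look alike:
print(eval("2 + 3"))  # 5 -- a legitimate use

# A narrow evaluator only accepts literals, so code smuggled in a string
# is rejected rather than executed:
print(ast.literal_eval("[1, 2, 3]"))  # [1, 2, 3]
try:
    ast.literal_eval("__import__('os').system('echo pwned')")
except ValueError:
    print("rejected: not a literal")
```

Reviewing an `eval` call then amounts to asking whether the narrow tool would have sufficed.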


I'm not a JS person, but taking the line at face value, shouldn't it do nothing? Which, if I understand correctly, should never be merged. Why would you merge no-ops?

Here's the big reason GitHub should do it:

  It makes the product better
I know people love to talk money and costs and "value", but HN is a space for developers, not the business people. Our primary concern, as developers, is to make the product better. The business people need us to make the product better, keep the company growing, and beat out the competition. We need them to keep us from fixating on things that are useful but low priority and ensuring we keep having money. The contention between us is good, it keeps balance. It even ensures things keep getting better even if an effective monopoly forms as they still need us, the developers, to make the company continue growing (look at monopolies people aren't angry at and how they're different). And they need us more than we need them.

So I'd argue it's the responsibility of the developers, hired by GitHub, to create this feature because it makes the product better. Because that's the thing you've been hired for: to make the product better. Your concern isn't about the money, your concern is about the product. That's what you're hired for.


I'd say that this is also true from a money-and-costs-and-value perspective. Sure, all press is good press... but any number of stakeholders would agree that "we got some mindshare by proactively protecting against an emerging threat" is higher-ROI press than "Ars did a piece on how widespread this problem is, and we're mentioned in the context of our interface making the attack hard to detect."

And when the incremental cost to build a feature is low in an age of agentic AI, there should be no barrier to a member of the technical staff (and hopefully they're not divided into devs/test/PM like in decades past) putting a prototype together for this.


I agree and think it's extra important when you have specialized products. Experts are more sensitive to the little things.

Engineers and developers are especially sensitive. It's our job to find problems and fix them. I don't trust engineers who aren't a bit grumpy, because it usually means they don't know what the problems are (just like when they don't dogfood). Though I'll also clarify that what distinguishes a grumpy engineer from your average redditor is that they have critiques rather than just complaints. Being critique-oriented means searching for solutions to problems; you can't just stop at problem identification.

  > And when the incremental cost to build a feature is low in an age of agentic AI
I'm not sure that's even necessary. A very quick but still helpful patch would be to display invisible characters, just like we often do with whitespace characters. A diff can afford to be a bit noisier, and it's the perfect place for this even if you purposefully use invisible characters in your programming environment.
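
A sketch of what that rendering could look like: flag any format-category (Cf) code point, the Unicode class that zero-width spaces and bidi controls fall into. This is an illustration of the idea, not GitHub's actual renderer:

```python
import unicodedata

def reveal_invisibles(line: str) -> str:
    """Render invisible 'format' characters (Unicode category Cf:
    zero-width spaces, bidi controls, etc.) as visible <U+XXXX>
    escapes, the way editors render whitespace markers."""
    return "".join(
        f"<U+{ord(ch):04X}>" if unicodedata.category(ch) == "Cf" else ch
        for ch in line
    )

# A zero-width space hiding inside an identifier:
print(reveal_invisibles("ad\u200bmin"))  # ad<U+200B>min
```

Normal text passes through untouched, so the diff only gets noisier exactly where something invisible is lurking.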

Though we're also talking about an organization that couldn't merge a PR for a year that fixed a one liner. A mistake that should never have gotten through review. Seriously, who uses a while loop counter checking for equality?!? I'm still convinced they left the "bug" because it made them money


>Though we're also talking about an organization that couldn't merge a PR for a year that fixed a one liner. A mistake that should never have gotten through review. Seriously, who uses a while loop counter checking for equality?!? I'm still convinced they left the "bug" because it made them money

What is this in reference to? I tried to search for it but only found this comment. “Github while loop fix that was in review for a year”?


It was the safe_sleep function. Here's an issue on it [0]. IIRC there was an early issue, but really this is code that never should have made it in. Here's the conditional in question

  SECONDS=0
  while [[ $SECONDS -lt $1 ]]; do
      :
  done
Here's the fix... (s/!=/-lt/)

  while [[ $SECONDS -lt $1 ]];
It's a fallback sleep function for when you don't have the sleep command (or read, or ping): bash's special SECONDS variable keeps incrementing until the time has passed, while : does nothing (it will peg your CPU though).

Problem is, the check isn't guaranteed to see every value. It doesn't take a genius to figure out that < is infinitely better than != here, and you'd be right to guess that people did in fact waste thousands of dollars getting stuck in infinite loops that were entirely avoidable.
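
The failure mode is easy to simulate outside bash (an illustrative model, not the runner's actual code): if the counter can ever skip past the target, an equality test never fires, while a less-than test still terminates.

```python
def loop_terminates(target, step, use_lt, max_iters=1000):
    """Model the busy-wait: does the loop ever exit? `step` models the
    counter jumping by more than 1 (e.g. the process was descheduled
    and SECONDS advanced several ticks between checks)."""
    t = 0
    for _ in range(max_iters):
        done = (t >= target) if use_lt else (t == target)
        if done:
            return True
        t += step
    return False  # spun past the limit: effectively an infinite loop

print(loop_terminates(10, step=3, use_lt=True))   # True: exits at t=12
print(loop_terminates(10, step=3, use_lt=False))  # False: t skips 10 forever
```

With step=1 both guards behave identically, which is presumably why the `!=` version survived review.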

Here's the actual merge...[1]

At least it didn't take them months to merge this line, which should have existed from day 1 too (a very very well known pattern for writing bash scripts)[2]

[0] https://github.com/actions/runner/issues/3792

[1] https://github.com/actions/runner/pull/3157/changes

[2] https://github.com/meshtastic/firmware/pull/7922


FYI, in your reproduction, both of the conditionals are the same. But you are right, the initial implementation was `!=`

    while [[ $SECONDS != $1 ]]; do
became

    while [[ $SECONDS -lt $1 ]]; do

Sure, but looking at this from a purely business perspective: I wonder how many customers would panic or jump ship rather than be grateful when notified of an attack. But I think it could work as an optional feature for paid accounts if it were marketed properly.

At the end of the day it boils down to putting your users first.

Making the product better generally stems from acting in their interest, honing the tool you offer to provide the best possible experience, and making business decisions that respect their dignity.

Your comment talks a lot about product and I agree with it, I just mentioned this so we don't lose sight of the fact this is ultimately about people.


Tldr: Yeah it would make it better!

I hope I left the lead as the lead.

But I also think we've had a culture shift that's hurting our field, where engineers argue about whether we should implement certain features based on their monetary value (which is all fictional anyway). That's not our job. At best, it's the job of the engineering manager to convince the business people that a feature has not only utility value, but monetary value.


> Your concern isn't about the money, your concern is about the product. That's what you're hired for.

According to whom? Certainly not the people who did the hiring.

I somewhat agree that developers should optimize for something other than pure monetary value, but it has nothing to do with the hiring relationship, just the moral duty to use what power you have to make the world better. In general, this can easily conflict with "what you're hired for."

In this case I think showing suspicious (or even all) invisible Unicode in PRs is even a monetarily valuable feature, so the moral angle is mostly moot. And I would put the primary moral burden primarily on the product management either way, since they're the ones with the most power to affect the product, potentially either ordering the right thing to be done or stopping the devs when they try to do it on their own.


  > According to whom? Certainly not the people did the hiring.
Actually yes, according to them. Maybe they'll say that you should also be concerned about the money, but that just makes the business people redundant, now doesn't it? So is it better if I clarify and say that the product is your primary concern?

As a developer, you have a de facto primary concern with the product. They hire you to... develop. They do not hire you to manage finances; they hire you to manage the product. Doing both is more the job of the engineering manager. But as a developer, your expertise is in developing. I don't think this is a crazy viewpoint.

You were hired for your technical skills, not your MBA.

  > In this case I think showing suspicious (or even all) invisible Unicode in PRs is even a monetarily valuable feature
I agree. Though I also think this is true for many things that improve the product.

Also note that I'm writing to my audience.

  >> but HN is a space for developers, not the business people.
How I communicate with management is different, but I'm exhausted when I talk to fellow developers and the first question is about monetary value. That's not the first question on our side of things. Our first question is "is this useful?" or "does this improve the product?" If the answer is "yes", then I am /okay/ talking about monetary value. If it's easy to implement and helps the product, just implement it. If it requires time and the utility is valuable, then yes, it helps to formulate an argument about monetary value, since management doesn't understand any other language. But between developers, that is a rather crazy place to start (unless the proposal is clearly extremely costly, in which case say "I don't think you'd ever convince management" instead of "okay, but what is the 'value' of that feature?"). If I wanted to talk to business people, I'd talk to the business people, not another developer...

They might say that your job is to make the product "better", and they might even think they mean it, but I think in practice you'll find that their definition of "better" as it relates to products is pretty closely related to money, and further that they are the authorities on what makes the product "better" so you should shut up and do what they say. If you want to make the product actually better, you're going to have to defy them occasionally. That's not what you were hired for, that's just being a human with principles.

To be frank, I tried to address your point with my comment about the audience.

I very much disagree that you start with money and work backwards to technical problems. I do not think this approach would make you efficient at solving problems nor at increasing profits for the business.

And I still firmly believe they need us more than we need them. At the end of the day this is why they want AI coding agents to work out but I do not think that even in the best situation we'll end up in any different of a situation than COBOL. You can make developers more efficient, but replacing them requires an entirely different set of skills.

An MBA-type, with no programming background, has a better chance getting their photos taken with their iPhone in a museum than they do replacing a developer. I'm sure there will be some successful at it, but exceptions do not define the rule.


Talking about the audience completely misses my point. I'm not saying it's good to start with money and work back. I'm saying that's what companies actually do, and furthermore that's something the "dev audience" should understand about their employers.

> I do not think this approach would make you efficient at solving problems nor at increasing profits for the business.

If optimizing for profit doesn't result in profit, it's not the fault of the goal. That company was just incompetent. However many companies are, in fact, moderately competent, and optimizing for profit works fine for them. It even has a pretty heavy overlap with optimizing for good products, so that's nice.

It's fine. We agree on the ideal outcome in this situation.


What would really help is for people to understand that this is the difference between the "spirit of the law" and the "letter of the law".

People don't want the letter of the law enforced, they want the spirit. Using the example from above: speed limits were made for safety. They were set at one point in time, and, surprise, cars have gotten safer since. So people feel safer driving faster. They're breaking the letter of the law but not the spirit.

I actually like to use law as an example of the limitations of natural languages. Legalese is an attempt to formalize natural language, yet everyone seems to understand how hard it is to write good rules and how easy it is to find loopholes. But those loopholes are only exploitable if you enforce the letter of the law. Loopholes still exist under the spirit of the law, but they are much harder to exploit. The spirit is also more ambiguous, though, so not without its own faults. You have to strike a balance.


>cars have gotten safer

For their occupants, maybe, but not for pedestrians. Speeding laws are in part for protecting pedestrians.


In general, cars have also gotten safer for pedestrians[0]. Modern cars are lighter and made of plastic; there's better visibility and more sensors, and for most vehicles the shape of the car has improved things.

American trucks are an interesting counter example but that's a more complicated issue. (The source has a comment that you can infer this being a concern with trucks but there's also a lot of sources on this that you can easily find)

[0] https://www.fox7austin.com/news/data-40-year-high-auto-ped-d...

