Hacker News | paranoidrobot's comments

As far as I can tell it doesn't mention which air purifier.

Knowing that might help influence purchasing decisions for those also interested in a "sleek" air purifier that contains an ESP32.


I highly suspect that this is a Levoit air purifier. I recently purchased a Levoit 300S and had the same issue. The VeSync app connects the device directly over the internet and you can control it via an API on their domain with a username and password. Your air purifier is then a backdoor to your home network. I just put it on a guest network now rather than go through this.


I suspect hiding the manufacturer/model was very much on purpose, they blurred the markings on the PCB and hid the domain name for the manufacturer's API calls (and in the console logs as well).


I agree. Hopefully it helps keep the article from being taken down, because it's a very good primer on getting any ESP-based device working locally.


I guess that is on purpose. After all, the article could easily be rewritten as a successful attack on the manufacturer's infra using a private key extracted from a device.

So the author's Home Assistant integration could be at risk of stopping working quite quickly...


There are YouTube channels with old TV ads. Some of those will be non-existent anywhere else.

You could also go down the Turbo Encabulator rabbit hole:

https://www.youtube.com/watch?v=Ac7G7xOG2Ag

https://www.youtube.com/watch?v=RXJKdh1KZ0w


The rich might be more exposed to stock market fluctuations, but they are fundamentally in a better place to leverage the changes in the market to either improve or at the very least maintain their position.

The real impact will be felt by the people on the lowest end of the financial wealth scale.

If the price of food, energy, clothing, housing, appliances, etc goes up - that means a lot more to those with the lowest wealth. The more wealthy you are, the less that is going to have a direct impact on you.

If this were "eating the rich" then we'd see direct taxation or asset seizing from the wealthy, not raising of tariffs.


> What did you expect?

I can't speak for all orgs, but I'm guessing it was something along the lines of "There's plenty of competition out there, we're paying them good money and they're profitable - why would they screw up a good thing?"

Almost every organisation will choose current cost over potential future issues, to a greater or lesser degree. Most will spend a reasonable amount on ensuring their backups work; fewer will fund a cold-standby DR site.

I doubt many are defensively engineering their organisational IT strategy to run two competing platforms at every level, just in case one gets bought out by an outfit that turns up the pricing by several multiples.


We had to figure this out the hard way, and ended up with this approach (approximately).

K8S provides two (well, three now, counting startup probes) health checks.

How this interacts with ALB is quite important.

Liveness should always return 200 OK unless you have hit some fatal condition where your container considers itself dead and wants to be restarted.

Readiness should only return 200 OK if you are ready to serve traffic.

We configure the ALB to only point to the readiness check.

So our application lifecycle looks like this:

* Container starts

* Application loads

* Liveness begins serving 200

* Some internal health checks run and set readiness state to True

* Readiness checks now return 200

* ALB checks begin passing and so pod is added to the target group

* Pod starts getting traffic.

time passes. Eventually for some reason the pod needs to shut down.

* Kube calls the preStop hook

* PreStop sends SIGUSR1 to app and waits for N seconds.

* App handler for SIGUSR1 tells readiness hook to start failing.

* ALB health checks begin failing, and no new requests should be sent.

* ALB takes the pod out of the target group.

* PreStop hook finishes waiting and returns

* Kube sends SIGTERM

* App wraps up any remaining in-flight requests and shuts down.

This allows the app to do graceful shut down, and ensures the ALB doesn't send traffic to a pod that knows it is being shut down.

Oh, and on the Readiness check - your app can use this to (temporarily) signal that it is too busy to serve more traffic. Handy as another signal you can monitor for scaling.
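For reference, the lifecycle above maps onto a pod spec roughly like the following sketch. The probe paths, port, timings, and the kill-then-sleep preStop command are illustrative guesses (and assume the app runs as PID 1), not the author's actual configuration:

```yaml
containers:
  - name: app
    # Liveness: only fails on fatal conditions, which triggers a restart.
    livenessProbe:
      httpGet: { path: /livez, port: 8080 }
      periodSeconds: 10
    # Readiness: the ALB target-group health check points here too.
    readinessProbe:
      httpGet: { path: /readyz, port: 8080 }
      periodSeconds: 5
    lifecycle:
      preStop:
        exec:
          # Tell the app (assumed PID 1) to start failing readiness, then
          # wait long enough for the ALB to notice and drain the target.
          command: ["sh", "-c", "kill -USR1 1 && sleep 30"]
```

With something like this in place, the sleep gives the ALB time to mark the target unhealthy and stop sending traffic before Kube delivers SIGTERM.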



A lot of this seems like the fault of the ALB, is it? I had the same problem and eventually moved off of it to cloudflare tunnels pointed at service load balancers directly, which changed immediately when pods went bad. With a grace period for normal shutdowns, I haven't seen any downtime for deploys or errors.

The issue with the above setup (maybe I'm doing it wrong?) is that if a pod is removed suddenly, say if it crashes, then some portion of traffic gets errors until the ALB updates. And that can be an agonizingly long time, which seems to be because it's pointed at IP addresses in the cluster and not the service. It seems like a shortcoming of the ALB. GKE doesn't have the same behavior.

I'm not the expert but found something that worked.


> A lot of this seems like the fault of the ALB, is it?

I definitely think the ALB Controller should be taking a more active hand in termination of pods that are targets of an ALB.

But the ALB Controller is exhibiting the same symptom I keep running into throughout Kubernetes.

The amount of "X is a problem because the pod dies too quickly before Y has a chance to clean up/whatever, so we add a preStop sleep of 30 seconds" in the Kubernetes world is truly frustrating.


If you are referring to the 30-second kill time, that would be holding it wrong. As long as your process is PID 1, you can rig up your own process exit handlers, which completely resolves the problem.

Many people don’t run the main process in the container as PID 1, so this “problem” remains.

If it’s not feasible to remove something like a shell process from being the first thing that runs, exec will allow replacing the shell process with the application process.


> If you are referring to the 30-second kill time, that would be holding it wrong. As long as your process is PID 1, you can rig up your own process exit handlers, which completely resolves the problem.

Maybe I am holding it wrong. I'd love not to have to do this work.

But I don't see how being PID 1 or not helps (and yes, for most workloads it is PID 1)

The ALB controller is the one that would need to deregister a target from the target group, and it won't until the pod is gone. So we have to force it by having the app do the functional equivalent with the readiness check.


Yeah, exactly. We just catch the TERM, clean up, and then shut down. But the rest of the top post in the thread is right on.


If I understand correctly, because ALB does its own health checks, you need to catch TERM, wait 30s while returning non-ready for ALB to have time to notice, then clean up and shut down.


Kubernetes was written by people with a developer, not ops, background and is full of things like this. The fact that it became a standard is a disaster.


Maybe, or maybe orchestration and load balancing is hard. I think it's too simplistic to dismiss k8s development because the devs weren't ops.

I don't know of a tool that does a significantly better job at this without having other drawbacks and gotchas, and even if it did it doesn't void the value k8s brings.

I have my own set of gripes with software production engineering in general and especially with k8s, having seen first-hand how much effort big corps have to put in just to manage a cluster, but it's disrespectful to qualify this whole endeavour as disastrous.


The guys who wrote it are OK; they put in a lot of effort, and that's fine. If I understand things correctly, they were also compensated well. But effort based on some wrong assumptions makes a flawed product. A lot of people are then forced to use it because there is no alternative, or the alternatives are easily dismissed - behavior based, in turn, on a certain amount of propaganda and marketing. And that part is a disaster. This is not personal, btw.


> A lot of this seems like the fault of the ALB, is it?

People forget to enable pod readiness gates.


Pod Readiness Gates, unless I'm missing something, only help on startup.

Unless something has changed since I last went digging into this, you will still have the ALB sending traffic to a pod that's in terminating state, unless you do the preStop bits I talked about at the top of the thread.

https://kubernetes-sigs.github.io/aws-load-balancer-controll...


> Pod Readiness Gates, unless I'm missing something, only help on startup.

Also allows graceful rollout of workload.

> You will still have the ALB sending traffic to a pod that's in terminating state

The controller watches endpoints and will remove your pod from target group on pod deletion.

You don't need the preStop scam as long as your workload respects SIGTERM and does lame-duck.


> You don't need the preStop scam as long as your workload respects SIGTERM and does lame-duck.

Calling it a scam is a bit much.

I think having to put the logic of how the load balancer works into the application is a crossing of concerns. This kind of orchestration does not belong in the app, it belongs in the supporting infrastructure.

The app should not need to know how the load balancer works with regards to scheduling.

The ALB Controller should be doing this. It does not, and so we use preStop until/unless the ALB controller figures it out.

Yes, the app needs to listen for SIGTERM and wait until its outstanding requests are completed before exiting - but not more than that.
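As a sketch of that division of labour (Python used only for illustration; the handler and port are made up, not the author's stack), the app's SIGTERM handler just finishes in-flight work and exits, with no load-balancer knowledge at all:

```python
import signal
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):  # keep the example quiet
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)  # port 0 = any free port

def handle_sigterm(signum, frame):
    # Finish in-flight requests, then stop accepting new connections.
    # shutdown() blocks until the serve loop exits, so run it off-thread.
    threading.Thread(target=server.shutdown).start()

signal.signal(signal.SIGTERM, handle_sigterm)
# In the real app, server.serve_forever() runs here; it returns once
# shutdown() has been called and outstanding requests have completed.
```

Everything about draining the ALB target group stays outside this code, in the preStop hook and the readiness check.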


Just curious:

- so if pod goes to terminating state

- with gates enabled, the ALB controller should remove it from targets instantly, because it listens to the k8s API pod-changes stream?

In my experience there was ALWAYS some delay, even a small one, in high-frequency systems, which caused 500s.

We solved that with an internal API gateway; aws+iptables+cni was always causing issues in every setup without it.


Racing against an ASG/ALB combo is always a horrifying adrenaline rush.


Nobody should be using ASGs anymore. EKS Auto Mode or Karpenter.


Why the additional SIGUSR1 vs just doing those (failing health, sleeping) on SIGTERM?


Presumably, because it'd be annoying waiting for lame duck mode when you actually do want the application to terminate quickly. SIGKILL usually needs special privileges/root and doesn't give the application any time to clean-up/flush/etc. The other workaround I've seen is having the application clean-up immediately upon a second signal, which I reckon could also work, but either solution seems reasonable.


Yeah, there were a bunch of reasons.

Using SIGTERM is a problem because it conflicts with other behavior.

For instance, if you use SIGTERM for this then you have a potential for the app quitting during the preStop, which will be detected as a crash by Kube and so restart your app.


> which will be detected as a crash by Kube and so restart your app.

I don't think kubernetes restarts pods that have been marked for termination


We have a number of concurrent issues.

We don't want to kill in-flight requests - terminating while a request is outstanding will result in clients connected to the ALB getting some HTTP 5xx response.

The AWS ALB Controller inside Kubernetes doesn't give us a nice way to specifically say "deregister this target"

The ALB will continue to send us traffic while we return 'healthy' to its health checks.

So we need some way to signal the application to stop serving 'healthy' responses to the ALB Health Checks, which will force the ALB to mark us as unhealthy in the target group and stop sending us traffic.

SIGUSR1 was an otherwise unused signal that we can send to the application without impacting how other signals might be handled.


So I might be putting words in your mouth, so please correct me if this is wrong. It seems like you don’t actually control the SIGTERM handler code. Otherwise you could just write something like:

  sigterm_handler() {
    make_healthcheck_fail
    sleep 20
    stop_web_server
    exit 0
  }
Technically the server shutdown at the end doesn’t even need to be graceful in this case.


Curious, which framework are you using? I've had no issues with NodeJS, Go, and Rust apps directly behind ALB with IP-Target.


I don't think the framework matters; it's an issue with the ALB controller itself, not the application.

The ALB controller doesn't handle gracefully stopping traffic (by ensuring target group de-registration is complete) before allowing the pod to terminate.

Without a preStop, Kube immediately sends SIGTERM to your application.


Istio automates this (at the risk of adding more complexity)


Or nginx. In both cases it’s probably more expensive than an ALB but you have better integration with the app side, plus traffic mesh benefits if you’re using istio. The caveat is that you are managing your own public-facing nodes.


You can have the ALB terminate HTTPS with a publicly trusted CA cert, then relay to Istio.

We do ALB Ingress to Istio Gateways

If you have a CDN, DDoS, or WAF you might be able to terminate there as well

Extra hop tho


> App handler for SIGUSR1 tells readiness hook to start failing.

Doesn't the kubernetes pod shutdown already mark the pod as not-ready before it calls the pre-stop hook?


> It doesn't solve data sovereignty in a practical way for anything other than backups

In my experience, customers who have specific requirements about data sovereignty are also explicitly including backups in those requirements.


> like guarantees of correctness,

Yeah, no. I have to strongly disagree with this.

I assume AWS's documentation about operating AWS's products and services is correct and accurate.

If I follow their guides and end up breaking something because it turns out they forgot to tell me about something important, I would consider that a breach of warranty.

The only reason I can think of where documentation should be behind a pay/auth-wall is if it was generated specifically for your circumstances as part of some kind of solutions architecture or technical support process.


> The only reason I can think of where documentation should be behind a pay/auth-wall is if it was generated specifically for your circumstances as part of some kind of solutions architecture or technical support process.

That’s exactly what this is, actually. These aren’t the AWS docs. The linked site is the AWS support forums in which AWS employees, AWS support and/or other known contributors from the AWS community will give you personalized responses to questions you have.

Some of these questions are more generic, and so aren’t personalized, but they’re still the result of a question-and-answer on the forum.


> That’s exactly what this is, actually.

I disagree that these are specific enough to warrant being private. I'm thinking more along the lines of your very specific resources/application/architecture.

This is more like a curated version of Stack Overflow.


Slack does, or did, do this. I believe Trello, too.

I found out about this because I noticed our Slack bill was quite a lot lower over some Christmas/January period. It was because so many folks were away, and so they didn't charge us for seats that were inactive for > 30 days.


Yes, lots of businesses charge based on MAU. You can pre-pay for a certain MAU, which will get you a lower price per user, but at the expense of paying even if they aren't used. Which is fair enough.


Sorry, but the issue is not the standards for SMS/MMS. Yes, they're old standards, and have size limitations.

It's entirely up to Apple whether to make their iMessage platform available on other platforms.

They've shown they're quite invested in keeping it running only on Apple hardware, by going after and blocking any 3rd-party attempt to provide iMessage-compatible clients.


The general advice I give is:

Don't trust incoming calls, text messages or emails.

Don't trust caller ID on your phone.

If someone calls you asking for information or to do something, ask for a case id or reference number. Hang up, call back on a number you get from a previous bill, back of your credit card, or by googling the company.

If anyone is pushing for something to be done urgently, stop. Hang up, don't take any action. Call a trusted other person and talk to them about it.


> Don't trust incoming calls, text messages or emails.

> Don't trust caller ID on your phone.

And if you're anyone of moderate fame, importance or cryptocurrency holdings, call back using a phone other than the one you received the call on. SS7 attacks remain relatively cheap, and redirecting an outgoing call placed to a phone number they know you're likely to call next is within the realm of feasible attacks.


>by googling the company

And remember it's going to be the 4th or 5th link down, not the first.


This is one of many reasons why ad blocking isn't just cosmetic: it's a best practice for security too!


So much this.

I recently saved a friend from getting scammed when she wanted to buy an audiobook; we wanted to avoid Audible because of the DRM and limited device selection.

The first few results she got were some shady services offering really good deals. I looked them up on Trustpilot and they weren't outright scams, but they aggressively pushed you into an expensive monthly subscription that was basically impossible to cancel.


Yeah the first will be the scammers with a highly targeted ad buy, and who can even tell the difference between ads and search results these days?


All of my search results are search results, because of ad blockers.

Now, many of my search results suck due to SEO and whatnot, but that’s a different story.


> If anyone is pushing for something to be done urgently, stop. Hang up, don't take any action. Call a trusted other person and talk to them about it.

This is honestly the #1 piece of advice to give friends and family.

Almost every scam is predicated on urgency.


I have an entirely separate phone and SIM for any service I think is important that demands a phone number. (Prepaid; I top it up by €5 every few months. Sadly I can't automate that.)

I generally refuse to give any number if I can help it. I can count on one hand the number of unsolicited calls/texts I've gotten.


I really wish phones would change the UI to make it more obvious that caller ID shouldn't be trusted.

