I highly suspect that this is a Levoit air purifier. I recently purchased a Levoit 300S and had the same issue. The VeSync app connects the device directly to the internet, and you can control it via an API on their domain with a username and password. Your air purifier is then a backdoor into your home network. I just put mine on a guest network rather than go through all of this.
I suspect hiding the manufacturer/model was very much on purpose: they blurred the markings on the PCB and hid the domain name used for the manufacturer's API calls (and in the console logs as well).
I guess that is on purpose. After all, the article could easily be rewritten as a successful attack on the manufacturer's infra using a private key extracted from a device.
So the author's Home Assistant integration could be at risk of breaking quite quickly...
The rich might be more exposed to stock market fluctuations, but they are fundamentally better placed to leverage changes in the market to improve, or at the very least maintain, their position.
The real impact will be felt by the people on the lowest end of the financial wealth scale.
If the price of food, energy, clothing, housing, appliances, etc. goes up, that means a lot more to those with the lowest wealth. The wealthier you are, the less direct impact it is going to have on you.
If this were "eating the rich", we'd see direct taxation or asset seizures targeting the wealthy, not the raising of tariffs.
I can't speak for all orgs, but I'm guessing it was something along the lines of "There's plenty of competition out there, we're paying them good money and they're profitable - why would they screw up a good thing?"
Almost every organisation will choose current cost over potential future issues, to a greater or lesser degree. Backups: most will spend a reasonable amount on ensuring they have them and that they work. A cold-standby DR site: fewer will.
I doubt many are defensively engineering their organisational IT strategy to run two competing platforms at every level just in case one gets bought out by a mob that turns the pricing up by several multiples.
We had to figure this out the hard way, and ended up with this approach (approximately).
K8S provides two (well three, now) health checks.
How this interacts with ALB is quite important.
Liveness should always return 200 OK unless you have hit some fatal condition where your container considers itself dead and wants to be restarted.
Readiness should only return 200 OK if you are ready to serve traffic.
We configure the ALB to only point to the readiness check.
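For illustration, here is a minimal sketch of what those two endpoints can look like in a Go service (the paths and the atomic flag are placeholders; use whatever your framework provides):

```go
package main

import (
	"net/http"
	"sync/atomic"
)

// ready starts false, is flipped to true once startup checks pass,
// and is flipped back to false when we want the ALB to drain us.
var ready atomic.Bool

func main() {
	// Liveness: 200 unless the process considers itself dead and
	// wants Kubernetes to restart it.
	http.HandleFunc("/livez", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
	})

	// Readiness: 200 only when we actually want traffic. The ALB
	// target group health check points at this path.
	http.HandleFunc("/readyz", func(w http.ResponseWriter, r *http.Request) {
		if ready.Load() {
			w.WriteHeader(http.StatusOK)
			return
		}
		w.WriteHeader(http.StatusServiceUnavailable)
	})

	// ... run internal startup checks here, then:
	ready.Store(true)

	srv := &http.Server{Addr: ":8080"}
	_ = srv.ListenAndServe()
}
```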
So our application lifecycle looks like this:
* Container starts
* Application loads
* Liveness begins serving 200
* Some internal health checks run and set readiness state to True
* Readiness checks now return 200
* ALB checks begin passing and so pod is added to the target group
* Pod starts getting traffic.
Time passes. Eventually, for some reason, the pod needs to shut down.
* Kube calls the preStop hook
* PreStop sends SIGUSR1 to app and waits for N seconds.
* App handler for SIGUSR1 tells readiness hook to start failing.
* ALB health checks begin failing, and no new requests should be sent.
* ALB takes the pod out of the target group.
* PreStop hook finishes waiting and returns
* Kube sends SIGTERM
* App wraps up any remaining in-flight requests and shuts down.
This allows the app to shut down gracefully, and ensures the ALB doesn't send traffic to a pod that knows it is being shut down. A rough sketch of the app side of that choreography is below.
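Sketching in Go, and assuming the `ready` flag and `srv` server from the snippet above (the timeouts are illustrative):

```go
package main

import (
	"context"
	"net/http"
	"os"
	"os/signal"
	"sync/atomic"
	"syscall"
	"time"
)

// handleSignals implements the shutdown sequence described above:
// SIGUSR1 (sent by the preStop hook) fails readiness so the ALB
// health checks go red and the target drains; SIGTERM (sent by Kube
// once preStop returns) finishes in-flight requests and exits.
func handleSignals(srv *http.Server, ready *atomic.Bool) {
	sigs := make(chan os.Signal, 1)
	signal.Notify(sigs, syscall.SIGUSR1, syscall.SIGTERM)

	for sig := range sigs {
		switch sig {
		case syscall.SIGUSR1:
			ready.Store(false) // /readyz now returns 503; ALB deregisters us
		case syscall.SIGTERM:
			ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
			defer cancel()
			_ = srv.Shutdown(ctx) // wait for in-flight requests to complete
			return
		}
	}
}
```

You'd start this with `go handleSignals(srv, &ready)` before calling `srv.ListenAndServe()`; constructing an explicit `http.Server` is what lets you call `Shutdown` later.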
Oh, and on the Readiness check - your app can use this to (temporarily) signal that it is too busy to serve more traffic. Handy as another signal you can monitor for scaling.
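For example, a hypothetical back-pressure variant of the /readyz handler from the earlier sketch (the in-flight counter and threshold here are made up; track them however your middleware allows):

```go
// Report unready while too many requests are in flight, so the ALB
// temporarily stops routing new traffic to this pod.
var inFlight atomic.Int64 // incremented/decremented by request middleware
const maxInFlight = 200   // made-up threshold, purely illustrative

http.HandleFunc("/readyz", func(w http.ResponseWriter, r *http.Request) {
	if !ready.Load() || inFlight.Load() > maxInFlight {
		w.WriteHeader(http.StatusServiceUnavailable)
		return
	}
	w.WriteHeader(http.StatusOK)
})
```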
A lot of this seems like the fault of the ALB, is it? I had the same problem and eventually moved off it to Cloudflare tunnels pointed at service load balancers directly, which updated immediately when pods went bad. With a grace period for normal shutdowns, I haven't seen any downtime or errors for deploys.
The issue with the above setup (maybe I'm doing it wrong?) is that if a pod is removed suddenly, say if it crashes, then some portion of traffic gets errors until the ALB updates. And that can be an agonizingly long time, which seems to be because it's pointed at IP addresses in the cluster and not the service. It seemed like a shortcoming of the ALB. GKE doesn't have the same behavior.
I'm not the expert but found something that worked.
> A lot of this seems like the fault of the ALB, is it?
I definitely think the ALB Controller should be taking a more active hand in termination of pods that are targets of an ALB.
But the ALB Controller is exhibiting the same symptom I keep running into throughout Kubernetes.
The amount of "X is a problem because the pod dies too quickly before Y has a chance to clean up/whatever, so we add a preStop sleep of 30 seconds" in the Kubernetes world is truly frustrating.
If you are referring to the 30 seconds of kill time, that would be holding it wrong. As long as your process is PID 1, you can rig up your own process exit handlers, which completely resolves the problem.
Many people don’t run the main process in the container as PID 1, so this “problem” remains.
If it’s not feasible to remove something like a shell process from being the first thing that runs, exec will allow replacing the shell process with the application process.
> If you are referring to the 30 seconds of kill time, that would be holding it wrong. As long as your process is PID 1, you can rig up your own process exit handlers, which completely resolves the problem.
Maybe I am holding it wrong. I'd love not to have to do this work.
But I don't see how being PID 1 or not helps (and yes, for most workloads it is PID 1).
The ALB controller is the one that would need to deregister a target from the target group, and it won't until the pod is gone. So we have to force it by having the app do the functional equivalent with the readiness check.
If I understand correctly, because the ALB does its own health checks, you need to catch TERM, wait ~30s while returning non-ready so the ALB has time to notice, then clean up and shut down.
Kubernetes was written by people with a developer, not ops, background and is full of things like this. The fact that it became a standard is a disaster.
Maybe, or maybe orchestration and load balancing is hard. I think it's too simplistic to dismiss k8s development because the devs weren't ops.
I don't know of a tool that does a significantly better job at this without having other drawbacks and gotchas, and even if one did, it wouldn't negate the value k8s brings.
I have my own set of gripes with software production engineering in general, and especially with k8s, having seen first-hand how much effort big corps have to put in just to manage a cluster, but it's disrespectful to label this whole endeavour as disastrous.
The guys who wrote it are OK; they put in a lot of effort, and that's fine. If I understand things correctly, they were also compensated well. But effort based on some wrong assumptions makes a flawed product. A lot of people are then forced to use it because there is no alternative, or the alternatives are easily dismissed - behavior based, in turn, on a certain amount of propaganda and marketing. And that part is a disaster. This is not personal, btw.
Pod Readiness Gates, unless I'm missing something, only help on startup.
Unless something has changed since I last went digging into this, you will still have the ALB sending traffic to a pod that's in the Terminating state, unless you do the preStop bits I talked about at the top of the thread.
> You don't need the preStop scam as long as your workload respects SIGTERM and does lame-duck.
Calling it a scam is a bit much.
I think having to put the logic of how the load balancer works into the application is a crossing of concerns.
This kind of orchestration does not belong in the app, it belongs in the supporting infrastructure.
The app should not need to know how the load balancer works with regards to scheduling.
The ALB Controller should be doing this. It does not, and so we use preStop until/unless the ALB controller figures it out.
Yes, the app needs to listen for SIGTERM and wait until its outstanding requests are complete before exiting - but not more than that.
Presumably, because it'd be annoying waiting for lame duck mode when you actually do want the application to terminate quickly. SIGKILL usually needs special privileges/root and doesn't give the application any time to clean up/flush/etc. The other workaround I've seen is having the application clean up immediately upon a second signal, which I reckon could also work, but either solution seems reasonable.
Using SIGTERM is a problem because it conflicts with other behavior.
For instance, if you use SIGTERM for this then you have the potential for the app quitting during the preStop, which Kube will detect as a crash and restart your app.
We don't want to kill in-flight requests - terminating while a request is outstanding will result in clients connected to the ALB getting some HTTP 5xx response.
The AWS ALB Controller inside Kubernetes doesn't give us a nice way to specifically say "deregister this target".
The ALB will continue to send us traffic while we return 'healthy' to its health checks.
So we need some way to signal the application to stop serving 'healthy' responses to the ALB Health Checks, which will force the ALB to mark us as unhealthy in the target group and stop sending us traffic.
SIGUSR1 was an otherwise unused signal that we can send to the application without impacting how other signals might be handled.
I might be putting words in your mouth, so please correct me if this is wrong. It seems like you don't actually control the SIGTERM handler code. Otherwise you could just write something like:
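(Roughly, in Go, reusing the `ready` flag and `srv` server from the sketches upthread; the 30 seconds is just the ALB deregistration delay discussed above:)

```go
// Sketch of a SIGTERM-only handler: fail readiness first, give the
// ALB time to notice and deregister this target, then drain and exit.
term := make(chan os.Signal, 1)
signal.Notify(term, syscall.SIGTERM)

go func() {
	<-term
	ready.Store(false)           // /readyz now returns 503
	time.Sleep(30 * time.Second) // wait out ALB health checks + deregistration
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()
	_ = srv.Shutdown(ctx) // finish in-flight requests, then exit
}()
```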
I don't think the framework matters; it's an issue with the ALB controller itself, not the application.
The ALB controller doesn't handle gracefully stopping traffic (by ensuring target group de-registration is complete) before allowing the pod to terminate.
Without a preStop, Kube immediately sends SIGTERM to your application.
Or nginx. In both cases it’s probably more expensive than an ALB but you have better integration with the app side, plus traffic mesh benefits if you’re using istio. The caveat is that you are managing your own public-facing nodes.
I assume AWS's documentation about operating AWS's products and services is correct and accurate.
If I follow their guides and end up breaking something because it turns out they forgot to tell me about something important, I would consider that a breach of warranty.
The only reason I can think of where documentation should be behind a pay/auth-wall is if it was generated specifically for your circumstances as part of some kind of solutions architecture or technical support process.
> The only reason I can think of where documentation should be behind a pay/auth-wall is if it was generated specifically for your circumstances as part of some kind of solutions architecture or technical support process.
That’s exactly what this is, actually. These aren’t the AWS docs. The linked site is the AWS support forums in which AWS employees, AWS support and/or other known contributors from the AWS community will give you personalized responses to questions you have.
Some of these questions are more generic, and so aren’t personalized, but they’re still the result of a question-and-answer on the forum.
I disagree that these are specific enough to warrant being private. I'm thinking more along the lines of your very specific resources/application/architecture.
This is more like a curated version of Stack Overflow.
Slack does, or did, do this. I believe Trello, too.
I found out about this because I noticed our Slack bill was quite a lot lower over some Christmas/January period. It was because so many folks were away, and so they didn't charge us for seats that were inactive for > 30 days.
Yes, lots of businesses charge based on MAU. You can pre-pay for a certain number of MAUs, which will get you a lower price per user, but at the expense of paying for seats even if they aren't used. Which is fair enough.
Sorry, but the issue is not the standards for SMS/MMS. Yes, they're old standards, and have size limitations.
It's entirely up to Apple whether to make their iMessage platform available on other platforms.
They've shown they're quite invested in keeping it running on Apple hardware only, by going after and blocking any 3rd-party attempt to provide iMessage-compatible clients.
Don't trust incoming calls, text messages or emails.
Don't trust caller ID on your phone.
If someone calls you asking for information or to do something, ask for a case id or reference number. Hang up, call back on a number you get from a previous bill, back of your credit card, or by googling the company.
If anyone is pushing for something to be done urgently, stop. Hang up, don't take any action. Call a trusted other person and talk to them about it.
> Don't trust incoming calls, text messages or emails.
> Don't trust caller ID on your phone.
And if you're anyone of moderate fame, importance or cryptocurrency holdings, call back using a phone other than the one you received the call on. SS7 attacks remain relatively cheap, and redirecting an outgoing phone call placed to a number they know you're likely to call next is within the realm of feasible attacks.
I recently saved a friend from getting scammed when she wanted to buy an audiobook; we wanted to avoid Audible because of the DRM and limited device selection.
The first few results she got were some shady services offering really good deals. I looked them up on trustpilot and they weren't outright scams, but they aggressively pushed you into getting an expensive monthly subscription that was basically impossible to cancel.
I have an entirely separate phone and SIM for any service I think is important that demands a phone number. (Prepay; I top it up by €5 every few months. Sadly I can't automate that.)
I generally refuse to give any number if I can help it. I can count on one hand the number of unsolicited calls/texts I've gotten.
Knowing that might help influence purchasing decisions for those also interested in a "sleek" air purifier that contains an ESP32.