I took a course on antennas and wave propagation as part of my university studies. The professor told us that with just a bit of extra studying we'd be able to qualify for the license, given that we already knew the engineering and physics side of things. I got the license, but never did anything with it.
I worked for a company where we punched in using an iButton (it's a pretty neat 1-wire thing that fits on a key chain).
The punch clock system was logged and then at the end of the month, they printed out a single A4 sheet for every employee for us to make corrections and sign. Of course, someone had the unenviable job of going over all those and applying the corrections.
We also had to write down hours spent on different projects in a completely different system that wasn't at all integrated with the punch clock system.
At some point in the last couple of years that I worked there, they switched to Workday. That was not an improvement.
I worked for a company that went through 2 cycles like this and I can report that it had zero effect on us engineers.
My impression was that people were constantly being promoted into management and at some point we just had too many managers and that's why it was done. Of course, when you know this, the question becomes: why allow things to get to this point in the first place?
Presumably because people expect to be promoted periodically, so they pile up on the high end until the symptom gets corrected all at once. A realistic (but quite controversial) solution might be to emulate other companies that have done away with most of the promotion hierarchy. Different roles but more or less standardized pay across all employees and an understanding that promotions aren't a thing. Rather than climbing a ladder you're there to get shit done.
Once engineering starts out-earning them by a wide enough margin, management will become insecure. /s
I'm actually curious why this isn't a more commonplace practice. Why do we build systems that keep accumulating managers at the expense of skilled senior engineers?
Bisect is one of those things where if you're on a certain kind of project, it's really useful, and if you're not on that kind of project you never need it.
If the contributor count is high enough (or you're otherwise in a role for which "contribution" is primarily adjusting others' code), or the behaviors that get reported in bugs are specific and testable, then bisect is invaluable.
If you're in a project where buggy behavior wasn't introduced so much as grew (e.g. the behavior evolved A -> B -> C -> D -> E over time and a bug is reported due to undesirable interactions between released/valuable features in A, C, and E), then bisecting to find "when did this start" won't tell you much that's useful.

If you often have to write bespoke test scripts to run in bisect (e.g. because "test for presence of bug" is a process that involves restarting/orchestrating lots of services and/or debugging by interacting with a GUI), then you have to balance the time spent writing those against the time it'd take to figure out the causal commit by hand.

If you're in a project where you're personally familiar with roughly what was released when, or where the release process/community is well connected, it's often better to promote practices like "ask in Slack/the mailing list whether anyone has made changes to ___ recently; whoever pipes up will help you debug" rather than "everyone should be really good at bisect". Those aren't mutually exclusive, but both take work to instill in a community and thus have an opportunity cost.
This and many other perennial discussions about Git (including TFA) have a common cause: people assume that criticisms/recommendations for how to use Git as a release coordinator/member of a disconnected team of volunteers apply to people who use Git who are members of small, tightly-coupled teams of collaborators (e.g. working on closed-source software).
> If you're in a project where buggy behavior wasn't introduced so much as grew (e.g. the behavior evolved A -> B -> C -> D -> E over time and a bug is reported due to undesirable interactions between released/valuable features in A, C, and E), then bisecting to find "when did this start" won't tell you much that's useful.
I actually think that is the most useful time to use bisect. Since the cause isn't immediately obvious in that situation, trying to find it just by reading through the code is much harder.
I'm glad it works for you! I may not have described the situation super clearly: most bugs I triage are either very causally shallow (i.e. they line up exactly with a release or merge, or have an otherwise very well-known cause like "negative input in this form field causes ISE on submit"), or else they're causally well understood but not immediately solvable.
For example, take a made-up messaging app. Let's call it ButtsApp. Three big ButtsApp releases happened, in order, adding these features: 1) "send messages"; 2) "oops/undo send"; and 3) "accounts can have multiple users operating on them simultaneously". All of these were deemed necessary features and released over successive months.
Most of the bugs that I've spent lots of time diagnosing in my career are of the interacting-known-features variety. In that example, it would be "user A logs in and sends a message, but user B logs in and can undo the sends of user A" or similar. I don't need bisect to tell me that the issue only became problematic when multi-user support was released, but that release isn't getting rolled back. The code triggering the bug is in the undo-send feature that was released months ago, and the offending/buggy action is from the original send-message feature.
Which commit is at fault? Some combination of "none of them" and "all of them". More importantly: is it useful to know commit specifics if we already know that the bug is caused by the interaction of a bunch of separately-released features? In many cases, the "ballistics" of where a bug was added to the codebase are less important.
Again, there are some projects where bisect is solid gold--projects where the bug triage/queue person is more of a traffic cop than a feature/area owner--but in a lot of other projects, bugs are usually some combination of trivially easy to root-cause and/or difficult to fix regardless of whether the causal commit is identified.
Git bisect is a wonder, especially combined with its ability to potentially do the success/fail testing on its own (with the help of some command you provide).
It is a tragedy that more people don't know about it.
Yes, in fact, the protocol states that the client can queue up multiple requests. The purpose of this is to fill up the gap created by the RTT. It is actually quite elegant in its simplicity.
An extension was introduced for continuous updates that allows the server to push frames without receiving requests, so this isn't universally true for all RFB (VNC) software. This is implemented in TigerVNC and noVNC to name a few.
Of course, continuous updates have the buffer-bloat problem that we're all discussing, so they also implemented fairly complex congestion control on top of the whole thing.
Effectively, they just moved the role of congestion control over to the server from the client while making things slightly more complicated.
I have some experience with pushing video frames over TCP.
It appears that the writer has jumped to conclusions at every turn, and usually the wrong ones.
The reason that the simple "poll for jpeg" method works is that polling is actually a very crude congestion control mechanism. The sender only sends the next frame when the receiver has received the last frame and asks for more. The downside of this is that network latency affects the frame rate.
The frame rate issue with the polling method can be solved by sending multiple frame requests at a time, but only as many as will fit within one RTT, so the client needs to know the minimum RTT and the sender's maximum frame rate.
The RFB (VNC) protocol does this, by the way, although the part about rtt_min and the maximum frame rate isn't in the spec.
Now, I will not go through every wrong assumption, but as for this nonsense about P-frames and I-frames: with TCP, you only need one I-frame. The rest can all be P-frames. I don't understand how they came to the conclusion that sending only I-frames over TCP might help with their latency problem. Just turn off B-frames and you should be OK.
The actual problem with the latency was that they had frames piling up in buffers between the sender and the receiver. If you're pushing video frames over TCP, you need feedback. The server needs to know how fast it can send. Otherwise, you get pile-up and a bunch of latency. That's all there is to it.
The simplest, absolutely foolproof way to do this is to use TCP's own congestion control. Spin up a thread that does two things: encodes video frames and sends them out on the socket using a blocking send/write call. Set SO_SNDBUF on that socket to a value that's proportional to your maximum latency tolerance and the rough size of your video frames.
One final bit of advice: use ffmpeg's libraries (libavcodec, libavformat, etc.). It's much easier to understand what you're actually doing with those than with some convoluted GStreamer pipeline.
This reminds me of my own troubles with my AEG washing machine.
Probably the most important lesson (for someone who wants to fix their washing machine ASAP) that I learned from that was that there are non-user-serviceable error codes, and you need to perform an undocumented procedure on your machine to get at them. I wrote about it in more detail here: https://andri.yngvason.is/repairing-the-washing-machine.html
I would have loved to have an open source diagnostics dongle for my AEG. Maybe next time I'll try and make one. :)
After having used their repair service more than 10 times for my dishwasher during its warranty period, and having its front handle (well, the entire front panel really) break off after 2 more years, I'm never buying an AEG device ever again. I opened it up and fixed it myself, and oh my god, the whole thing just screamed cost cutting. They literally used the power button of a different model of machine, and then just mounted a different power button on top that presses the underlying one. And of course the load-bearing thing that holds the front panel and display onto the door frame is just two tiny bolts in the corners. Great idea to have the entire thing flex constantly in one place. Absolute junk.
This is the downward spiral for a lot of brands. They sell out to an investor, who coasts on the brand's reputational inertia while cutting cost and quality, etc. There are barely any good brands left. IIRC Miele is still one of the few good home-appliance brands, but they're also significantly more expensive. At least for the initial purchase; I'm sure it evens out long term.
How new is your washing machine? Mine (US market, Electrolux branded) displays fault codes through the main 8 segment LCD and makes component tests available from that same diagnostic menu. Service literature was available directly from Electrolux — from a paid service with a free trial, although there are plenty of youtube videos covering the same information.
The blog post that I linked to answers your question.
I was able to get at the diagnostics menu (also explained in the blog), but I had to interrogate a service tech in order to learn how to trigger it (also mentioned in the blog).
The manual did not contain this information and I could not find it via Google.
Where did you sign up for the "paid service with a free trial"?
I've used valgrind proactively as long as I've been programming in C and C++.
The errors that are caught by valgrind (and asan) will very often not cause immediate crashes or any kind of obvious bugs. Instead, they cause intermittent bugs that are very hard to catch otherwise. They may even be security vulnerabilities.
Another good reason for using it proactively is that small memory leaks tend to sneak in over time. They might not cause any issues, and most of them aren't even cumulative. However, having all those leaks in there makes it much harder to find big leaks when they happen, because you have to sift through all the insignificant ones before you can fix the one that's actually causing problems.