I think part of the problem with the bullying described in the article and the "approve everything" approach starts with the framing of code reviews.
Instead of "please write this code, submit a pull request, and I'll tell you what you did wrong," I have recently been saying to new coders: "take a shot at this problem, and when you've got something working let's get together and refine it. There's some context that takes time to learn, so we'll probably need to make a lot of changes, or even rewrite it, before release."
That way they know what's coming (a lot of changes), they know why it isn't a sign they're inadequate, and it's more about working together than passing judgment.
It has also helped reduce the massive delays while junior developers agonize over making something perfect.
Ask for some time to walk through what you've done with someone.
I've been on the receiving end of enough toxic code reviews to develop an aversion to doing them. So when I'm asked to do one, I really just ask "Did you test it? Do you want to demo it to me?" and then approve it.
When people demo to me, I can see their pain points, because they always point them out. "Oh yeah, I was having trouble getting this to work, as you can see, it's still a little slow" and it gives them an opportunity to ask for advice or merely to vent.
So if you want good, honest feedback, a demo is the way to go. Honestly, it's hard to really grok what a bit of code is doing just by reading it. And who has time to pull code and run through it solo when doing a review?
Super-critical code review culture can also end up producing balls of mud; the two are not exclusive. An example is a team I was on that would critique the hell out of syntax and grammar in javadocs, formatting of annotation arguments, stuff like that. Fine. But nobody ever looked outside the source file at how any of it fit together. No design planning, because "we do code reviews." The end result was tons of duplicated code everywhere.
I've submitted code that was reviewed like that, and it was super frustrating. Stuff like, "your javadoc uses invalid markup", when submitting code to a project that has almost zero javadoc to begin with.
I mean, sure, it doesn't render well as HTML, but it's readable alongside the code, and the rest of the project has none, so...
It might mean you're doing it basically correct, or it might mean you're touching parts that people don't care about much. The second is probably more common.
(It might also mean your team doesn't care about anything any more, in which case look for a better opportunity, unless you want to just cruise along for a while for whatever reason, in which case turn into a bare minimum performer like the rest of your team).
I'm a senior engineer at a well-known company, and my target is always to leave no comments on a patch. I only give question-comments if I absolutely must know something before merging, or please-fix comments if something is definitely wrong and I can prove it. Also, if it's a part of the system that poses little risk (for example, if it breaks I can assign the bug back to the author and patiently wait for a fix with no serious damage), my tolerance for bugs greatly increases. The last part I'm not too proud of, but frankly I don't have time to review everything. A "fix" for the last part would be to find a lower-ranked reviewer-buddy who has more time, to share knowledge with.
Thank you! I agree with so much in this, it really strikes the right balance of clean code vs productivity.
One of my pet peeves is a review that points out a bug/incorrectness without offering proof or suggesting the fix/correct way. The reviewer might observe a code smell, but it's so much better dealt with via a question-comment, e.g. "did you try...?" vs. "this looks wrong".
I generally aim to make every comment a question ("can we..."), and not to assume the author hasn't already thought of something. Unless I'm just stating an objective fact that is clearly unknown to them, e.g. "this duplicates library function x...".
Yeah the general style of putting comments into questions is much better.
But that's not exactly what I meant by question-comments. I sometimes genuinely don't know why they're doing something a particular way, or I have to ask for additional testing/evaluation/analysis/profiling results before I can decide that it's most likely not going to blow anything up. And as long as there's a low probability of damage (or rather, a low expected value of damage), I am happy with the change :)
What really helps is a style guide. It shuts down most of those pointless style nitpicks. God, I hate them.
I did it for three decades (3.5?) and the only code reviews were with friends of mine who worked at multiple companies together. The whole point of the review was to find outright errors, not to optimize. Scarcely any ego involvement.
It worked out fine I think and, mind you, it used to be a lot harder to push out a fix. A lot harder.
Yeah, maybe I should add: we would hang out in the hall and whiteboard our ideas and workflows before we went back into our respective cubicles/offices and started coding.
I think the issue is when a code review is “do this”. Especially when it’s small things like formatting, and I have to wait a day for the PR to be approved (or they catch another formatting issue).
It’s better when code reviews are suggestions, the reviewer doesn’t nitpick (or at least fixes that stuff themselves), and small things like formatting are enforced by a linter.
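To make the "formatting is a machine's job" point concrete, here's a minimal sketch of a pre-commit config that gates commits on an auto-formatter and a linter. This assumes a Python codebase and uses black and ruff as examples; the specific `rev` pins are illustrative, so swap in your own stack's tools and versions.

```yaml
# .pre-commit-config.yaml (illustrative)
repos:
  - repo: https://github.com/psf/black
    rev: 24.3.0          # example pin; use your project's version
    hooks:
      - id: black        # rewrites formatting, so reviewers never have to
  - repo: https://github.com/astral-sh/ruff-pre-commit
    rev: v0.4.4          # example pin
    hooks:
      - id: ruff         # catches the mechanical nitpicks before review
```

With something like this in CI, a human commenting "missing blank line here" becomes a configuration bug, not a review comment.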
> Honestly, I’m not sure all of them are experienced enough to give a useful critique
I wouldn't let this be a stumbling block to asking for more review. I think it's like writing an article or paper - once you've invested a lot of time into the thing, it gets harder to see the warts. Having another set of eyes can really help catch that stuff, even if they aren't at the same level of expertise as you. Back in my research days, the department I was in had a person whose job was basically reading research papers and providing feedback - in this case she usually focused on the paper structure, clarity of phrasing, and of course grammar - she was an expert in writing, not in CS. Yet looking back at drafts of papers before and after she had given input shows a massive improvement, even where the technical content was unchanged. Similarly, a smart intern can be a great source of input for making my code clearer and better, even if the algorithm doesn't fundamentally change.
Also, if like me you are prone to falling into ego traps, inexperienced folks can bypass the ego's defenses. A simple "hey why did you do it $complicated_way instead of $simple_way?" can hit a lot harder than "hey make this $simple_way" from a mentor. I think it's because I've invested time into teaching the intern and when they ask a good question like that my ego gets a boost from "I taught that kid well" to balance out the "OMG YOU THINK I'M TERRIBLE AT THIS LETS FIGHT" response. (It may also be tempered because I tend to view the intern's questions as genuine requests for input, so I start thinking of the explanation and realize $simple_way will in fact work just fine.)
Code reviews can also have the opposite effect: I sometimes ask mentees of mine to review my PRs - not necessarily to get their feedback, but to get their questions and to help transfer knowledge.
I used to do that when I started working on open source as well, doing code reviews to learn the code base (often I would not submit the review, so as not to create noise).
I don't think it's something you should have to ask for. I mean, if you went to a nightclub and the doorman let you in, would you say, "Hey, aren't my shoes a little out of fashion?" To beat the analogy to death, I suppose if I were walking into a nightclub naked, on fire, and carrying a bomb, I'd be a bit worried about the establishment if the doorman didn't refuse me.
So: is your code naked, on fire and carrying a bomb?
IME, if the tests don't fail the first time you run the code, it says that the tests aren't actually hitting the new behavior. (Maaaaybe 1 in 100 times the code is actually right the first time.)
Likewise, if there's no comments on a pull request, it hasn't been read...
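One concrete way to apply the "tests should fail first" heuristic is red-green style: run the new test against the old behavior (or a stub), watch it fail, then write the change. A minimal sketch with a hypothetical `slugify` function, not something from this thread:

```python
def slugify(title: str) -> str:
    """New behavior under review: lowercase the title and hyphenate spaces."""
    return title.lower().replace(" ", "-")

def test_slugify_new_behavior():
    # Run this test BEFORE writing the body above: against a stub like
    # `return title` the assertion fails, which is the evidence that the
    # test actually exercises the new behavior rather than passing vacuously.
    assert slugify("Hello World") == "hello-world"

test_slugify_new_behavior()
```

If this test had passed on the very first run against the unmodified code, that would be the signal the parent comment describes: the test isn't hitting the new path at all.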
I'd have agreed once, but at some point in my life I realized most of my code really was working right the first time, even messy javascript. Anyone else have this experience?
> I was tested 2 weeks ago for antibodies, my results were a “76” (I assume percentage?).
It does not mean percentage. The fact that we cannot directly relate antibody levels to immunological protection is one of the reasons we can't [yet?] use antibody tests to "count" as immunity.
While I posted above about being on the fence about vaccinating my kid, I was in a similar case as you. I'm in my late 40's, and I came down with covid early in the pandemic, before general availability of PCR tests, but I did manage to get an antibody test, and I was positive.
I did get vaccinated, because the vaccine reinforces antibodies, which are correlated with fewer complications upon subsequent infection. There are some epitopes on SARS-CoV-2 which closely resemble parts of our own clotting factors and interferons, and the randomized nature of our immune response can lead to some less desirable antibodies being present. So it seems a vaccine only helps.
One thing about getting vaccinated after recovery from covid, though, is that both the first and second doses knock you on your butt, not just the second.
> the randomized nature of our immune response can lead to some less desirable antibodies being present
Exactly! The point of many COVID vaccines is to teach the immune system to target one very specific and very important protein: the spike protein. Without it, the virus can't enter cells, and it isn't present in our own bodies.
We have no idea what kind of immune response any given person might get with an actual infection. Reinfections are a thing.
Your last paragraph makes little sense. We have no idea what kind of immune response any given person might get with a vaccine (any vaccine) either. What kind of test is this? And infections after COVID vaccines are sadly also a thing.
People get sick with COVID multiple times, and natural immunity is more hit-or-miss with regard to protection against variants. Reinfections are also frequently nastier than the first infection. Vaccination also costs absolutely nothing, and at worst acts as an additional safety net that makes you feel uncomfortable for a few days.
To provide additional protection (especially since given that timing it almost certainly wasn't the delta variant that you got). Maybe you are protected by your previous infection and maybe you aren't.
It was my understanding that you basically just punch these mRNA sequences into a computer and you could have a new formulation ready for manufacture over the weekend. If it's a regulatory issue, maybe the regulations should be adjusted. Why have a booster of the original stuff that's not working as well?
It's really not that simple. I'm not current, but I studied immunology as part of medical school a long time ago.
The Delta variant replicates really quickly - that is its evolutionary advantage over other strains; it's not that your immune system is weaker against it. What happens is that at first it ramps up a lot faster than your immune response, and during this period it produces a lot more virions, so your immune system has a bigger fight on its hands.
There are many immunologists working on identifying the nature of the immune response to SARS-CoV-2, and so far something on the order of 80 unique antibodies have been identified which are induced by vaccination, and about 200 unique antibodies which are induced by actual infection. The convalescent antibodies target the nucleocapsid (the ball) more than the spike protein.
It appears that the delta variant escapes approximately 1/8 of the antibodies in both cases, so you still have a very strong immune response, just one which doesn't ramp up as fast as the virus.
Now, as for the mRNA sequences: we can create them in a DNA printer (then transcribe to RNA); however, this triggers new safety and efficacy trials at the FDA.
Are you willing to bet that the tweaked Moderna vaccine (trials of it were announced, right?) shows worse efficacy than the original? Odds 51:49 in your favor; the null hypothesis of equal efficacy counts as a tie. That's generous, I think, if nobody knows.
Most proteins are safe, of course, and the odds that any particular new mRNA target is safe is pretty good. But it's not trivial to predict with computational models, and so it requires testing.
I was tested 2 weeks ago for antibodies after receiving the vaccine. My results were "0." And yet I experienced all the typical side effects from the vaccine, indicating that I probably had an immune response to it.
Nobody knows what any particular antibody count means with respect to risk to your health from a future COVID infection. Instead, we need to be looking at the hard statistics about who's getting admitted into hospitals and who's dying.
The data I've seen is not great, in part because reinfection is pretty rare. This study [0] indicating rates of 31.0% and 3.4% respectively is the best I've seen, but I'd hazard a guess that there's some sampling bias - i.e., healthier people are less likely to get symptomatic reinfection and therefore to get tested, so unhealthy people are overrepresented in the data - so those rates are probably overestimates.
It is a free vaccine that could save your life, has been safely administered to billions of people and greatly reduces the odds of you getting infected, transmitting to others or ending up in hospital.
What is the downside? Or are you just apathetic about getting it?
I've seen several patients that were sicker the second time they got COVID, including people dying in their 20s and 30s. I can't speculate on what their antibody tests might have showed beforehand.