Hacker News | iClaudiusX's comments

I only use Amazon to browse selections, and then buy direct from the brand's site. Buying only from Amazon made sense in the bad old days when most sites didn't have their act together with payment processing and shipping. Thankfully, those days are long gone.

It's basically the reverse of the "browse in store, buy on Amazon for cheaper" experience.


I tried this when shopping for a kurta pajama (Indian dress). The Amazon price for the same thing was about $15; direct was about $50!


It used to be that a release was tried and tested. Now everyone's an unpaid QA tester months after the official beta period is over. And the folk wisdom of "wait until x.1" keeps getting pushed further and further back to the point of futility.


That's interesting because otherwise tedious chores like washing dishes or cleaning can feel meditative and therapeutic after I've had my coffee.


I came to a similar crossroads when I realized I did not want to switch from Windows 7 to 10 once security patches end in 2020.

I've been using Kubuntu 18.10 since January and have found it so much easier to get things done. I don't have to constantly reset privacy and telemetry settings to keep the OS's nose out of my life. It has a real shell, ssh, and terminal so I no longer have to muck about with cygwin, putty, and winscp. I can customize the UI to my liking, e.g. dark mode that's consistent across applications.

About the only thing that doesn't work is about 1/3rd of my Steam game library but it's a tiny sacrifice for the efficiency gained with a proper OS that respects your time and privacy.


The opt-out for rerouting is a truly awful design choice.

One time I was driving up through NYC during a heavy downpour where visibility was only a few car-lengths ahead. Traffic had slowed to about 30 mph on the NJTP and most people thankfully had their hazard flashers on. Then Google Maps decided to distract me and pile on the anxiety with multiple "press no in the next 10 seconds or else!" re-routing prompts to save at most 5 minutes.

Just yesterday I got a pop-under to "confirm" that a speed trap was still in effect, which of course obscured some vital information on screen for a few minutes until it finally gave up.

Demanding that your user take their attention off the road to fumble with a touch screen just to stay on the route they're comfortable with is dangerous, and if I opt out once it should at the very least snooze those alerts.


The increasing sprawl of non-search widgets invading the search result page reminds me of the AOL years where "the web" was funneled through a narrow portal controlled by one entity.

Having to scroll down past ads, unrelated news, unrelated youtube videos, and ever more of these info boxes has pushed the actual content I'm looking for out to the second page. It's made it much easier to use ddg as default and use the !g flag only when absolutely necessary.


And the downfall of AOL was Google, because Google had a better product. In order for Google to fall, you need a product that's better (or more of what people want); you need to give them a reason to change their search engine.


I have no trouble believing Google is going to fall.

Their results have gone into the toilet - I ragequit Google search about once a day and do something else like forum searches.


I stick with Google but increasingly try to tweak my searches to hit forums. They're just that much less likely to be made-for-AdSense content by a copywriter paraphrasing other information from the web.


Do you have any tips for this sort of search tailoring? It's something I try for, but I've yet to find any particularly good keywords to leverage.


Usually I add "reddit" to the search phrase and try to find threads / user-generated and hopefully more organic content this way.


reddit is good, and so is just "forum" which will turn up specialty forums that haven't been absorbed by one of the Borgs yet.


I often just add "whirlpool" which is a fairly reliable Australian forum that started covering telcos but these days will have things about cars, home maintenance, personal health, etc. Or I add "forum" and that can be enough to tilt the results.


I usually add site:reddit.com or site:news.ycombinator.com etc. Google actually had a way to search discussion groups, but they removed that feature, I suppose because forums don't pay for their ads.
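The site: tips in this thread can be combined into a single query. A minimal sketch, assuming Google's documented `site:` and `OR` operators; the site list is illustrative, not a recommendation:

```python
from urllib.parse import quote_plus

# Illustrative list of discussion sites mentioned in the thread.
FORUM_SITES = ["reddit.com", "news.ycombinator.com", "forums.whirlpool.net.au"]

def forum_query(query, sites=FORUM_SITES):
    """Build a search URL that restricts results to the given sites
    by joining site: operators with OR."""
    site_filter = " OR ".join("site:" + s for s in sites)
    return "https://www.google.com/search?q=" + quote_plus(query + " (" + site_filter + ")")
```

For example, `forum_query("astra tail light size")` yields a URL whose query string limits matches to those three domains.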


Same. I can't tell if my questions are just getting more specific and technical, but Google search results have been getting pretty useless in the past year or so.


I love how google likes to completely ignore what I'm trying to search for. I wish I could think of an example because it happens to me often, but I can't so I'll make one up.

Imagine you're searching for tail lights for your car or something, but you don't know the size, so you search "Astra tail light size". This might bring up headlights. Wrong but no matter, you'd go on to google "Astra tail light size -headlight -head" or something.

What Google seems to have been doing to me recently is ignoring those negated terms, ignoring quotes, and just giving me the same results again and again. It's really getting annoying. Google seems to assume it knows what I'm looking for, and that my search query is just completely wrong and not what I want.

Note that the car stuff is just an example, I'd expect Google to not give you headlights the second time. It generally but not exclusively happens to me when searching things that are more technical. ESPECIALLY when it's a consumer level thing I'm trying to get info on, Google likes to assume it's giving you errors and you're trying to fix it. Which makes sense for most users, but god it's frustrating when every combination of advanced search parameters you try does nothing!

Google search needs a checkbox or something to turn off its cleverness and just do an actual search.
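The behavior being asked for here (quoted phrases honored exactly, `-terms` actually excluded) can be made concrete with a small local filter. This is a sketch of what a strict, "no cleverness" match would look like, not how Google implements anything:

```python
import re

def matches_verbatim(text, query):
    """Strict interpretation of a search query:
    "quoted phrases" must appear exactly, -terms must be absent,
    and all remaining bare terms must appear (case-insensitive)."""
    text_l = text.lower()
    phrases = re.findall(r'"([^"]+)"', query)          # exact phrases
    rest = re.sub(r'"[^"]+"', " ", query).split()      # everything else
    negated = [t[1:] for t in rest if t.startswith("-")]
    required = [t for t in rest if not t.startswith("-")]
    return (all(p.lower() in text_l for p in phrases)
            and all(n.lower() not in text_l for n in negated)
            and all(t.lower() in text_l for t in required))
```

Under these rules, a result titled "Astra headlight bulb size" would fail the query `astra "tail light" -headlight` twice over: the quoted phrase is missing and the negated term is present.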


Absolutely. I was trying to figure out something to do with timeouts in an SMT solver called Yices, so I had search strings about signals and alarm and Yices - of course. Google decided that this was a generalized programming question and displayed a lot of stuff about signals and alarm handling that didn't relate to Yices.

How likely a search term is "Yices", ffs? Something that exotic ("statistically unlikely") probably is meant to be in the results by default.


I had no idea what Yices is. So I Googled it - the first link is SRI's Yices SMT solver. I tried "yices alarm" "yices signals" "yices timeout", and all of them showed only links related to Yices in the first result page (various manual pages, types, etc). So my attempt at reproducing your experience has failed.


The top Google hit for "yices alarm" is currently the exact Hacker News comment you've replied to. I wonder if Google adapted its search results based on that very comment? Maybe their algorithms shrewdly give more weight to fixing search results when the context mentions Google ("I googled for...", "Google didn't work when...", etc.) and the site is high profile (like HN). That would be very crafty.


I am kind of happy and sad at the same time to know that it is not just me.

This is SO annoying.


Same here. It seems the websites that show up on top are becoming more and more spammy and less relevant to my query. I keep seeing all sorts of one-sentence hipster 2.0 sites that want me to believe they are a credible source of information.


... and that product will likely repeat the cycle again, on some schedule or another. Might be the 20+ years of Google, might be the few years that Medium was only modestly annoying, might instantly go to shit.

The problem is the business model breeds for this, and we end up replacing one abusive monopoly with another, until we can break that cycle.

For a time it seemed Free Software might ... free us ... from that, though as even that effort's biggest boosters (Eben Moglen, Bradley Kuhn, RMS) freely admit these days, we've been regressing of late, and at an increasing rate.

What's it going to take?


Searching "symptom tracker", Google tells me: no, no, you mean "symptom checker". No, sorry, I do need a way to track them, not check them.


Fully informed consent should be a bare minimum for this kind of work.

Tech should take a cue from the biomedical research community and adopt an institutional review board that can independently assess the ethical and privacy implications of the data they're collecting.

Apple could be a leader here and drag the rest of the industry forward.


> take a cue from the biomedical research community and adopt an institutional review board

Some sort of "tech IRB" is absolutely needed. Unfortunately, when this idea was brought up after Facebook's infamous "emotional contagion" experiment - that any kind of human experimentation needs some kind of ethical oversight - the common response was "everybody does A/B testing".

A more practical idea is liability. Let the insurance companies handle the problem with a UL-like certification process that enforces bare minimum standards for data privacy.

> the data they're collecting

[slightly off topic] re: the industry's insatiable thirst for More Data... Negativland recently[1] returned to us in our hour of need!

[1] https://www.youtube.com/watch?v=sTWD0j4tec4


The "hole" is a part of the ventricular system that produces cerebro-spinal fluid that circulates within the brain and spinal cord.

https://en.wikipedia.org/wiki/Ventricular_system


Most of it is useless (I also prefer minimalist hardware). However, the new X570 chipset for AMD motherboards requires active cooling, as the TDP has increased from 5-7 W to 11 W in order to support PCIe 4.0.

https://www.anandtech.com/show/14161/the-amd-x570-motherboar...


The root problem is the culture at Boeing and the FAA has shifted from safety first to profit first.

The investigative reporting from The Seattle Times[0] indicates that safety engineers were pressured to avoid delays to rush out a competitor to the A320. Furthermore, their safety analysis was based on flawed assumptions to meet an artificial constraint of not requiring pilot simulator training in order to appease the airlines they were selling to. Finally, the FAA is allowing industry to self-certify critical systems with lax oversight.

It is easy to get lost in the technical details of why a particular catastrophe happens. The common throughline is a broken culture where deviance is normalized and those who speak out are ignored. It's the same story with Chernobyl, Fukushima, the El Faro, the USS Fitzgerald and USS John S. McCain, Air France 447, and now the 737 Max.

[0] - https://www.seattletimes.com/seattle-news/times-watchdog/the...


"The common throughline is a broken culture where deviance is normalized and those who speak out are ignored."

The must-read on the issue says so too.

"The Seven Signs of Ethical Collapse: How to Spot Moral Meltdowns in Companies", Marianne M. Jennings


Thanks for the steer. Wasn’t aware of this. Just read her presentation and loved it.


Fukushima? That doesn't belong on the list. There is some limit to any engineering decision. Complaining about MCAS is totally reasonable, but it would be unreasonable to argue "The 737 Max is not safe because if I hit it with enough Stinger missiles it won't fly anymore." Like, yeah? No kidding?

Fukushima was designed to survive the earthquake, and it did, it just wasn't designed to survive the earthquake and also the tsunami.


Fukushima survived the earthquake and even survived the tsunami. The generator got wiped out, but even that wasn't what ultimately led to the disaster. It was that the battery backup eventually ran out of power (not unexpected) and the connectors for recharging it were old and of a format that isn't used any more. There was no way of recharging the battery backup and so the pumps eventually failed.

It's one of those problems where there are literally a million things that could go wrong and since the emergency system is not used normally, it's easy to overlook a critical problem.

So I agree with you. Fukushima was not a design error -- or at least not a design error that could have been reasonably fixed at the time the reactor was originally designed. It was an error in maintenance. Obviously it's better to have a design where loss of power doesn't cause a meltdown, but I don't think those were available when Fukushima was built. CANDU reactors existed at that time, but I think they were still considered experimental. Pickering came online in 1971, so basically at the same time as Fukushima. I'm not familiar with other passive designs, so possibly someone else can make an observation.

But basically, as far as I can tell, Fukushima was a reasonably normal nuclear power plant for the time it was designed. The 737 Max seems to have suffered from problems because of design decisions that are not considered normal.


> since the emergency system is not used normally, it's easy to overlook a critical problem.

This is a such an important antipattern when robustness is a goal.


Totally agree. Done it myself more times than I care to admit. One small quibble, if I may. Originally "antipattern" meant something that looks like a good design pattern but will actually bite you in the end if you use it as intended. This is not so much an antipattern as it is an unfortunate reality (you have to maintain compatibility with external interfaces for the length of the project). How much bit rot have I seen in my career?


I was thinking of set-it-and-forget-it backup systems as the antipattern, as opposed to e.g. designs that regularly force the "backup" system into active use under controlled circumstances. The battery connector represents sort of a backup of the backup though, so it may not be a good example of what I was thinking of.


But it could have been avoided quite easily by simply siting the backup generators above ground. That was a stupid design error; tsunamis are not unknown in Japan, after all.


I refrain from using “simply” or “just” unless I am the person expected to design or fix the problem. Ahead of time. Saying after a disaster caused by the most powerful earthquake ever recorded in Japan[0] that the solution was “simply” to do some coincidentally simple-sounding thing is not credible.

[0] https://en.m.wikipedia.org/wiki/2011_Tōhoku_earthquake_and_t...


Yeah, this is something that I think doesn't really resonate with people well. The reactor site is 25 meters above sea level. I'm not exactly sure how high the generators were, but they were well above the level that experts thought was safe at the time. The earthquake was a 1 in 1000 year event and so there was no data on record to help them model the resultant tsunami. In the years following the tsunami, the way people modelled waves radically changed based on the new data.

There are a couple of caveats. First, there were markers saying that an historic tsunami had come in much higher than models would have predicted. However, they are very old. It's just a rock stuck in the ground with some writing on it. Stuff like that is all over Japan (there are lots of markers around where I live -- I don't think anybody pays any attention to them at all. Probably we should, but usually they just mark boring stuff ;-) ). It's like seeing a Roman road marker in Europe. Interesting, but not really noteworthy. It's only after the tsunami that people saw the markers and said, "Holy cow. There's a marker here showing that a tsunami came up this far". Even then it's a far cry from seeing that to saying that we need to invalidate all our wave models.

Secondly, I think there is some evidence that in the few years preceding the tsunami, researchers were getting worried that their wave models were not correct. I think it's even the case that nuclear plant companies were aware of this. When I first moved to Japan in 2007, there used to be a section of the Meteorological Agency of Japan that showed, among other things, a map of the farthest inland a tsunami would theoretically go for all parts of Japan. It also listed maximum wave size for every single place along the coast. It noted places where sea walls were not high enough and estimated worst-case damages and numbers of casualties. Around about 2009 it disappeared. I tried to find out where it went and the response I got was that it needed to be updated and that it would return at some point in the future. Of course, it never came back. At the same time, I've heard that literally a few years before the Tohoku earthquake there was serious debate about whether or not the wave models were correct. However, I think it's pretty clear that in 1967, when they started construction at Fukushima, they had absolutely no idea that they were building in a potentially unsafe area.

It really sucks, and I think it's fair to say that as humans we probably have too much hubris when it comes to our science. The fact that you have no reasonable way of knowing that you are making a mistake doesn't mitigate the problems that result from that mistake.


Agreed. I think the background you shared shows one of the big problems in design: it's always easy to see the tsunami markers after the fact and say "the information was there all along". It's a lot harder to pick them out from a thousand information sources that are, a priori, just as compelling. Looking for those markers before design would have required looking at every bit of evidence at least as compelling as those markers, which would likely be cost-prohibitive.


FWIW dreamcompiler said "simply siting the generators above ground".

That doesn't mean it's easy to realise the need to do that action, it means it's not a _complex_ action in itself.


It's actually kind of interesting. After the disaster, the nuclear power plant near me moved its backup generators literally to the top of a neighbouring hill -- about 100 meters above the entire complex. And they made another backup on another hill. Once you know the problem, it's not hard to fix it. I admit to being a bit worried that now the generators are so far away from the power plant that when an earthquake happens (we're 50 years overdue for a regular terrible earthquake) they will be disconnected. The irony will be lost if it happens, I'm sure...


> Fukushima was designed to survive the earthquake, and it did

Untrue.

It was designed to survive both a tsunami and an earthquake. Tsunamis often are caused by earthquakes.

That Fukushima survived the earthquake is a myth. The plants had an emergency shutdown, and there was very little time for a damage assessment, which would have taken weeks or months.

Whether the plant would ever have been restarted after the earthquake is unknown. It could have been a full loss, like several reactors in Japan, which will never be restarted.


The plant lost electrical connection to the grid, of course it had an emergency shutdown. Otherwise they'd have had to have found some other method of dissipating megawatts of electricity.

The fact that other plants have not been restarted is at least as likely to be political as it is technical.


> The plant lost electrical connection to the grid, of course it had an emergency shutdown. Otherwise they'd have had to have found some other method of dissipating megawatts of electricity.

A nuclear power plant always has an immediate shutdown in case of a strong earthquake:

'Japanese nuclear power plants are designed to withstand specified earthquake intensities evident in ground motion (Ss), in Gal units. The plants are fitted with seismic detectors. If these register ground motions of a set level (formerly 90% of S1, but at Fukushima only 135 Gal), systems will be activated to automatically bring the plant to an immediate safe shutdown.'

http://www.world-nuclear.org/information-library/safety-and-...

> The fact that other plants have not been restarted is at least as likely to be political as it is technical.

'Likely to be political' is not a category in nuclear safety.

Take this example from 2008:

http://www.world-nuclear-news.org/C/Tepco_counts_earthquake_...

'Tepco's announcement yesterday included a section dedicated to the effects of the magnitude 6.8 Niigata-Chuetsu-Oki earthquake, which violently shook the Kashiwazaki Kariwa nuclear power plant on 16 July 2007. All seven of the reactors remained safe during the event, which caused huge damage to the region and several deaths. However, checks to establish the units' safety to return to service are proving very lengthy, and could continue into the latter part of 2008.

The ongoing inspections at Kashiwazaki Kariwa are to cost ¥122 billion ($1.13 billion) in FY2007. In addition, ¥25 billion ($233 million) will go on civil engineering repairs while a geological survey of the site is to cost a total of ¥2 billion ($18 million).'

Just the inspections after a safe shutdown of that nuclear power plant cost more than 1 billion USD...


Though one might argue that the risk of tsunami is not independent of the risk of (certain kinds of) earthquake for pacific rim nations. Failure to take that into account might be considered a design decision.


There were enough reports over the years that were buried which alerted about the earthquake and tsunami risk.


> The root problem is the culture at Boeing and the FAA has shifted from safety first to profit first.

So the same problem that pervades society everywhere now? I'm not sure whether that was the case before, but it feels to me that people previously wanted to make lots of money by building great products, and they've since left the 'building great products' part behind.


There are still companies that do that; the ones I'm aware of are mostly from Germany and Japan. Like assembly-line robots, but also Panasonic (and maybe Fujitsu; I haven't tried them for a while, but I used to be a big fan of their 2-in-1 P1510 range) laptops (especially the Japan-only ones). They would not sell anywhere else because they are crazy priced, but they are virtually indestructible and go on forever.


Despite the 737 Max fiasco, airplanes today are far safer than ever before. Since 1970, annual deaths have been cut by >80%, while air traffic has increased by a factor of 10.

Cars have seen similar improvements. So have food hygiene, workplace safety, and most any measurable safety record I can think of.


You are using statistics wrongly here. The FAA's process changed, and Boeing entered panic mode so their process changed too; applying statistics from the past, when processes were different (the FAA did its job and Boeing was not cutting corners), is incorrect.

We need new statistics, and these are the facts for the recent Boeing plane: the MAX crashed twice, the first crash was blamed on the pilots and no serious urgent investigation was performed, and the MCAS issues that were discovered were trivialized. Boeing is still trying to shift the blame onto bad software and not the actual causes.

My point is that you can't use statistics that way, there are rules on how to apply them correctly and many pitfalls when applying statistics in real world scenarios.


No, I'm using statistics exactly as intended. Last year was much safer than any year in the 20th century, and so will this year be.

You have some ideas about how everything used to be better in the past, and you're trying to hold onto them in the face of overwhelming evidence to the contrary.

There isn't some sudden increase in the pressure to earn money that wasn't there in, say, 2008. And while the 737 Max process was obviously flawed, the argument above was that somehow there are fundamental problems across companies and industries, not just a single model. Quote from above:

> So the same problem that pervades society everywhere now? [I]t feels to me that people just left the ‘building great products’ part behind.

While evidence of a breakdown in Boeing's ability to design safe planes would indeed lag, there are many other critical processes where regressions would show rather quickly, such as maintenance, fuel quality, air traffic control, IT security, etc.


Maybe you want to state it more clearly, like in math, so I can have a better idea what you are talking about. I think I misunderstood you anyway, because I was thinking of airplane crash statistics and not the entire world industry.

In statistics you need some basic things to be true: for instance, you need truly random independent events, or sometimes a theorem applies only if the distribution of the events is known to have certain properties.

What I understood you to be implying (I was wrong, sorry), and other comments too, is that if we look at Boeing crashes (not the MAX) for the latest N years (excluding older incidents), then we can conclude things about the future statistics of the MAX.

About the rest of the industry, I agree that most things got better. But in the car industry, for example, there was a lot of competition and a lot of regulation about safety and pollution that forced the companies to be better; if you had only 2 car-making companies and let them self-approve, then we might get into the same situation as with the MAX.


That is exaggerated. It doesn't pervade everywhere.


The difference is, most products aren't safety critical like aeroplanes.

