Depends on whether you are looking for effectiveness or efficiency. YOLO et al. are optimised for general feature extraction; they are ridiculously good if you need to build something quickly. If you are looking for the most efficient approach, e.g. the lowest processing power per image, then use a network optimised for the particular feature you are trying to extract.
Highest accuracy. I was mainly wondering which model ML engineers reach for circa 2020 for classification tasks. Searching for classifiers brings up article after article presenting toy classifiers, but very little with a production focus.
I've seen people get good results with EfficientNet, and it scales to be quite large. In general, I look at whatever the current state of the art is on ImageNet, then pick an implementation that seems easy to use from among the top few.
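The selection heuristic above (rank by ImageNet accuracy, then keep the models that are actually easy to use) can be sketched in a few lines. The accuracy numbers and the "has_easy_impl" flags below are illustrative placeholders, not measured benchmarks:

```python
# Sketch of the heuristic: rank candidate architectures by ImageNet
# top-1 accuracy, then keep those among the top few that also have an
# easy-to-use implementation available.

def pick_model(candidates, top_k=3):
    """Return names of easy-to-use models among the top_k most accurate."""
    ranked = sorted(candidates, key=lambda m: m["top1"], reverse=True)
    return [m["name"] for m in ranked[:top_k] if m["has_easy_impl"]]

# Hypothetical leaderboard entries, for illustration only.
candidates = [
    {"name": "efficientnet_b4", "top1": 0.829, "has_easy_impl": True},
    {"name": "resnet50",        "top1": 0.761, "has_easy_impl": True},
    {"name": "exotic_sota_net", "top1": 0.835, "has_easy_impl": False},
    {"name": "efficientnet_b0", "top1": 0.777, "has_easy_impl": True},
]

print(pick_model(candidates))  # drops the hard-to-use SOTA entry
```

In practice the "leaderboard" would come from something like the Papers with Code ImageNet page, and "easy to use" usually means a maintained pretrained checkpoint exists in a mainstream framework.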
I have mentioned this before: this is a supply problem. If you did not gatekeep medical degrees with irrelevant non-academic barriers (volunteering, portfolios, admission letters), and instead mandated that medical school budgets keep up with new demand, there would be plenty of doctors and diversity of thought. Let the MCAT standardized test be the only signal for admissions, and force acceptance of medical accreditation based solely on technical competence. Medicine has a scale problem and nobody is trying to tackle it. The American Medical Association recently pulled out of its collaboration with Khan Academy, further keeping up the barriers.

At the end of the day, regardless of the complaints about big companies seeking "cheaper workers", software engineering is still a much more meritocratic field than most. Attempts at unionisation, and at accreditation instead of leetcoding (kicking away the ladder, hmm?), have all gone absolutely nowhere. Be vigilant about attempts to gatekeep. For too long, MBA and public policy institutions have spewed the drivel that non-free-access accreditation is the be-all and end-all: the idea that if you do not genuflect before the admissions committees of expensive professional degree schools, you do not have the right to take the accreditation exam, and are a threat to public safety. Imagine where the world would be if the average admissions rate for computer science were 7%.
At that time, software's AT&T-style winner-take-all dynamics and network-effect lock-in were still not that obvious. The PC was not yet the gold standard for desktop hardware, and terms like minicomputer and mainframe were still in common use.
In 1997, what else was there for desktop hardware? OK, I was an Amiga buff waiting for the Blizzard PPC to come out, but I was under no impression that mine was the gold standard for desktop hardware.
It was certainly dominant at the time, but had not been for that long, mostly since the early '90s. The decades before saw, by today's standards, rapid switching of the dominant home, business and server hardware and the corresponding OSes.
So it was not a safe bet in 1996 that Windows and Microsoft would still exist and be a big market player in 2020.
> So it was not a safe bet in 1996 that Windows and Microsoft would still exist and be a big market player in 2020.
There's an argument they're not. Now obviously Windows still dominates the laptop (and desktop) OS market (and I'm not gonna claim Microsoft is doing badly or anything), but it isn't the dominant OS overall, due to the rise of mobile and Android.
At every wave of computing so far (from mainframes to minicomputers, from minicomputers to PCs, and from PCs to mobile), the dominant market leader has been unseated. Viewed in this light, Warren's reticence looks prescient.
For Windows you can, but Microsoft is diversifying its risks by running its software on other OSes and hardware, and Office is a major Windows-independent revenue stream.
So I'd say 2020 -> 2040 Microsoft is a safer bet than 1996 -> 2016 Microsoft.
What was the alternative in 1997? None. Windows 95 was on every office desktop; it was the only desktop OS most companies bought. I can't recall another widely available desktop OS. Apple's OS was a niche, mostly in education and desktop publishing, and in decline.
The point is that dominance was only a few years old, and it was not at all obvious it would last - the computer market had completely changed every few years prior to that, so assuming there was a risk it might again was not unreasonable.
In '96 people were still talking about how Microsoft had failed to understand the importance of the internet, and whether Windows 95 would fix that. The '95 DOJ consent decree also looked set to potentially rein them in severely. Apple still looked like a possible contender.
OS/2 still looked like a possible contender. I remember being at trade shows at the time and seeing how hard OS/2 was pushed, when Microsoft was still a small upstart that had only surpassed Commodore in revenue a few years earlier (and speaking of Commodore, even in '95-'97, several years after their bankruptcy, people were still watching whether Escom and then Gateway would manage to resurrect the Amiga). And there were lots of people convinced IBM would swat Microsoft away like a fly.
In the corporate space, the likes of DEC, Sun and SGI were still pushing into the workstation space at a high pace and making inroads downwards into more ordinary desktops. I saw this first-hand: computer labs filled with cost-reduced SGI Indys, a bunch of DEC workstations at work, and lots of Sun workstations at places I contracted in those years.
Non-Windows, non-DOS machines were still everywhere in those years. It was unusual to find an office without a non-MS OS, because if nothing else there'd be a Mac for QuarkXPress or the like. And the presence of beachheads of non-MS OSes like that meant it was still not obvious whether MS could maintain its position.
More importantly: Giants had stumbled many times before. Most notably IBM, but the years before were littered with computer companies that had either died entirely or were shells of their former selves.
Yet few predicted just how much the web would change things over the next 10 years, and then mobile computing over the 10 years after that.
If you'd said in 1997 that the most valuable companies in the world would include Google, Facebook and Amazon, you'd have been laughed out of the room; and even more so when you then said that the failing toy company Apple would top the list.
Even outside the world of IT, if you'd told people in 1997 that a new car company would emerge and become one of the most valuable car companies in the world, given that it had been 30 years since a car company had last gone public, they'd have thought you equally mad.
In 1997 the most valuable companies in the world included Shell, Exxon, Toyota and Coca-Cola.
Indeed, that's the point. You can't predict what's going to be the big winner of the next 20 years, but you can predict that some brands are going to be fairly safe and will at least not wipe you out without notice.
Sure, you could have gone all-in on Amazon at IPO (ooh, an online book and CD sales company, with MP3s on the horizon), but you could just as easily have gone all-in on Pets.com.
You could have invested in Yahoo; after all, that was the place that ran the web in the '90s: if you weren't on Yahoo, you didn't have a business. You could have piled into the likes of Friends Reunited, MySpace and Napster, all of which looked just as likely to succeed as Facebook, Yahoo, or Apple Music.
Even dying companies like Blockbuster and Kodak took far longer to wipe out shareholder value than some flash-in-the-pan newcomers did.
> Fruits have high polyphenol content, and regular consumption is linked with beneficial properties such as anticancer, anti-leishmanial, and antidiabetic activities. Contradictorily, consumption of soursop fruit is correlated with the development of neurodegenerative disease (Parkinson disease), which is attributed to the presence of annonacin in the seeds.
This is a bog-standard modern NLP problem; I love how all the comments here are about how impossible it is using traditional heuristics, quoting various pop-neuroscience authors and anecdotes. You only need a sufficiently large language model (fine-tuned on short location strings) and to run Named Entity Recognition on the input. Tricky? Yes. But there is only a finite number of prefix names in the English language, and should automated solutions fail, you can always fall back to a natural-language form. A large neural-network-based solution running in the cloud could reach 95+% accuracy. It would not be perfect, but it is good enough for most use cases.
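One reading of the "finite number of prefix names" fallback is a simple gazetteer check before (or after) the neural NER pass: match the first token against a known prefix list and split the string there. The sketch below is a toy illustration under that assumption; the prefix set is a tiny sample, not a real gazetteer:

```python
# Toy gazetteer fallback: if the string starts with a known English
# name prefix, split it off deterministically instead of (or before)
# invoking a neural NER model. The set below is illustrative only.
PREFIXES = {"mr", "mrs", "ms", "dr", "prof", "rev", "sir", "lady"}

def split_prefix(name: str):
    """Return (prefix, rest) if the string starts with a known prefix,
    otherwise (None, name) unchanged."""
    tokens = name.split()
    if tokens:
        first = tokens[0].lower().rstrip(".")
        if first in PREFIXES:
            return first, " ".join(tokens[1:])
    return None, name

print(split_prefix("Dr. Jane Smith"))  # ('dr', 'Jane Smith')
print(split_prefix("Jane Smith"))      # (None, 'Jane Smith')
```

The real heavy lifting for the ambiguous cases would still sit with the fine-tuned NER model; this kind of rule only covers the closed-class prefixes cheaply and deterministically.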