nonsequ's comments | Hacker News

Well he’s calling for more education and awareness. It’s like Big Food. The entire CPG industry has been stuffing people with sugar, salt, and fat for decades, exploiting human nature. We are finally seeing society resist this as we have become educated about the dangers of sugar and salt consumption. Meantime, putting ethics aside, all the major food and beverage companies made hay for a long time until recently. What is ethical and what is money-making can be at odds for a long time before they converge (if ever).


I run an AHK script with a little over 2000 abbreviations (e.g. typing 'abbn' expands to 'abbreviation'). It helps me type 100+ WPM without too much strain.


That's a cool idea. Do you have your file posted anywhere? If not, I may cook something up on my own; I'm curious to try it out.


AHK has been working great for me over the years. My addresses, snippets of emails, expansions are all stored in an ahk file. I use various email addresses for different sites; so a@a would become aj(at)ajonit(dot)com or a@l would become admin(at)learnqtp(dot)com. The possibilities are endless. A few years back I created a video for my blog readers and published an AHK template. You may download it here http://www.learnqtp.com/get-productive-with-automation-autoh...
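For anyone who hasn't written one of these, here is a minimal sketch of the kind of hotstring file described above; the triggers and addresses are just the illustrative examples from this thread, not a real config:

    ; Word abbreviations: typing the trigger followed by an ending
    ; character (space, period, Enter, ...) replaces it with the expansion.
    ::abbn::abbreviation

    ; Per-site email shortcuts, as described above.
    ::a@a::aj(at)ajonit(dot)com
    ::a@l::admin(at)learnqtp(dot)com

Save the lines to a .ahk file and run it with AutoHotkey; the expansions then fire in any application.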


I think PG once wrote an essay explaining that most press is agenda-driven and there's somebody with money behind the story.

I wonder if that's the case here. F1 just passed out of Bernie Ecclestone's hands and has a new leadership team as well as shareholder base. They are aiming to revamp the sport and make it more exciting by addressing a lot of the issues brought up in other comments, like the rules stacking the deck in favor of rich teams and the fact that there is no streaming/digital option available to global fans.


Yes, sell-side (i.e. brokers like Goldman, Morgan Stanley, etc.) analysts do share their models with clients. They do this both in 'hardcopy,' pasted into published research reports, as well as 'softcopy,' sending or publishing the Excel file. Not exactly mainstream distribution, but 'public' in the eyes of the SEC and something you can find if you look for it.


What kind of demand do you think there is for a Linux-only mainframe?

My limited understanding is that most mainframe customers are locked-in, e.g. they have legacy COBOL code running their ledger system and the expense to switch off of it is simply prohibitive. That plus the fact that the system is reliable, low-maintenance, etc. preserves the status quo, despite the fact that if you were to write the same applications today, you'd choose a newer platform because it would be more cost-effective.

As such, IBM has historically offered z/Linux and co-processors only to hold onto data and processing that was being pulled off of the mainframe because it was too expensive/too onerous to do on the mainframe using z/OS and the like. So customers, unable to completely shut down their mainframe, could kind of make the best of a bad situation and at least get some cheaper Linux cycles out of their expensive iron.

If the above is true (and correct me if it's not) what is the appeal of a Linux-only mainframe? Or is it only interesting if it is radically cheaper than a z13?


> What kind of demand do you think there is for a Linux-only mainframe?

I think the answer really depends on IBM. Technically speaking, the hardware is rock-solid. There are no single points of failure in mainframe systems. But there are still some obscurities with the platform. There are a lot of packages you'd expect to be available in your s390x distribution of RedHat or SUSE that are missing. And that's probably because the average open-source developer/maintainer doesn't have the means to develop/test on a mainframe.

IBM needs to realize they have an open operating system on a closed platform and the two don't mix. They are taking steps in the right direction, but time will tell if they get there. And they have to prove they can compete with the x86 guys on price.

> My limited understanding is that most mainframe customers are locked-in, e.g. they have legacy COBOL code running their ledger system and the expense to switch off of it is simply prohibitive. That plus the fact that the system is reliable, low-maintenance, etc. preserves the status quo, despite the fact that if you were to write the same applications today, you'd choose a newer platform because it would be more cost-effective.

You are mostly on track there. Mainframes haven't died because of many of the reasons you listed above. But I'd have to disagree with the last statement about choosing a "newer" platform. Mainframe hardware is modern hardware, in the sense that it is updated every one to two years. Mainframe operating systems are modern operating systems; they also get updated every one to two years. If you look at every industry that has been around for more than 30 years, almost all of their mission-critical workloads are done on System z. Cheaper isn't cheaper when you have downtime.

But for some workloads, you are right: companies are only on mainframes because migrating to a different platform would be too costly. Even over 10, 15, 20 years.


This is interesting and I'd like to hear more opinions on it. My impression is that distributed computing has been eating Power/Sparc/Z processors' lunch for a long time now because software has made up for the deficiencies of coordinating 30 boxes. Do you and do any others believe that we are at an inflection point where the pendulum swings back in the direction of 'high-performance' processors like Power8, or will improvements in 'scale-out' ease-of-use and economies of scale continue to win the day?


The dominant use case for the last decade or so has been web servers hitting caches to do low-CPU low-causality CRUD operations. That looks unlikely to change in the next decade, so keep your Intel stock.

That said, for a lot of interesting use cases, like that king-hell postgres database sitting in the middle of the swarm, or video processing, or streams processing, or indeed any situation in which thousands-to-millions of simultaneous actors need to work on the same shared state, this sort of system starts looking real interesting.

As a thought experiment, think of this system like a GPU, except every single processor is a fully capable 2 GHz i5 running Unix, and instead of having to deal with the CUDA or OpenCL API, you can just write erlang (or haskell; .. or whatever) code and it will run. And instead of having 2-8G of RAM, you have 48G. And instead of having arcane debug tools, you have recon and gdb and ddd.

I don't think there is a pendulum, I think there's a spectrum and has always been one; pragmatism should always rule, and your use case is not my use case. There isn't going to be an objective winner ever, no matter how close Intel may get to covering much of the sweet spot.


If you have a problem that behaves poorly in the face of the Network Fallacies, then you want to scale vertically.

RDBMSes are a classic example. Some kinds of compute-heavy problems too -- simulations with lots of coupled components, video compression etc.


Are the rovers too far away to drive there and check up on it? It's amazing to me that it has taken us so long to arrive at such a tentative result on the existence of water on Mars. In a weird way, I find it heartening that there are such unexplained frontiers facing humankind, places where we still grope around blindly.


The surface of Mars is approximately equal to the land surface of Earth. It’s big.

http://www.msl-chemcam.com/index.php?menu=inc&page_consult=t... - Mars 101#.UvuUrHbfMdc

We only have three (?) moving robots there, but IIRC one is not moving and one is not working. And there are some (maybe 10) fully static platforms, but I don't know if any of them is working now.


There are four rovers that have successfully moved around.

Sojourner stopped working in 1997, Spirit stopped working in 2010, and both Opportunity and Curiosity are still moving around.


In its first year on Mars, Curiosity only travelled about 1.6 kilometers. So I imagine it'd have to be pretty damn close for them to risk driving her over there.

EDIT: source: http://www.cnn.com/2013/08/06/tech/innovation/mars-curiosity...


You're obviously not a scientist. If you were, you'd know we're groping around blindly on all sorts of frontiers :).


[insert pun drawing a parallel between science and a teenager on a date]

Hay-o!


> It's amazing to me that it has taken us so long to arrive at such a tentative result on the existence of water on Mars.

The article is completely misleading. The presence of water on Mars is not remotely controversial -- it has been detected in a dozen different ways.

Link: http://en.wikipedia.org/wiki/Water_on_Mars

Quote: "Water on Mars exists today almost exclusively as ice, with a small amount present in the atmosphere as vapour.[1] The only place where water ice is visible at the surface is at the north polar ice cap.[2] However, abundant water ice is also present beneath the permanent carbon dioxide ice cap at the Martian south pole and in the shallow subsurface at more temperate latitudes.[3][4][5][6] More than five million cubic kilometers of ice have been identified at or near the surface of modern Mars, enough to cover the whole planet to a depth of 35 meters.[7] Even more ice is likely to be locked away in the deep subsurface.[8]"


I'm interested in learning more about this. Does anybody have any thoughts? I think that, by intentionally posting unlikeable things, he guarantees himself 100% spam likes. On something that's actually likeable, these spam likes might be contained to a reasonably small proportion of likes. But what is the size of this shadow population of click-farm users on Facebook? For countries with large click farms to be the largest contingent of likes on big pages like David Beckham's suggests they are not small enough to be manageable... I also don't know Facebook's revenue distribution well enough to gauge the business implications. Is it tilted towards small businesses that have serious trouble weeding out spam? Or is it concentrated with the large corporations that can garner large enough real audiences to ignore the spam?


I've worked on all sides of this: brand, agency, buying likes, selling likes, etc.

The reality is this: everything in this video is 100% true and it's been this way almost since the beginning. But most marketers are just checking a box. No one really cares about the budget spent on social ads. It's a line item next to display and TV.

I should say, this isn't just true with Facebook. The same thing happens on Twitter and YouTube. Google AdSense is the only platform that even attempts to sniff out fraud. No one else even tries, because no one cares.


I love the BS that Twitter tells you about the ROI of their ads. "70% of people who follow you will buy from you".. DUH... That's because the people who follow you are for the most part your EXISTING customers. The other 30% of course are bots.


The Google engineers are also trying to release their methods. This isn't about detecting 'fraudulent' activity, but rather about classifying adversarial advertisements: http://research.google.com/pubs/pub37195.html (pdf is linked there). You can assume that they are doing the same type of thing to classify fraud.


I had the same thought. Unfortunately, the article doesn't make it crystal clear how much information the neural net uses to bet against you. It would be patently unfair if it actually knew your hand. According to the article, this is not the case.

I suspect it does perfectly count cards that it has seen in its hand or community cards. This seems okay as well-trained human players can do the same. I suspect the machine would get even more play if this were obvious, maybe if it played at a table with a real dealer and could read community cards on the table. The technology to do this certainly exists.


Counting cards in a poker game is really trivial, even for novice players, since you shuffle the deck after each hand, and in Texas Hold'em, once a card is exposed it is not taken out of play until the hand is over (unlike, say, stud games). All you have to do to count cards is look at your hand and look at the board.


What does IBM have to contribute to self-driving cars? Google's work has gotten by far the most attention.


I couldn't find anything about IBM working specifically on self-driving cars, but given their work on AI (people haven't forgotten about Watson already, have they?), it seems only natural that they would have a lot to bring to the table.


Self-driving cars involve vision, classification, most likely planning and logical reasoning, path finding, classical machine learning stuff.

Watson is natural language processing. Models of language, search, some amount of logical inference I presume. Not very similar to self-driving cars.

