Hacker News | codingthebeach's comments

It would be nice if you wouldn't derail the conversation with another tired old "but it's not as good as gcc when it comes to X". This is a CTP. You split hairs over minor points of standards compliance that many devs won't even use on gcc yet because the features are so new, and completely avoid mentioning that the productivity of Visual Studio (for both .NET as well as native C++, including cross-platform C++) arguably dwarfs any other IDE on the market. Have you used, I mean really used, a recent version of Visual Studio for C++ work? Or are you making this comment because you haven't and you find it threatening? Tell you what. You build your C++ app with Xcode 6 or command-line gcc for all it matters. I'll build mine with Visual Studio 2013 or later. We'll see who wins. By the way, I love gcc. Love, love, love. That doesn't mean I have to take pot shots at Visual Studio, which is tooled to the Nth, completely stable, and not at all the old restrictive MS dev environment of yore. The times have changed.


It is a fair question given that Visual Studio 2013 doesn't even support basic C++11 features like constexpr. When doing multi-platform C++11 work, the code usually has to be dumbed down a lot if it also needs to compile in Visual Studio, e.g.:

- can't use many of the new initializer mechanisms in the class declaration (initializer lists, etc...)

- no constexpr

- std::chrono::high_resolution_clock isn't actually high resolution but only has millisecond accuracy

The complete list of C++11 features still missing as of VS2013 is here: http://msdn.microsoft.com/en-us/library/hh567368.aspx

Microsoft should first complete their C++11 support before moving on to C++14; otherwise it will probably never get fixed.

Visual Studio is a fine IDE, but the C++ compiler lags behind quite a bit.


Microsoft obviously should NOT first "complete" their C++11 support since there may very well be more important features in C++14 than some arcane features no-one would use in C++11. Same applies to all the other compiler vendors and all versions of the standard.


Arcane features that no-one uses should not be in the standard but in some sort of optional extension, and a feature that is not supported by all major compilers might as well not exist, since it cannot be used in a portable code base.


The init list bug has been fixed for 2015 RTM. Also, I fixed the clocks back in 2015 CTP1 (alpha 1). Sorry about that bug, we just didn't get a chance to fix it in 2013 - in that release we were insanely busy converting the whole STL over to real variadic templates.


Nice! Thanks a lot for your hard work, it's really appreciated despite my ranting ;)


Parity on standard compliance matters because many of us want to compile our code on different platforms, and that is all we download MSVC for. FWIW, it is also the only thing I look for in the gcc and Xcode/clang change logs.


This is just outright disingenuous. Both books have ample violence and the Quran specifically commands followers to chop off the body parts of the infidels. I can quote you the verses if you'd like. The Bible has its share of violence and gore too. What game are you playing at?



Why isn't this on the front page itself?


Because it's from October 2008.


I don't think they would've chosen C# at all were it not for ASP.NET MVC. It was the MVC, more than anything else, that made lean-and-mean WISA stacks possible. C# with ASP.NET MVC under recent versions of Visual Studio is tooled to the Nth degree, performant, and yet manages to stay clean. Not so under ASP.NET WebForms, and even less so under classic ASP.


They used C# because Jeff Atwood was a C# developer. 8 or 9 years ago it was a lot less common to be effortlessly multilingual; changing stacks was a lot of effort. Mainly because SO didn't exist! So you had to hunt through reams of text to find answers to simple questions that you now google, and the exact question has been asked and answered on SO. I can't emphasise enough how painful switching languages used to be.

I suspect if ASP.NET MVC hadn't been out, it would have been written in WebForms.

I seem to remember ASP.NET MVC had only been out a couple of months, or wasn't even fully released, when they started writing it. I was learning Django at the time because MVC was so obviously better than WebForms. But because C# was so much faster than either Ruby or Python, when ASP.NET MVC turned up, and it was very, very good for a v1, I stuck with C#, as did many other .NET programmers, I suspect.

At the time SO was written, Rails and Django were still new and slowwwwww, PHP was king and still mainly written page-by-page without frameworks, Java was who knows where, and the MVC revolution was just starting.

I'm not sure what you think he would have written it in. I have a feeling if you hunt through Coding Horror you'll probably find a post from the SO start period saying "use what you know". In Jeff Atwood's case that's C#. I know that Joel's company had written their own language, which I think was still in use at the time, so they wouldn't use the Fog Creek stack. Which was based on Microsoft stuff anyway.


I think you're really overselling the difficulty of switching languages at that time; it wasn't even that big of a deal pre-Google, much less pre-SO.

That said, SO was indeed a complete game changer in terms of the rapidity with which one could get solid information.


There were far fewer tutorials, they tended to be for Linux/Mac, it was rare to find Windows installers, and some Python libraries didn't work at all on Windows without some seriously complicated compiler knowledge.

It might have been easy to go PHP > Ruby, but C# > Ruby meant massive amounts of pain or completely changing your environment. While my company at the time had VMs, they were all Windows, and it wasn't a case of simply spinning up an EC2 instance to play. The pain of leaving an MS stack was massive.

I'm using Ruby/Python a bit interchangeably as I was playing with both, but even things like gem took a load of effort compared to now. Like a couple of days of fiddling doing nothing interesting, not programming but constantly hitting errors, or tutorials not working because you'd got a step earlier in the chain slightly wrong. It was a huge barrier to entry.

The worst tutorials claimed to be for both Windows and Linux, but would link to an out-of-date Windows installer while all the command-line commands were for Linux. I once made the mistake of thinking I'd write little maintenance scripts in Python instead of VB; it was great until I tried to connect to an MSSQL environment and ended up in a world of pain.

Last time I spun up node.js on a new Windows computer it took me like 10 minutes. Everything has changed.


I can't speak to anything regarding Windows dev here as I haven't done any of that since the mid-90s, and even then it was just a short project.

That said, I see what you're getting at now and it isn't what I thought you were talking about previously. It looks like what you're talking about is it taking a few days (or so) to get a working environment up and running vs. nearly instantaneous now? If so, fair enough I totally agree.

I thought you were talking more about the nuts-and-bolts parts, like it taking a few weeks to come up to speed on Python (for instance) instead of several months. I've always felt that claims of it being difficult to change languages or make other such switches (e.g. changing from Oracle to MySQL) were radically overblown, and these days with sites like SO it's pretty much trivial.


The main issue with Ruby is now historical. When I tried to pick it up a few years back (around the time SO was probably being written), it could be somewhat hostile to Windows users: gems and what-have-you assumed the existence of the GNU compiler chain and assumed they were running on a Unix.

These days the landscape is a lot better, especially when it comes to Ruby.


I think languages have converged a lot, and when one language has a killer feature, others try to copy it. CPAN was the big thing in Perl years ago, but Ruby has gems (maybe it always did) and Python has pip. Java loops used to need an iterator when I used it years ago; now they don't, and it's even getting functional features these days.


Atwood was (is) a C# developer, and before that he was a VB.NET developer. These days he also does some Ruby and a bunch of other stuff. His initial MS/.NET focus and more generic usability rants were half the reason I started reading Coding Horror. Which is 100% of the reason why I joined the StackOverflow beta. And blogged about it. And rewrote my site from the ground up in one of the ASP.NET MVC release candidates.

You're correct that ASP.NET MVC was new at the time, but with guys like Scott Hanselman comparing it to an acoustic guitar recording versus WebForms' full-studio production, and with Phil Haack on the squad, and with the corporate blessing from Microsoft and promises of future integrations, this was not some fly-by-night project that required a lot of faith to invest in. It was a clear case where Microsoft got something right, thanks to a small team of talented devs.

Consider: for the first time in history, you had the ability to write clean C# code in a web context with full control of markup and proper separation of concerns, sans cruft or baggage imposed by attempting to stretch a "desktop" metaphor over a request/response medium. Don't underestimate the psychological effect of this on programmers who'd been spending their days in the trenches building line-of-business CRUD apps with ASP.NET WebForms and unbridled VIEWSTATE. So to me, ASP.NET MVC was the clear and obvious choice for StackOverflow. If you accept that, what better language than the C# he already knew?


It was difficult for me to get past the self-referential (and reverential) tone of the writing, which felt more like an advertisement for the author's cleverness than a real discussion of the pros and cons of SSH, but it could be I woke up on the wrong side of the bed this morning.


I've always thought of a lot of DFW's writing as a kind of programming. There's the main narrative thread of whatever he's talking about at a given time-slice, then there are the "worker threads" he spins off from that in the form of random lateral jumps to stream-of-consciousness, the 24-page footnote containing a single run-on sentence, etc. Certainly in IJ, and to a lesser degree in his other works, there are multiple "paths" the thread of execution can take to reach the end of the book. Put another way: are DFW's footnotes intended to be read synchronously or asynchronously? If you read them synchronously, it tends to interrupt the flow of the main narrative (context switches are expensive). If you read them asynchronously, you miss a lot of context and, possibly, the author's intended ordering/synchronization of the ideas. And of course all of this is intentional, and gives his writing a "meta" dimension that standard English doesn't usually have (usually for good reasons).


Grew up in Boston and spent more nights in the stacks at the BU bookstore than I can remember (third floor). I never met him and wouldn't have recognized him if I had but I stumbled across IJ years later.

I think people tend to focus too much on the "gen X" factor when they put on their DFW hats, as if he were a precocious but emotionally fragile teenager who missed some key lessons in life. I think there was more vigor to DFW's vision of the world than that. He called it an "extrapolation" but it's an extrapolation that carries a lot of insight about the thing it's extrapolating from. Also he had too much humor, wit, and insight to be "gen-X" in the sense it's used in this interview. DFW was a once-in-a-generation writer. Maybe once in a lifetime. I would've loved to have seen what he could've produced by the time he turned 50.


Yay. I would really like to see Digg take a swing at this and said as much a couple weeks ago.

http://www.codingthewheel.com/internet/could-digg-replace-go...

Digg is the dark horse in this race. It will be interesting to see how well they can execute.


If that's true then G+ is more of a danger to Google than Facebook ever was. Using the index as a way to strong-arm publishers into using a half-baked Facebook clone may even work, short-term. Long-term this is one way Google can lose Search.


It is true, Eric Schmidt said: “Within search results, information tied to verified online profiles will be ranked higher than content without such verification, which will result in most users naturally clicking on the top (verified) results. The true cost of remaining anonymous, then, might be irrelevance.”


I assumed you were making that quote up, but holy cow, he actually said it.

My historical love of Google has been directly proportional to my unawareness of Schmidt's privacy philosophy. The man's gone completely insane, and makes a frankly woeful spokesperson.


Eric Schmidt tanked Sun, Novell, and Google. His best CEO ability is ejecting before the crash, so his effect on Google is not yet evident.


I post links to new blog posts on G+ with a nice lede, then gently push e-mail subscription and RSS on the site. That's all G+ will get from me until it's useful for more than SEO.


Please ignore everything Schmidt says. He's creepy beyond belief.

About G+ I hope more people give it the finger so that Vic Gundotra finally understands that you can only alienate your users so much before they go somewhere else.


Right now, not using G+ isn't hurting publishers. Those that do use it get a little bit of benefit, but not enough that they have to use it. Google is playing the long, slow game with this. By the time everyone has to use G+ to stay relevant in the SERPs, everyone already will be.


Linking a profile to your web page is not the same as using the plus.google.com website.

Verifying identity is simply submitting a form to Google.


