This article is sensationalism dressed up as economic reasoning. In the opening paragraph, the author fails to distinguish between profits trending toward zero and prices trending toward zero in a free market. This undermines his argument even apart from the very real and substantial caveats he waves away later in the piece.
Second, while the marginal cost of selling one copy of a game is effectively zero in electronic marketplaces, the gross cost of making a game is never zero, not even for indie developers, because if nothing else it costs them their time. What we should then expect to see is that a successful game sells at some non-zero price the market will bear until it earns back its costs, with steep discounts following. Of course, it is entirely possible the game will simply lose money.
We should expect this to be true even in a world where games are sold purely in electronic form and those markets are flooded with free content (which is to say, developers willing to sell their games at a loss). Come to think of it, that's pretty much precisely what we observe in the games industry today. Where's the beef?
Gross cost is basically irrelevant when you are talking about pricing under perfect competition. The point is that producers cannot set their own price, because if they set it any higher, somebody else will undercut them.
I think the correct mainstream economic interpretation is that games are highly differentiated, meaning that no game can completely undercut another game because they are never the same. This means that the perfect competition model and argument do not apply and thus MC != MR in the game industry.
Strictly speaking this is not correct. JSTOR and MIT cannot bring criminal charges; only the US government can. They can bring civil charges on their own behalf in addition to the criminal charges, and whether the two parties agree to settle out of court is relevant to any civil charges brought against the defendant. This is the basis for the distinction between civil and criminal law: civil law covers disputes between two parties and provides a framework for when they cannot reconcile on their own.
On the other hand, criminal actions are not just about the grievances between the defendant and the victim of the crime. In principle, crimes are committed against the people themselves (hence the case naming scheme exemplified by the United States vs. X or The People of Illinois vs. X). Even if the victim isn't particularly interested in pursuing the prosecution, the prosecutor is still within his rights to try the case. Indeed, in many violent crimes such as rape, the victim is not interested in aiding the trial. While this can sometimes derail the prosecution, it need not. Pursuing the case absent the victim's full cooperation cannot ipso facto be considered prosecutorial abuse.
Your argument seems to rest on three assertions:
a) that MIT and JSTOR have primary discretion over whether a criminal case moves forward (they do not - that discretion is the government's alone)
b) that MIT and JSTOR are responsible for bringing criminal charges (they are not - they can only bring civil charges)
c) that the prosecutor is responsible for the severity of the penalty and/or the defendant's emotional response to that penalty
My disagreement with your third assertion was more implicit so let me clarify a bit.
First, it is not the prosecutor's job to question whether a law's penalties are in proportion to the crime it proscribes when deciding whether to bring a case. Discretion over the severity of the punishment is left to the sentencing phase of the trial if the defendant is convicted, and it is highly likely that Aaron's sentence would not have been the maximum had he been found guilty (a fact I am sure his lawyers made him aware of).
Second, under what reasonable standard can a prosecutor be held personally responsible for the emotional state of the defendant? Should it be acceptable for criminal defendants to pressure prosecutors into dropping cases by threatening self-harm or suicide in the hopes that a public outcry will harm the prosecutor's career? Try to ignore for a moment that the defendant in this case has your sympathies. Would you accept that tactic from a serial killer or rapist?
It may be that the law itself is unnecessary or counterproductive. I'm certainly open to the argument that at least publicly funded research ought to be open to the public. Yet it is still the law of the land. From the facts of the case, Aaron committed an obvious crime and behaved as though he knew it was a crime. The potential price of civil disobedience is that you will in fact end up punished for it. In the end, his story (like those of Rosa Parks and others before him) may end up bringing about the change he wanted. But to say that the prosecutor abused her authority or was personally responsible for his death is an emotional response without basis, and it runs counter to the very idea of a criminal justice system.
I didn't make any of those assertions. Other people in this thread might have, but I didn't.
it is not the prosecutor's job to question whether a law's penalties are in proportion to the crime it proscribes when deciding whether to bring a case.
I didn't say it was. I agree that the prosecutor doesn't decide what the possible charges and punishments are; those are taken as given. But the prosecutor certainly does decide which cases to prosecute at all, and how aggressively to prosecute them. As I understand it, this case was prosecuted extremely aggressively.
under what reasonable standard can a prosecutor be held personally responsible for the emotional state of the defendant?
I didn't say he was (I realize others in this thread have, but I didn't); I agree he isn't. But that's irrelevant to whether or not this prosecution was way too aggressive for the actual harm done; IMO it was.
The potential price of civil disobedience is that you will in fact end up punished for it.
This is quite true. But it doesn't make the punishment fair or just.
to say that the prosecutor abused her authority...runs counter to the very idea of a criminal justice system.
Maybe it runs counter to the idea of a perfect criminal justice system, but the one we have is far from perfect, and prosecutors know that. In a perfect system, every instance of a given offense would be prosecuted the same, every defendant would get a fair chance to defend themselves, and we would have a reasonable expectation of a just outcome. In the system we have, because so many things have been criminalized, there are far more offenders of the letter of the law than can possibly be prosecuted, and defendants are at a huge disadvantage vs. the system. So who actually gets prosecuted, and what chance they have at a fair hearing, ends up being decided by the prosecutor's judgment, which is often colored by their personal beliefs or political leanings. Under those circumstances, IMO it is quite legitimate to question a prosecutor's judgment when a case is treated far more aggressively than seems warranted by the actual harm done.
Some things from this article just don't make sense to me.
Like, why is AJAX/client-side MVC pointed out as something that overcomes the HTTP request/response cycle? No, it doesn't. It just means you don't have to refresh the entire screen every time a request is made. Whether the server sends back "a wad of HTML/JavaScript" or some JSON to be parsed by the JavaScript MVC framework du jour, nothing you are doing transcends the HTTP protocol either way. AJAX changed the way users interact with web applications, but it did not change anything fundamental about their architecture.
Second, CoffeeScript does not liberate you from JavaScript. It IS JavaScript. It makes some of JavaScript's difficult areas easier to control (the way the meaning of "this" shifts as the execution context changes, for example), but it doesn't suddenly give you license to write browser code the way you would write Python or C++.
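To illustrate: CoffeeScript's fat arrow (=>) just automates the "this"-binding you would otherwise write by hand in plain JavaScript. A minimal sketch (the "counter" object below is a made-up example, not from the article):

```javascript
// The `this` problem CoffeeScript's fat arrow papers over:
// a bare inner function gets its own `this` (the global object, or
// undefined in strict mode), so we must bind it by hand.
var counter = {
  count: 0,
  makeIncrementer: function () {
    // `this` here is `counter`; bind it so the inner function keeps it.
    return function () { this.count += 1; }.bind(this);
  }
};

var inc = counter.makeIncrementer();
inc();
inc();
// counter.count is now 2
```

In CoffeeScript you would write the inner function with => and skip the explicit .bind(this); the compiled output is equivalent, which is the sense in which it's convenience rather than a new paradigm.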
In any case, let's not confuse convenience with paradigm shifting.
The important difference I see in Backbone's approach is that it gets us away from having the server control navigation and UI events. After the initial page load, I'm exchanging simple data as JSON with the server. My quarrel wasn't with HTTP; it was with having a server-side framework manage the UI navigation.
The best way I've heard CoffeeScript described is as a better syntax for JavaScript. That may not sound like a big deal, but in my experience it has had a huge effect on how I approach client-side code. Now that I can write code I really love on the client, the barrier to doing so is much lower.
I just see this as an incremental development, rather than a revolutionary one. As I mentioned above, the server is still very involved with UI events and navigation. If the user causes a UI event that saves data to the server, even with client-side validation it will often be the server that responds with an error and forces the page to do something with it. The nice thing is that we now get to encapsulate the server's influence on the application into definable actions, rather than a single monolithic page request. That's a rather good thing for us developers, but it is still well within the bounds of the traditional HTTP model.
As for CoffeeScript, anything that tames some of the nasty bits of JavaScript is fine by me. I don't dispute CoffeeScript's usefulness. My point was just that CoffeeScript is syntactic sugar for JavaScript, not a paradigm shift in client-side coding.
No, I think it is in fact a Big Deal (tm). The old model was about involving the server in the behavior of the page. The new model turns the web into a classic client-server architecture, replacing the desktop executable with a JavaScript client.
The way I write web apps these days is to have static JavaScript code that contacts a web service to fetch its settings and load the data to be edited. It then performs all editing client-side, without contacting the server. When it's done, a single save() call to the server persists the data. That the communication happens over HTTP is almost irrelevant.
That's eerily similar to the desktop apps I've implemented. In a sense, it's a paradigm shift back to the way software used to be built, except there is no DLL hell, because the client always has the latest version of your "executable".
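As a sketch of that pattern: everything here is hypothetical (the DocumentEditor name, the /api/doc endpoint, and the transport wrapper stand in for whatever the real app uses), but it shows the shape of a client that edits locally and persists with one call.

```javascript
// Hypothetical client: data is fetched once, edited purely client-side,
// and persisted with a single save() call.
function DocumentEditor(doc, transport) {
  this.doc = doc;             // data loaded once from the server
  this.transport = transport; // whatever XHR wrapper the app uses
  this.dirty = false;
}

// Edits are pure client-side state changes -- no server round-trip.
DocumentEditor.prototype.setTitle = function (title) {
  this.doc.title = title;
  this.dirty = true;
};

// One save() persists everything in a single HTTP request.
DocumentEditor.prototype.save = function () {
  if (!this.dirty) return;
  this.transport.post('/api/doc', JSON.stringify(this.doc));
  this.dirty = false;
};
```

The point is that the HTTP layer is reduced to two touchpoints, the initial load and the save, which is why it feels like a classic client-server desktop app.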
Well yes, now you can perform more manipulation of the page without needing a roundtrip to the server, but saving the state of the application still involves one, unless you are the type of radical early adopter who is going to try to do it all with HTML5 local storage.
The relative balance of the work may have changed, but I don't see it as a revolution except in terms of user experience, where AJAX has definitely drastically changed the way users perceive a web page.
Otherwise, the development model of HTTP - the server sends me something, I send a reply, and it sends me back something else - is still alive and well. That's to be expected, since browsers were built with HTTP in mind and the supporting technologies, Javascript included, all must live within that box. The server is still intimately involved with the behavior of the page, but its involvement is encapsulated into individual actions rather than an entire page load (i.e. the server returning an error can be responded to in the context of the HTML form that caused it, rather than with a page redirect or some such). Again, I don't deny that's a helpful thing.
Absolutely. Indeed, risk is not just about the "customer's" (in this case, the nation of Greece) inability to pay, but also counterparty risk. It's not just Deutsche Bank that's on the hook for defaulted Greek debt, but every counterparty who wrote them credit default swaps (CDS) on that debt. That's what triggered the financial crisis in 2008 - as Bear Stearns' and Lehman Brothers' cash flow from their debt holdings dried up, redemption calls for their CDS metastasized through the financial system.
If anyone is wondering why the core Euro-area (not to mention the US government) is so concerned about Greek (and Irish and Portuguese and Spanish and Italian) debt, it's less about some abstract political commitment to EU unity than it is the simple fact that a true default will destroy the European banking system.
Throwing cheap cash at the problem will work until the day it suddenly doesn't. While the proximate cause of the original financial crisis was cash flow, that is not the ultimate cause of these problems. The real problem is that risk in the financial system is (still!) extremely opaque, and it is not so much that banks are illiquid as that there is almost no circumstance in which they could procure enough cash to meet calls on their outstanding CDS in the event of some "unexpected" event like subprime mortgages or a Greek default.
The US government learned the hard way what happens when one of these overleveraged banks is allowed to go under, as with Lehman Brothers in 2008. On the other hand, national governments do not have nearly enough capital to cover the total liabilities in the financial system, nor can they predict when and where that capital might be needed.
As with many things in life, there is no really good solution to this, and there is still a lot of pain ahead.
There is a good solution: transparency. Just as the law requires food manufacturers to label what goes into the box, sellers of aggregated derivatives should have to label what goes into their CDOs, CDSs, etc. This would give the buyer half a chance at assessing the attached risk. Sellers won't do this until they are forced to, however, because they don't want to reveal what is in the secret sauce. The only reason I can fathom that someone would want to buy a "mystery gift" derivative is greed.
As Warren Buffett says, "I hold my nose and point towards Wall Street."
But the contents of each CDO and CDS were/are completely transparent to the buyer and seller. Maybe the buyers were less sophisticated than the sellers, but that's really too convenient an excuse. The whole CDO construction process was openly called 'Ratings Arbitrage', after all.
I agree that things should be transparent. The right thing to do is to force all CDS to clear against a central counterparty, with publicly known mark-to-market pricing. Similarly, banks should be required to mark to market on an arm's-length basis. But somehow this legislation never gets passed...
From the looks of it, the program merely learned the rules of the game by doing textual analysis of the manual, and maybe got a few strategic hints as well. As for actually learning to play the game _well_, my sense from the article is that the program then used more conventional machine learning techniques to test and adopt winning moves.
I've been working on a new site in Pyramid (the successor to Pylons), and I have to say it's been an impressive experience so far. It takes a somewhat different philosophy from Django and Rails. In those two, you can customize some things, but there are default options, and pieces like the ORM are built into the framework. Pyramid behaves more like a chassis with slots into which the pieces of a modern web app (ORM, security, templating, URL routing) can be fitted.
I did enjoy the article's discussion of Django. It looks like it has gotten less monolithic than in prior versions. As I recall, several years ago Django really was the Rails of Python, with tons of default options but some difficulty in customization. It looks like they have moved away from that somewhat.
That's not true of Rails. Neither the ORM, nor templating, nor URL routing is built in: people use DataMapper or ActiveRecord, ERB or HAML, and routing is handled as Rack endpoints.
Rails 3 is much easier to pull apart so you can put in the pieces you want. I believe this will become even easier with Rails 3.1 introducing engines as first-class citizens.
Glad to hear that. I'm just about to start building a site with Pyramid. I've spent a lot of time evaluating frameworks, and this one just seems to fit the way we like to work.
Not just a housing bubble but a bubble in the exact same industry sector as well!
Of course, a bubble is fundamentally a gross mismatch between future expectations and actual returns. That's why there's no learning from the past, and why it's always "different this time."
More importantly, bubbles don't occur in a vacuum. The 1990s tech bubble wasn't just because the internet was new and everyone thought it was going to be great. The 1990s also featured low interest rates, the repeal of Glass-Steagall, and other loose policies conducive to asset bubbles. Sound familiar?
Such policies force money to chase yield, which means risk. As it does so, the market sends false signals about the true value of its companies, which reinforce themselves until some "Black Swan" event reveals the imbalances. In 2008, that event was the fall of Lehman Brothers.
As I noted below, if the national government continues to tighten its fiscal and monetary policy, this bubble will be over before it really began. If, however, the government loosens again after the end of QE2, it will most likely be full steam ahead.