Hacker News | jrirei's comments

We did go for Ruby in order to increase developer productivity (having a very small team of just two developers) and to get good code quality / high test coverage. We were sure we needed to refactor a lot later on, so Ruby seemed like a good choice. But Ruby is NOT good at waiting on database / network latency. But I guess with PHP we would have had exactly the same problems.


Did you compare bare bones PHP with Ruby or PHP with a library such as Zend?

I'm curious because I'm considering recoding a large part of a website and am trying to avoid scaling issues.

Thank you for the answer.


You are right: "no cache" means no cache in the app layer, so no eviction logic and no inconsistencies between an app-layer cache and the database. And yes, the best thing a cloud can offer is lots and lots of memory. That's where it's good. And I/O (disk & network) is where it's weak.


Thanks for jumping in, jrirei!


We do preload all data on the client side, as the client must never wait on server responses (for a better user experience). But we cannot batch on the client side, as we cannot foresee when the user will kill the client. And actually we have more than 40K parallel user sessions, as players come back more than once each day.


The presentation was recorded by InfoQ. I hope they will make it available in the next 1-2 months. There's a lot I said but did not put on the slides.

You are correct in your assumption that each Tile had been a record in a MySQL table (now it is a value in a Redis hash).
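For context, the per-tile Redis layout described above could look roughly like this in Ruby (the stack mentioned upthread). The key and field names are made up for illustration, and a plain in-memory stand-in replaces a live Redis client so the sketch runs standalone; the real redis gem client exposes the same hset/hget calls:

```ruby
require "json"

# In-memory stand-in exposing the same hset/hget interface as the redis
# gem's client, so the sketch runs without a live server. In practice you
# would pass Redis.new here instead.
class HashStore
  def initialize
    @data = Hash.new { |h, k| h[k] = {} }
  end

  def hset(key, field, value)
    @data[key][field] = value
  end

  def hget(key, field)
    @data[key][field]
  end
end

# One Redis hash per player; each field is a tile coordinate, each value
# the serialized tile state. Naming is illustrative, not Wooga's.
def save_tile(store, player_id, x, y, tile)
  store.hset("player:#{player_id}:tiles", "#{x}:#{y}", tile.to_json)
end

def load_tile(store, player_id, x, y)
  raw = store.hget("player:#{player_id}:tiles", "#{x}:#{y}")
  raw && JSON.parse(raw)
end

store = HashStore.new
save_tile(store, 42, 3, 7, { "crop" => "wheat" })
load_tile(store, 42, 3, 7)  # => {"crop"=>"wheat"}
```

The point of the hash layout is that each tile stays individually addressable (one HSET per game event) while all of a player's tiles share one key.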

Actually we considered using a "blob" approach. But in the client we cannot batch requests as we cannot foresee when the user will simply kill the Flash client to go to some other site. So when a user request (i.e. a game event) arrives in the server there is no way to know if another request will follow. So we have to persist that change right away.

This is using a stateless server. In a later game called Magic Land we are going for a stateful Erlang server. There we keep the whole user state in RAM while the user plays and persist state changes every minute or so. Here we can do without any database and just use S3 for persistence. Works just great. At the upcoming Erlang User Conference we will give an update on that project, and slides will be available on Slideshare next week, too. In the meantime please have a look at this older slide set, which explains the concept in detail: http://www.slideshare.net/wooga/erlang-the-big-switch-in-soc...
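The "state in RAM, persist every minute" approach might be sketched like this (in Ruby for consistency with the thread, though Magic Land's server is Erlang; the store hash stands in for an S3 put_object call, and all names are hypothetical):

```ruby
require "json"

# Sketch: mutate user state in RAM on every game event, but only write
# it out periodically. The store is a stand-in for S3.
class Session
  FLUSH_INTERVAL = 60 # seconds

  def initialize(user_id, store)
    @user_id = user_id
    @store = store
    @state = {}
    @dirty = false
    @last_flush = Time.now
  end

  def apply(event)
    @state[event[:key]] = event[:value]  # RAM-only mutation
    @dirty = true
    flush if Time.now - @last_flush >= FLUSH_INTERVAL
  end

  # Also called when the session ends, so a client kill loses at most
  # the last interval's worth of changes.
  def flush
    return unless @dirty
    @store["user/#{@user_id}.json"] = @state.to_json  # S3 put in practice
    @dirty = false
    @last_flush = Time.now
  end
end

store = {}
session = Session.new(7, store)
session.apply(key: "gold", value: 5)
session.flush
```

This is the trade-off the stateless Ruby server couldn't make: with per-user state pinned in one process, batching writes becomes safe because the server, not the client, decides when to persist.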


Thanks for clearing this up.

> Actually we considered using a "blob" approach. But in the client we cannot batch requests as we cannot foresee when the user will simply kill the Flash client to go to some other site. So when a user request (i.e. a game event) arrives in the server there is no way to know if another request will follow. So we have to persist that change right away. <

That's an understandable dilemma. I love thinking about stuff like this and seeing how other people are dealing with these challenges, so please forgive my Sunday morning quarterbacking ;-) Wouldn't the issue have been solvable by creating a relatively simple persistent software layer between the app code manipulating the tiles and the backend storage? I understand that you moved to this model with your Erlang game, but I'd like to know if a persistence/caching layer was considered for the farming game?

More generally, somewhere in here is an idea for a great Node.js server project that takes coarse grained datasets from a contentious database and serves as an interface for finer grained portions of that data.


For Monster World we did not consider this as the game is running fine (or good enough) as it is now. During the last weeks we were more focused on reducing RAM consumption of our databases as that is currently the main driver of cost and operation overhead.

Regarding your idea: Wouldn't then the Node.js server have to keep the whole user state in memory?


> During the last weeks we were more focused on reducing RAM consumption of our databases as that is currently the main driver of cost and operation overhead. <

I can imagine that.

> Regarding your idea: Wouldn't then the Node.js server have to keep the whole user state in memory? <

Yes, but I think it would have several advantages:

(1) The Node.js server code could decide which working sets it keeps in memory based on very simple rules. The details of this would be abstracted away from the application code itself, because the app just issues read and write requests on a user's dataset. So in essence, by splitting up the problem in two, it becomes relatively easy to handle (and optimize) on each end.

(2) You just have to keep the active datasets wired in RAM and it wouldn't be necessary for the Node server to know whether a user has disconnected recently or not. All it knows is when the data was last accessed and it can then vacate RAM slots that have become stale. Compare this to Redis, which I believe just keeps everything in memory no matter what. So overall RAM usage would probably be considerably less than what you're doing now.

(3) The idea beats "dumb blob caching" such as memcache, because it makes small operations economical. It seems to me that Node is well suited for this kind of task, since it makes it very easy to build small server scripts that handle a huge number of small transactions. This would probably mean you need fewer machines for the same number of users.

(4) I believe it's relatively easy to implement replication and scaling.
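A minimal sketch of the eviction idea in points (1) and (2), assuming a last-access (LRU) policy — written in Ruby rather than Node.js just to keep the thread's examples in one language, with the backend read passed in as a block:

```ruby
# Working-set cache: keep only recently accessed user datasets in RAM
# and evict the stalest entry when full. Relies on Ruby Hash preserving
# insertion order, which doubles as a recency list.
class WorkingSetCache
  def initialize(max_entries:, &loader)
    @max = max_entries
    @loader = loader   # e.g. a read from the backing database
    @cache = {}
  end

  def fetch(user_id)
    if @cache.key?(user_id)
      @cache[user_id] = @cache.delete(user_id)  # bump to most-recent
    else
      @cache[user_id] = @loader.call(user_id)
      @cache.delete(@cache.first[0]) if @cache.size > @max  # evict stalest
    end
    @cache[user_id]
  end
end

cache = WorkingSetCache.new(max_entries: 2) { |id| { "user" => id } }
cache.fetch(1)
cache.fetch(2)
cache.fetch(1)
cache.fetch(3)  # evicts user 2, the least recently accessed
```

As in point (2), the cache never needs to know whether a user disconnected; last access alone decides what stays wired in RAM.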

Anyway, just an idea. I have no clue whether this works in practice ;-)


Yes, this might work. But I would be careful about replication and scaling - this could make things somewhat complicated. ;-)


Not necessarily:

Replication is basically just a provision for instant failover. Let's say that by policy the background data store (e.g. MySQL) always has a copy that is at most 10 minutes old. In practice it could probably be much more recent. So in general user data is safe but you want something very simple to prevent data loss and service disruption in the most common failure scenarios.

I believe the best paradigm is a replication buddy system between two given Node instances. Should one instance fail, the app can always issue the same request to its "replication buddy" and expect to get the same data. Implementing a replication buddy relationship between two instances should be relatively easy using a persistent connection between them, since Node is non-blocking but still executes code sequentially (so there will be no real consistency problem). Nodes could just notify each other when data changes in the background, and they'd both always have the same data state. Granted, there would have to be some code to take care of what happens in different failure modes (probably the most complex aspect of the whole thing), but overall it's still very doable.
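The buddy read/write path might be sketched like so (Ruby again for consistency; MemNode is a toy in-memory stand-in for a real Node instance, and all names are hypothetical):

```ruby
# Toy stand-in for one server instance; can be marked "down" to
# simulate a failure.
class MemNode
  attr_writer :down

  def initialize
    @data = {}
    @down = false
  end

  def get(user_id)
    raise "node down" if @down
    @data[user_id]
  end

  def put(user_id, data)
    @data[user_id] = data
  end
end

# Replication buddy pair: reads fall back to the buddy on failure,
# writes go to both (in practice, an async notify over a persistent
# connection rather than a synchronous second put).
class NodePair
  def initialize(primary, buddy)
    @primary = primary
    @buddy = buddy
  end

  def read(user_id)
    @primary.get(user_id)
  rescue StandardError
    @buddy.get(user_id)  # buddy holds the same replicated state
  end

  def write(user_id, data)
    @primary.put(user_id, data)
    @buddy.put(user_id, data)
  end
end

a = MemNode.new
b = MemNode.new
pair = NodePair.new(a, b)
pair.write(7, "state")
a.down = true
pair.read(7)  # => "state", served by the buddy
```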

Scaling would be even easier: just put user IDs into different buckets, each bucket is a replicated instance. If this is even necessary.
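A toy version of that bucketing, assuming numeric user IDs and an illustrative bucket count (each bucket would map to one replicated instance pair):

```ruby
NUM_BUCKETS = 8  # illustrative; would match the number of instance pairs

def bucket_for(user_id)
  user_id % NUM_BUCKETS  # or a hash of the ID for non-numeric keys
end

bucket_for(42)  # => 2
bucket_for(50)  # => 2 (same bucket, so same pair of instances)
```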

And the beauty of it is that you have to implement this just once, no matter how many different server-side apps and languages you use. It would be a common piece of infrastructure.

