Cloud Computing’s Second Life

Mark Mzyk | February 27, 2008

While still musing on the possibilities of cloud computing, I started to wonder about online games, such as Second Life and World of Warcraft. A large barrier to entry into the online game industry is the sheer cost of the servers. Cloud computing seems to alleviate that burden because the hardware becomes cheap and easily scalable.

However, cloud computing presents a new set of problems if it is to be used with online games. The question shifts from having the hardware to managing it.

Currently, when playing an online game, players are usually assigned to a server; their character exists on that server and on none of the other servers that players might be playing on. Each server represents its own encapsulated world. With cloud computing, the physical servers hosting the game world would be shifting constantly, making it impossible to assign players to a single server.

To overcome this, I imagine a set of permanent servers that do nothing but store player information, while the cloud computing servers run the game world, pulling and storing player information from the permanent servers as needed. Perhaps to speed up the process, since it would likely involve a lot of similar reads and writes, the permanent servers could be set up as a MapReduce cluster that parallelizes any work that needs to be done and provides redundancy.
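
To make the split concrete, here is a minimal Python sketch of the idea: a permanent PlayerStore holds the durable character records, while a short-lived cloud worker simulates part of the world and checks player state in and out as needed. Every class and method name here is a made-up stand-in, not a real game or cloud API.

    class PlayerStore:
        """Permanent server: the durable home of player records."""

        def __init__(self):
            self._records = {}  # player_id -> character data

        def load(self, player_id):
            return self._records.get(player_id, {"level": 1, "inventory": []})

        def save(self, player_id, data):
            self._records[player_id] = data


    class CloudGameWorker:
        """Ephemeral cloud server: runs part of the world, owns no durable state."""

        def __init__(self, store):
            self.store = store
            self.active = {}  # players currently simulated on this worker

        def player_joins(self, player_id):
            # Pull the character from the permanent store when the player arrives.
            self.active[player_id] = self.store.load(player_id)

        def player_leaves(self, player_id):
            # Write the character back before this worker can be recycled.
            self.store.save(player_id, self.active.pop(player_id))


    store = PlayerStore()
    worker = CloudGameWorker(store)
    worker.player_joins("alice")
    worker.player_leaves("alice")  # state survives even if the worker disappears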

Of course, this still leaves open the issue of running the cloud computing servers. Will the cloud computing servers each run an encapsulated version of the game world? If so, how do you ensure that two friends who want to play together end up on the same server?

Or do the cloud computing servers run one massive version of the game world? If that is the case, how is communication between the servers handled? What if two players want to talk to each other while on different servers? How do the servers cope when all the characters in the game world decide to congregate in the same place, putting a huge load on one server while the others sit nearly idle? How is the work split amongst the servers so the players don’t notice any glitches or slowdowns?

An intermediate solution I can see is assigning players to the permanent storage servers that sit in front of the cloud. Players pick which storage server they want to reside on. Each storage server then spawns a unique instance of the game world using the cloud computing resources. The storage server manages the cloud, growing the number of cloud computers used as more players sign on and throttling back as players sign off. There is still the issue of communication among the various cloud computers, but that could be resolved by having a permanent server act as a choke point through which all communication flows. There would be one choke point for each unique instance of the game world.
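
Here is a rough Python sketch of that intermediate design: each StorageServer owns one world instance, grows or shrinks its pool of cloud workers with the player count, and routes cross-worker messages through a single ChokePoint. The capacity constant and all names are assumptions made up for illustration, not a real cloud API.

    import math

    PLAYERS_PER_WORKER = 100  # assumed capacity; a real system would tune this


    class ChokePoint:
        """All cross-worker messages for one world instance flow through here."""

        def __init__(self):
            self.mailboxes = {}  # player_id -> list of pending messages

        def send(self, to_player, message):
            self.mailboxes.setdefault(to_player, []).append(message)


    class StorageServer:
        """Permanent front end: stores players and manages one world instance."""

        def __init__(self):
            self.players = set()
            self.workers = []  # handles to cloud workers (stand-ins here)
            self.choke_point = ChokePoint()

        def sign_on(self, player_id):
            self.players.add(player_id)
            self._rescale()

        def sign_off(self, player_id):
            self.players.discard(player_id)
            self._rescale()

        def _rescale(self):
            # Grow or shrink the worker pool to match the current player count.
            needed = max(1, math.ceil(len(self.players) / PLAYERS_PER_WORKER))
            while len(self.workers) < needed:
                self.workers.append(object())  # stand-in for "launch a cloud worker"
            while len(self.workers) > needed:
                self.workers.pop()  # stand-in for "release a cloud worker"


    server = StorageServer()
    server.sign_on("alice")
    server.sign_on("bob")
    server.choke_point.send("bob", "hello from alice")  # cross-worker chat via the choke point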

While perhaps not an ideal solution, it is the first that comes to mind. It’s a start towards solving a problem that will certainly take several iterations to get right. Of course, I’m just spouting ideas and haven’t actually undertaken the task of testing this out.

Anyone else have a more elegant solution than what I have offered? It will be interesting to explore what software engineering is needed to solve this problem.

Filed in: General.

2 Comments

  1. Comment by Robert P. Kohut:

    Hi Mark. I realize this post is over two years old now; however, in that time, technology has made a huge leap forward in terms of bandwidth availability and graphics card performance. I’m not sure if you’ve had the time to revisit possible solutions to cloud computing needs in the gaming market, but I think there’s definitely a better way to accomplish that task.

    In your setup, you try to push a currently working game server into the cloud. However, if you look at the current technology being translated to the cloud, the only thing presented to the end user is the client. Take a productivity software point of view: if I want to access a spreadsheet, all I see is the spreadsheet client. I can load my files, edit, and save all of my progress. In reality, I don’t really care where my data is stored, as long as I can reliably access it at a later point in time.

    Now we can translate that to an MMO. The idea would be to make the client readily available to the end user without them having to own the latest and greatest desktop system. In this case we defer all of the networking and graphical processing to the cloud. The game servers stay the same in terms of how they’re accessed; what changes is how the end user loads their game client.

    Two companies come to mind when thinking about playing games through the cloud. OnLive ( http://www.onlive.com ) has implemented its own version and has a fairly concrete plan to let players run their game on almost anything that has a screen and network capability. Unfortunately, they don’t talk much about the specifics of their technology, just what it will look like in the end. To learn more about the technology, I turn to another company, OTOY ( http://www.otoy.com ), which discusses the problem of streaming a game from the cloud across the net to your computer.

    How does OTOY do it? Without getting too far into specifics, their method is to run the game application on the latest GPU chipsets, convert the 3D world of triangles into a purely 2D format, and then transmit a compressed data set across the net to your screen (with all the latest and greatest shaders, of course). To read more on that, there’s an article on Forbes at http://www.forbes.com/forbes/2009/1102/technology-otoy-videogames-software-game-changer.html. This is technology at its best right now!

    How’s that for a more elegant solution?

    April 10, 2010 @ 15:46
  2. Comment by Mark Mzyk:

    Hey Robert,

    Thanks for the reply, even if it is two years after the fact. Clearly, based on the activity, this is still an interesting topic to consider. I have to say I hadn’t given the problem much thought after writing the post. I don’t even remember what triggered me to write it in the first place.

    OTOY and OnLive are both interesting. They appear to have solved the problem in essentially the same way, by offloading all the processing to the server. It harks back to the days of mainframes (which is just an observation, neither good nor bad). It is a more elegant solution than what I had thought up. I was so caught up in how games currently work that I didn’t think of their solution.

    I also found it interesting, from the Forbes article you linked to as well as several other articles I looked up, that OTOY uses downtime in the GPU to perform calculations for other games, so the GPU is always running near 100% (if I understood things correctly). That is an ingenious use of resources that also helps keep costs down. I wonder if it could be applied to cloud software outside of games. Of course, it may be overkill in every case except games.

    Thanks for pointing this out, Robert. It’s interesting information, and I’ll have to keep an eye on OnLive and OTOY to see what they eventually release and how successful it is. It would be fascinating to get a glimpse of the software and hardware behind these companies.

    April 13, 2010 @ 16:05