Keenan

AWS Status Update


So March 1st I embarked on a wild journey to bring Wurm into AWS. I had figured it'd take me a few weeks to a month.

 


 

Yeah, that didn't happen.

 

We now have 14 game servers, three test servers, an artifact server, a build server, and four websites. That's... well, it's more work than I originally thought.

 

We're looking at two ways of rolling things out to the cloud:

  • Manual - This means manually requesting resources and setting up each server by hand.
  • Automatic - This means infrastructure as code. Once a system configuration is written, it will be set up the same way every time it is run.
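To sketch what "set up the same way every time" means in practice (all names here are hypothetical, not our actual tooling): a provisioning step written as code is deterministic, so running it twice yields the exact same configuration with no drift.

```python
# Sketch of the infrastructure-as-code idea (hypothetical names).
# The point: given the same inputs, the rendered configuration is
# identical on every run, so re-provisioning is repeatable.

def render_server_config(name: str, instance_type: str, ports: list) -> dict:
    """Deterministically build the config for one game server."""
    return {
        "name": name,
        "instance_type": instance_type,
        "ports": sorted(ports),          # normalize input ordering
        "image": "wurm-server:latest",   # placeholder image tag
    }

# Running it twice yields the exact same result -- no drift.
first = render_server_config("test-1", "t3.large", [3724, 7220])
second = render_server_config("test-1", "t3.large", [7220, 3724])
assert first == second
```

A manual setup, by contrast, depends on a person repeating every step correctly each time.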

 

One of the pros of the automatic approach is that we can spin up new test and game servers as needed - primarily useful for test servers! The way I've structured it so far also gives us the freedom to huddle game servers together on one larger machine to keep costs down: it can be cheaper overall to pay for one beefier instance running two game servers than to pay for a smaller instance for each. The main con is the time it takes to write everything, especially when I'm forging new territory here. I've deployed to AWS many times, but each challenge is different, and our infrastructure is vastly different from the last one I helped with.
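To illustrate the cost point with made-up numbers (these are NOT real AWS rates, purely illustrative): one larger instance hosting two game servers can come out cheaper than two smaller instances.

```python
# Hypothetical hourly prices -- NOT actual AWS pricing.
small_hourly = 0.0832   # one smaller instance per game server
large_hourly = 0.1504   # one beefier instance hosting two servers

two_small = 2 * small_hourly   # 0.1664/hr
one_large = large_hourly       # 0.1504/hr

# Rough monthly savings at 24h/day, 30 days.
monthly_savings = (two_small - one_large) * 24 * 30
print(f"two small: ${two_small:.4f}/hr, one large: ${one_large:.4f}/hr")
print(f"approx monthly savings: ${monthly_savings:.2f}")
```

With these example rates the difference is small per hour but adds up over a month of continuous uptime, which is exactly the always-on profile of a game server.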

 

One of the pros of a manual roll-out is that there's no code to write. You put the time in, and thanks to things like ready-made server images, that time is reduced. Once you're done, it's online. A major con, though, is that you'll have to manually duplicate any setup required, and more complicated resources are more time-consuming. Recently I've moved the forums and the wiki to AWS, and both of these moves were of the manual type. This fit perfectly, though, as the resources were simple. The most time was spent transferring data and upgrading the platforms once they were established on the new servers.

 

So with that out of the way, here's a breakdown of where everything is:

  • Forums - DONE
  • Wurmpedia - DONE
  • WO Website - Not Started
  • WU Website - Not Started
  • Build Server - In development
  • Artifact Server - Not Started
  • Test Servers (3) - In development
  • Live Servers (14) - In development

 

My intention was to finish the build and artifact servers, which will free up a host for us. At this point I'm going to get those to a working state before jumping back over to the test and live servers. Our outage this weekend really underscored the need to complete the game servers first, though. I did work with Hetzner to get spare hosts repaired, so should a drive go down, we have the ability to move the server to a working one.

 

I hope this brings folks up to date with the effort. I'm not nearly as far along as I'd hoped to be by now, mostly due to a very busy first half of the year with travel and work. My work will remain at a steady rate after this week, and my travel plans are done for the rest of the year, which will open me up for more long-term development on this project.

 

Thanks all, and happy Wurming!



Automagical - Any time you create a build process that is manual, the man or woman doing the build will make mistakes along the way.  Automatic is worth the extra time.  The agility and flexibility you gain in the future is so worth it.  Imagine special events, where you pop open new servers for a week of special treasure or beast hunting.  Toss up a test server full of sweet WU mods.  Spawn servers at the snap of your fingers, like a true god, not these false gods we pray to in Wurm. (insert chanting sounds)

 

Pretty exciting stuff, Keenan. Thank you for all the work on this project!


PHEW.

 

So it's been a busy week for me. I've been poring over this since I made my last post - every free moment I've had. The end result is... TEST SERVERS IN AWS! (kinda)

 

I hit a pretty big snag with Java's RMI being very wonky inside a container. Basically, for those who understand it: Docker defaults to a bridged network mode. I was fine with this, and I set it up to expose and bind ports correctly. Everything worked except RMI! It turns out RMI gets reaaaaaally picky about what you bind it to and what names it can and can't resolve. So I had to switch to host mode for networking, and boom! It's odd that I never ran into this problem before, considering the current test servers run in virtual machines on a bridged network... (or did I use NAT... I forget)
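For the curious, the workaround described above boils down to two knobs: Docker's host networking mode, and pinning the hostname RMI advertises via the standard `java.rmi.server.hostname` JVM property. A rough sketch (container name, image tag, hostname, and jar are all hypothetical placeholders, not our actual deployment):

```shell
# Host networking sidesteps Docker's bridged NAT, which RMI's
# name resolution trips over. All names here are illustrative.
docker run -d --network host \
  --name wurm-test \
  wurm-server:latest \
  java -Djava.rmi.server.hostname=test.example.com \
       -jar server.jar
```

With `--network host` the container shares the host's network stack directly, so RMI binds and resolves exactly as it would on a bare server.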

 

I have a bit more work now - like moving the config changes I made on the servers back into the code so they're set. I'll also probably tear down and stand up these instances again this weekend to test all of that out. Then there's the scripts for updating the container image and firing things back up, as well as shutting things down cleanly from the console. Once that's all done, hopefully we can get people on there to help us test performance.
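The update-and-restart flow mentioned above might look something like this as a rough sketch (script, container, and image names are hypothetical, not the actual deployment scripts):

```shell
# Rough sketch of refreshing a containerized test server.
# All names are placeholders.
set -e
docker pull wurm-server:latest        # fetch the rebuilt image
docker stop wurm-test || true         # clean shutdown if running
docker rm wurm-test || true           # drop the old container
docker run -d --network host --name wurm-test wurm-server:latest
```

The `|| true` guards keep the script from failing when the container isn't running yet, so the same script works for first boot and for updates.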

 

The tinkerers among you may have noticed the Test AWS client option under Choose Client in the native launcher. This will take you to the test servers as they are in AWS. If you do use it, keep in mind it's a construction zone at the moment, and I may do hard shutdowns without notice as I'm fiddling and fighting with things. Also, I plan on taking a final snapshot of the test servers as they currently are in their VMs before this is over, so nothing done on the AWS test servers will stay.

 

More information to follow!

