About Keenan

  1. I've done all the cost evaluations and presented them. I'm highly aware of the costs compared to Hetzner. With AWS, the savings are in reserved pricing, and reserved doesn't lock you in tightly either - there's room to scale if needed.
  2. We are in a situation where some servers are more expensive and some are cheaper. The pricing will likely be a bit higher depending on whether we do reserved pricing, but it should be fairly balanced, as we have the option of choosing instance sizes that best fit each server's needs.
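To make the reserved-pricing point concrete, here's a rough sketch of the math. The hourly rates below are hypothetical placeholders for illustration only, not actual AWS or Hetzner prices:

```shell
#!/bin/sh
# Hypothetical hourly rates (placeholders, NOT real AWS pricing):
ON_DEMAND=0.17   # on-demand $/hour for some instance size
RESERVED=0.10    # effective $/hour with a 1-year reserved commitment
HOURS=8760       # hours in a year

# Annual cost at each rate, and the yearly savings from reserving.
od_year=$(awk "BEGIN { printf \"%.2f\", $ON_DEMAND * $HOURS }")
rs_year=$(awk "BEGIN { printf \"%.2f\", $RESERVED * $HOURS }")
savings=$(awk "BEGIN { printf \"%.2f\", ($ON_DEMAND - $RESERVED) * $HOURS }")

echo "on-demand/year: \$$od_year"
echo "reserved/year:  \$$rs_year"
echo "savings/year:   \$$savings"
```

With these made-up rates, reserving saves a bit over 40% per instance-year, which is the kind of gap that can offset AWS's higher sticker price versus a flat-rate host.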
  3. Yes... traders... Oh, nothing... Edit: Ah, heck with it. I've never liked traders. I never thought they were good for much besides "abuse". Just some personal feelings.
  4. Hello everyone! The test servers are officially in AWS, with the Test Client now pointing to them! I will be removing the "Test AWS" client later today.

     I can already hear: "But wait, what about stress tests?" Well, I had to do some upgrades to our Storm server, which has been doing quintuple-duty as a build server, an artifact server, and three test servers. After the upgrades the test server VMs became quite unstable. Since I was so close to being able to open up test in AWS, I figured I'd just forge through and fix the issues as they arise! There will be periods of downtime as I still need to do some work here and there, but I'll likely just spin up another test server for me to brea.. add features to. That's the benefit of AWS!

     So where does that leave us progress-wise? Well, I needed to add something to the list - the store! Can't forget that...

     Forums - DONE
     Wurmpedia - DONE
     WO Store - Not Started
     WO Website - Not Started
     WU Website - Not Started
     Build Server - In development
     Artifact Server - Not Started
     Test Servers (3) - Live in AWS, tweaks to continue
     Live Servers (14) - Tweaking and testing needed
  5. AWS Status Update

    PHEW. So it's been a busy week for me. I've been poring over this since I made my last post - every free moment I've had. The end result is... TEST SERVERS IN AWS! (kinda)

    I hit a pretty big snag with Java's RMI being very wonky inside a container. Basically, for those who understand it: Docker defaults to a bridged network mode. I was fine with this, and I set it up to expose and bind ports correctly. Everything worked except for RMI! It turns out RMI gets reaaaaaally picky about what you bind it to and what it can and can't resolve for names. So I had to switch to host mode for networking, and boom! It's odd that I never ran into this problem before, considering the current test servers run in virtual machines on a bridged network... (or did I use NAT... I forget)

    I have a bit more work now - like moving the config changes I made on the servers back into the code so they're set. I'll also probably tear down and stand up these instances again this weekend to test all that out. Then there's the scripts for updating the container image and firing things back up, as well as shutting things down cleanly from the console. Once that's all done, hopefully we can get people on there to help us test performance.

    The tinkerers among you may have noticed the "Test AWS" client option under Choose Client in the native launcher. This will take you to the test servers as they are in AWS. If you do use it, keep in mind it's a construction zone at the moment and I may do hard shutdowns without notice as I'm fiddling and fighting with things. Also, I plan on taking a final snapshot of the test servers as they currently are in their VMs before this is over, so nothing done on the AWS test servers will stay.

    More information to follow!
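For anyone curious about the bridged-versus-host distinction described above, here's a sketch of the two setups. The image name, ports, and flags are illustrative placeholders, not Wurm's actual configuration:

```shell
# Sketch of the two Docker networking setups described above.
# Image name and ports are illustrative, not Wurm's actual setup.

# Bridged mode (Docker's default): ports must be published explicitly.
# RMI can break here because the JVM advertises the container's internal
# address/hostname in its remote stubs, which outside clients can't reach.
docker run -d -p 7220:7220 -p 7221:7221 wurm/test-server

# Host mode: the container shares the host's network stack, so RMI binds
# and advertises the host's own address - no port mapping needed.
docker run -d --network host wurm/test-server

# An alternative that can keep bridged mode workable is telling RMI
# which address to advertise in its stubs:
#   java -Djava.rmi.server.hostname=<public-host-or-ip> ...
```

The `java.rmi.server.hostname` property exists for exactly this "picky about names" problem, though host networking sidesteps it entirely.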
  6. Could you find and paste the stack trace of the crash?
  7. I believe it's just old data based on hardware limitations of the time. Sometimes when something's set, it just gets forgotten.
  8. To add to Retrograde's comment: Sometimes the fixes require data to be modified as well, which requires a server to be restarted. For example, moving Jackal required all servers to be updated and made aware of the new IP address. It's possible this would've been refreshed, but it's safer to just bring the servers down and back up again. As Retro said though, often a fix for one server or cluster is needed on all. Plus it makes me happy knowing all servers are running the latest code.
  9. This is known. The initial fix was simply to get them to stop murdering players through walls. A more extensive fix is being worked on.
  10. So on March 1st I embarked on a wild journey to bring Wurm into AWS. I figured it'd take me a few weeks to a month. Yeah, that didn't happen. We now have 14 game servers, three test servers, an artifact server, a build server, and four websites. That's... well, it's more work than I originally thought.

      We're looking at two ways of rolling things out to the cloud:

      Manual - Manually request resources and set up each server.
      Automatic - Infrastructure as code. Once a system configuration is written, it will always be set up the same way every time it is run.

      One of the pros of automatic is that we can spin up new test and game servers as needed - primarily useful for test servers! The way I've structured it so far also gives us the freedom to huddle game servers together on one larger machine to keep costs down. It can be cheaper overall to pay for one beefier instance and put two game servers on it than to pay for two smaller instances. One of the cons is the time it takes to write everything, especially when I'm forging new territory here. I mean, I've deployed to AWS many times, but each challenge is different. Our infrastructure is vastly different from the last one I helped on.

      One of the pros of a manual roll-out is that there's no code to write. You put the time in, and thanks to things like ready-made server images, that time is reduced. Once you're done, it's online. A major con, though, is that you have to manually duplicate any setup required, and more complicated resources are more time-consuming. Recently I moved the forums and the wiki to AWS, and both of those moves were of the manual type. This fit perfectly, though, as the resources were simple. Most of the time was spent transferring data and upgrading the platforms once they were established on the new servers.

      So with that out of the way, here's a breakdown of where everything is:

      Forums - DONE
      Wurmpedia - DONE
      WO Website - Not Started
      WU Website - Not Started
      Build Server - In development
      Artifact Server - Not Started
      Test Servers (3) - In development
      Live Servers (14) - In development

      My intention is to finish the build and artifact servers, which will clear up a host for us. I'm going to get those to a working state before jumping back over to the test and live servers. Our outage this weekend really underscored the need to complete the game servers first. I did work with Hetzner to get spare hosts repaired, though, so should a drive go down we have the ability to move the server to a working one.

      I hope this brings folks up to date with the effort. I'm not nearly as far along as I'd hoped to be by now. That's mostly due to a very busy first half of the year with travel and work. While my work will remain at a steady rate after this week, my travel plans are done for the rest of the year. That will open me up for more long-term development on this project.

      Thanks all, and happy Wurming!
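The "automatic" roll-out described above could look something like this with the AWS CLI. The AMI ID, instance type, key name, and tag values are all hypothetical placeholders:

```shell
# Sketch of the "automatic" roll-out idea using the AWS CLI.
# The AMI ID, instance type, key name, and tags below are hypothetical.

# Launch a game-server instance from a prebaked image. Because the
# command (or the template it came from) is code, running it again
# always yields the same setup.
aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type m5.xlarge \
  --key-name wurm-admin \
  --tag-specifications 'ResourceType=instance,Tags=[{Key=role,Value=test-server}]'

# Tear it down just as repeatably when it's no longer needed:
# aws ec2 terminate-instances --instance-ids <id>
```

In practice this would live in a template (CloudFormation, Terraform, or similar) rather than ad-hoc CLI calls, which is what makes spinning test servers up and down on demand cheap.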
  11. It isn't 3x (or 4x now) Freedom. Epic is called "2x" but it also has a different skill gain system. Jackal uses that system, and it is now 4x instead of Epic's 2x. You can't compare directly to Freedom gains. It was a bit of a miscommunication due to how we always call Epic "2x" internally. We will tweak the gains as needed to best match the intended Jackal progression. Edit: Over time the skill gains *are* faster than Freedom's, if that wasn't obvious in the above.
  12. I'll have to work on that. Thank you for the report! It seemed to be an easy fix. Try it when you can!
  13. I can confirm. Any object close to intersecting with a cave wall has a chance to pop to the surface when the server loads. It's an issue we're aware of, but it isn't easy to fix at this time. The workaround for now is to move items away from the walls in caves. I know, not the best solution. I like my forges in walls!
  14. Thank you for posting this! I'm certain it will help folks who need to go through this process. Unfortunately, changing the certificate is something we need to do. It was actually changed a month ago but I suppose it wasn't an issue until now. It's really odd how this cropped up.