Everything posted by Keenan

  1. I can't reproduce this. I just reinstalled and everything works. Perhaps a bad install? What OS?
  2. That space is reserved for status text.
  3. Due to the major overhaul needed, it isn't possible to keep the old skin.
  4. Still in the middle of the update to roll it out.
  5. So we believe the issue may be around the London data center for our CDN provider. I've taken that data center out of the list and will continue to monitor the situation.

     What's a CDN? A CDN, or Content Delivery Network, is a service that hosts a site's data in data centers or "edge nodes" around the world. This means that when you hit the site, you're actually first asking the CDN for the location nearest you. The CDN knows roughly where you are through something called "geo IP location" - in other words, it estimates where you're coming from based on the address your internet provider assigns you. It then returns a host near that location so that you get the fastest download speeds.

     In our case, we believe the London data center is having issues, though I couldn't find evidence of this on our provider's website. We have several EU data centers selected, so removing London shouldn't be too noticeable to folks in the UK. Especially considering the advice in this thread has been to "pin" the Moscow data center in the hosts file, which is pretty far from the UK last I checked!

     I recommend those who have had issues revert the changes made to the hosts file and try again (give it a few hours from the time of this post), and continue reporting if you're still having issues. What's most helpful is to ping or traceroute the host and give us the IP you're getting so I can trace it to a data center. It will give me more information if I have to hit up support at the provider.

     Thanks for the patience as we try to resolve this issue.
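For those unsure what to send: a minimal sketch of the diagnostic steps. The hostname and addresses here are placeholders, not the real CDN host, and the sample line lets the snippet run without network access:

```shell
# If you "pinned" a data center in /etc/hosts, revert by deleting that
# line (e.g. "203.0.113.99 cdn.example.com") before retesting, then run:
#   ping -c 4 cdn.example.com
#   traceroute cdn.example.com
# The resolved IP to report appears in ping's header line; extracting it
# from a sample of that line looks like this:
line='PING cdn.example.com (203.0.113.10) 56(84) bytes of data.'
ip=$(printf '%s\n' "$line" | sed -n 's/^[^(]*(\([0-9.]*\)).*/\1/p')
echo "$ip"
```

The IP printed at the end is what lets us map your connection back to a specific data center.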
  6. I'm really conservative when it comes to talking about lag reduction on Xanadu. We actually understand the problem there, but it *is* possible that there will be an improvement. How significant remains to be seen.
  7. The celebration (no pun intended) comes after all servers are moved.
  8. All is well again. Celebration is alive and talking with the other servers properly!
  9. Rebooting Golden Valley to resolve the Celebration connectivity issues.
  10. We're aware of and working on a connectivity issue with Celebration. Please refrain from travel to that server until we've resolved the issue. You *will* be unable to play if you sail to Celebration at this time.
  11. It's Cloudnet hosting, and their plans on the page do not reflect the service we're being provided, as we're receiving a custom package.
  12. The provider is called Cloudnet and I've worked with their team directly from the start. It's not like Hetzner or OVH where you put money in the vending machine and get a server out. They do monitoring and support as well. Celebration is being selected for this because it's not running as well as we'd like on AWS. Given the benchmarks on the new host, I suspect Celebration will be running quite well post-move.
  13. There will be an extended downtime on June 11th at 1pm UTC while we take the servers down and move Celebration to a new hosting provider. Since this is a trial run, I will refrain from delving too far into the details of the provider and only say that I’ve been impressed with their knowledge and willingness to learn about Wurm’s needs. I’ve worked closely with them for a number of weeks now and I have confidence that we will be impressed with the result of this test. Once Celebration has been running successfully on this new provider, we will announce everything and answer any questions. I can’t wait to do a deep dive post on what we’ll be doing for hosting and how it will all work. For now, happy Wurming! Update: Celebration has moved!
  14. Sorry I didn't catch the posts here - I'm usually dropping quick messages in Discord when things go unexpectedly, as they did today. As for Rifts, while it's not our intention to interrupt player events, it's also something that's nearly impossible for us to plan around. We give a 30 minute shutdown notice for a reason, as that should be sufficient time to get to safety.
  15. It's important that a player is online at the same time as a GM so that we can gather the required information for a database and log dive. In this case there was insufficient information given for me to do my job and give adequate information to the GMs. TL;DR: If you put in a support ticket, please make sure you check on the ticket regularly and that you're available. If in doubt you can always reach out to the GM team via email at
  16. It was a crash. The cause was identified as an edge case with tower influence and will be included in the update coming later this week. We don't expect it to be an issue again until then. So no, this isn't what can be expected.
  17. I was saving the bigger announcement for our hosting status update, but yes - AWS is not a viable solution for us. While in testing it worked fine, once we put more load on the system we ran into bottlenecks on the disk I/O that not even raising IOPS would resolve to a satisfactory condition. I'll give more information in another post over the next few days. I will say that I'm in the middle of testing one provider and in talks with a second.
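For context on what chasing a disk bottleneck looks like: a minimal fio job file of the sort commonly used to measure random-write IOPS on a volume. This is purely illustrative - paths and parameters are assumptions, not our actual benchmark:

```ini
; Hypothetical fio job: measure 4K random-write IOPS on a test file.
; Run with: fio randwrite.fio
[global]
ioengine=libaio
direct=1
time_based
runtime=60

[randwrite-test]
rw=randwrite
bs=4k
iodepth=32
size=1g
filename=/tmp/fio-testfile
```

If measured IOPS stays flat even as the provisioned ceiling is raised, the bottleneck is somewhere other than the quota, which is roughly the situation described above.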
  18. Hello all. We were made aware of Desertion's issues a few weeks ago, and at the time the plan was to roll it into AWS after Celebration's optimizations were complete. Today we found that Desertion had degraded too far to continue waiting for our hosting solution to be ready, and we decided to move it to a spare within Hetzner during maintenance. After moving the server it was unable to connect with other servers. This outage lasted about an hour until we found the configuration issue and corrected it. We will be posting an update on our hosting situation in the near future, as we are currently in talks with other providers to see what fits us best. We've learned many lessons in our attempted move to AWS and we plan on using those lessons as we move forward. - Happy Wurming!
  19. A quick update. We rolled out the optimizations a day early due to needing to restart for Easter eggs to be available for folks. Unfortunately the database optimizations did not make enough of an impact on lag, so we are currently looking at the next steps to address this situation. I'll post a more meaty update when I have more information to share. Just know that we're still working on this and appreciate the patience as we work through such a huge change in server infrastructure.
  20. I've been aware of it for about a week. It's the same situation we had on Celebration where we suspect hardware but there's nothing definite showing up in our diagnostics (I just checked again today). Given the current state of the Celebration AWS uplift, we're waiting on that before we can really take action with Desertion. Should our optimizations on Monday go well, Desertion will be the 2nd server for uplift due to the situation. Update post:
  21. I can't speak for compensation, but I do want to discuss it internally. The main issue with compensation is that we cannot really do a single-server level of compensation. I mean, we can do sleep bonus per-server, but that doesn't help after the fact, since people may have gone to stay at a friend's deed because of the lag. Those people would miss out. And if we announced it beforehand and said "Hey! Come home so you can get free stuff!" then we'd have people flock to Celebration for it anyway. So anything we decide upon will likely come after we're done with all this host-moving and be applied globally. It's not something I can promise, but it is something on my mind due to all this.
  22. Hi all, it's been a quiet week from us, so I thought I'd give an update on what we're doing with Celebration, AWS, and future plans.

      On Monday we will be taking Celebration down to make some optimizations to the database. We are hopeful that this addresses the lag and that we can right-size the volume that hosts Celebration's data after a period of reviewing metrics. We can then continue with the AWS migration.

      I've been working on a back-up plan should the above fail, which is to convert the current infrastructure I've built into something that can be deployed anywhere - not just AWS. I don't think I've really said enough about how radically different the server configuration is in this new infrastructure, and how much easier it is to maintain. I didn't want to lose any of that in the event AWS doesn't work out for us.

      For the tech-savvy: I've been using Ansible to provision a VM locally to the same specifications as the instances I built in AWS. There's still some AWS dependency for now, but this gives us the freedom to look at other hosting options should we be unable to optimize the instance and database sufficiently to support Wurm.

      Just to be clear, this back-up plan is just that - a fallback in the event that AWS cannot be optimized enough. We have not abandoned AWS at this time, and we want to exhaust all of our options before we make that call. We just don't want to keep all our eggs in the same cloud.

      So that sums things up! Happy Wurming!
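The provision-anywhere idea mentioned above can be pictured with a minimal Ansible play like this one. Host groups, package names, and file names are assumptions for illustration, not Wurm's actual configuration:

```yaml
# Hypothetical play: bring a fresh VM to game-server spec,
# independent of which cloud (or bare-metal host) it runs on.
- hosts: game_servers
  become: true
  tasks:
    - name: Install the Java runtime the server process needs
      ansible.builtin.package:
        name: openjdk-17-jre-headless
        state: present

    - name: Create a dedicated service user
      ansible.builtin.user:
        name: wurm
        system: true

    - name: Deploy the systemd unit for the server process
      ansible.builtin.template:
        src: wurm.service.j2
        dest: /etc/systemd/system/wurm.service
      notify: restart wurm

  handlers:
    - name: restart wurm
      ansible.builtin.systemd:
        name: wurm
        state: restarted
        daemon_reload: true
```

Because the play only assumes an SSH-reachable machine, the same roles work against an AWS instance, a local VM, or another provider, which is the point of the fallback.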
  23. As Alectrys said, it's more that the link has changed for servers in AWS. I've not had the time to set up a dummy nginx host to bounce people to the new link with a 301 redirect.
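A dummy redirect host of that kind would look something like this nginx server block (hostnames are placeholders, not the real links):

```nginx
# Hypothetical redirect shim: permanently bounce requests for the
# old hostname to the new AWS-hosted location, preserving the path.
server {
    listen 80;
    server_name old-maps.example.com;
    return 301 https://new-maps.example.com$request_uri;
}
```

A `return 301` tells browsers and search engines the move is permanent, so old bookmarks keep working without the old host serving any content itself.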
  24. I keep asking to host a podcast or stream called "You think you do, but you really don't." - where I'd cover ideas that people bring up and say "No" to them in various ways. But alas, I've been denied my dream.