Keenan

Developer

Everything posted by Keenan

  1. Are you able to open www.wurmonline.com ? The client's files are hosted on the same server. If you can't, then there's not much we can do, as something outside of our control is blocking it.
  2. I also think it's rather toxic to be discussing the removal of other staff.
  3. Thank you. I couldn't have said it better myself.
  4. The databases are stored on external storage. We're running Percona MySQL from a Docker container that mounts that storage as a volume.
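    For anyone curious, here's a rough sketch of what that looks like with the Python docker SDK. The image tag, paths, and credentials below are placeholders for illustration, not our actual setup:

      import docker  # pip install docker

      client = docker.from_env()

      # Run Percona with the external storage mounted as the MySQL data directory.
      client.containers.run(
          "percona:8.0",                                       # placeholder image tag
          name="game-db",                                      # placeholder name
          detach=True,
          environment={"MYSQL_ROOT_PASSWORD": "change-me"},    # placeholder credential
          volumes={"/mnt/external/mysql": {"bind": "/var/lib/mysql", "mode": "rw"}},
          ports={"3306/tcp": 3306},
      )

    The important bit is the volumes mapping - the data lives on the external storage, so the container itself stays disposable.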
  5. Switched my Shares

    So you're assuming Gamethrill is doing nefarious acts. Got it. As for being a journalist, you clearly have a bias and a conflict of interest. I'd welcome it if you didn't have a server running that benefits from your "journalism".
  6. Switched my Shares

    Okay, some clarification, because people really love to read between the lines... See the first line of this: https://partner.steamgames.com/doc/features/keys It's common practice for publishers to bulk-sell keys. Nothing there says that I, as a developer for Wurm, know Gamethrill's internal workings. I don't even own shares in that company. That is all. Thank you.
  7. Switched my Shares

    Keys are usually obtained in bulk by the publisher of the game. The "shadily obtained" bit is either few and far between on some sites, or a misconception because the keys sell for so low. The way these sites usually work is that they buy quantity for less than retail and start off selling at a percentage of the retail price. As time goes on and they have more inventory to move, they drop the price lower and lower. I don't know the internals of Game Chest or Gamethrill, but I do know the practice of bulk-selling keys and the misconceptions consumers have about them.
  8. Switched my Shares

    A quick search finds an article that says Valve allows this practice. You have a terrible habit of posting false information and putting Wurm down for your own personal gain (your WU server), and I'd appreciate it if you ceased both practices. Thanks.
  9. Switched my Shares

    Some of the replies. Just oof. *goes back to doing what he does anyway* And I'm still the server overlord at the moment. THE HAMSTERS ARE MINE TO COMMAND.
  10. I've done all the cost evaluations and presented them. I'm well aware of the cost increase over Hetzner. With AWS, the savings come from reserved pricing. Reserved instances don't lock you in tightly either - there's room to scale if needed.
  11. We are in a situation where some servers are more expensive and some are cheaper. Overall pricing will likely be a bit higher, depending on whether we use reserved pricing, but it should stay fairly balanced since we have the option of choosing instance sizes that best fit each server's needs.
  12. Yes... traders... Oh, nothing... Edit: Ah, heck with it. I've never liked traders. Never thought they were good for much besides "abuse". Just some personal feelings.
  13. Hello everyone! The test servers are officially in AWS, with the Test Client now pointing to them! I will be removing the "Test AWS" client later today. I can already hear: "But wait, what about stress tests?" Well, I had to do some upgrades to our Storm server, which has been doing quintuple-duty as a build server, artifact server, and test server x3. After the upgrades, the test server VMs became quite unstable. Since I was so close to being able to open up test in AWS, I figured I'd just forge ahead and fix issues as they arise! There will be periods of downtime as I still need to do some work here and there, but I'll likely just spin up another test server for me to brea.. add features to. That's the benefit of AWS! So where does that leave us progress-wise? Well, I needed to add something to the list - the store! Can't forget that...

    Forums - DONE
    Wurmpedia - DONE
    WO Store - Not Started
    WO Website - Not Started
    WU Website - Not Started
    Build Server - In development
    Artifact Server - Not Started
    Test Servers (3) - Live in AWS, tweaks to continue
    Live Servers (14) - Tweaking and testing needed
  14. AWS Status Update

    PHEW. So it's been a busy week for me. I've been poring over this since I made my last post - every free moment I've had. The end result is... TEST SERVERS IN AWS! (kinda)

    I hit a pretty big snag with Java's RMI being very wonky inside a container. For those who understand it: Docker defaults to a bridged network mode. I was fine with this, and I set it up to expose and bind ports correctly. Everything worked except for RMI! Turns out RMI gets reaaaaaally picky about what you bind it to and what it can and can't resolve for names. So I had to switch to host mode for networking and boom! It's odd that I never ran into this problem before, considering the current test servers run in virtual machines on a bridged network... (or did I use NAT... I forget)

    I have a bit more work now - like moving the config changes I made on the servers back into the code so they're set. I'll also probably tear down and stand up these instances again this weekend to test all of that out. Then there's the scripts for updating the container image and firing things back up, as well as shutting things down cleanly from the console. Once that's all done, hopefully we can get people on there to help us test performance.

    The tinkerers among you may have noticed the Test AWS client option under Choose Client in the native launcher. This will take you to the test servers as they are in AWS. If you do use it, keep in mind it's a construction zone at the moment, and I may do hard shutdowns without notice as I'm fiddling and fighting with things. Also, I plan on taking a final snapshot of the test servers as they currently are in their VMs before this is over, so nothing done on the AWS test servers will stay. More information to follow!
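    For the curious, here's a rough sketch of that host-networking workaround using the Python docker SDK. The image name and paths are placeholders, and our actual scripts may look quite different:

      import docker  # pip install docker

      client = docker.from_env()

      # With network_mode="host" the container shares the host's network stack,
      # so RMI binds to and resolves the real host name and ports directly.
      # Port mappings aren't needed (or used) in this mode.
      client.containers.run(
          "wurm/test-server:latest",                           # placeholder image name
          name="test-server-1",                                # placeholder name
          detach=True,
          network_mode="host",
          volumes={"/opt/wurm/test1": {"bind": "/data", "mode": "rw"}},  # placeholder paths
      )

    In bridged mode, the same call would need explicit port mappings plus something like the java.rmi.server.hostname property pointed at a name RMI can actually resolve - which is exactly where things got wonky.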
  15. Could you find and paste the stack trace of the crash?
  16. I believe it's just old data based on hardware limitations of the time. Sometimes when something's set, it just gets forgotten.
  17. To add to Retrograde's comment: Sometimes the fixes require data to be modified as well, which requires a server to be restarted. For example, moving Jackal required all servers to be updated and made aware of the new IP address. It's possible this would've been refreshed, but it's safer to just bring the servers down and back up again. As Retro said though, often a fix for one server or cluster is needed on all. Plus it makes me happy knowing all servers are running the latest code.
  18. This is known. The initial fix was simply to get them to stop murdering players through walls. A more extensive fix is being worked on.
  19. So on March 1st, I embarked on a wild journey to bring Wurm into AWS. I had figured it'd take me a few weeks to a month. Yeah, that didn't happen. We now have 14 game servers, three test servers, an artifact server, a build server, and four websites. That's... well, it's more work than I originally thought. We're looking at two ways of rolling things out to the cloud:

    Manual - Manually request resources and set up each server by hand.
    Automatic - Infrastructure as code. Once a system configuration is written, it will always be set up the same way every time it is run. (There's a rough sketch of this at the end of the post.)

    One of the pros of automatic is that we can spin up new test and game servers as needed. Primarily useful for test servers! The way I've structured it so far also gives us the freedom to huddle game servers together on one larger machine to keep costs down. It can be cheaper overall to pay for a beefier instance and put two game servers on it than to pay for two smaller instances, one for each. One of the cons is the time it takes to write everything, especially when I'm forging new territory here. I mean, I've deployed to AWS many times, but each challenge is different. Our infrastructure is vastly different from the last one I helped with.

    One of the pros of a manual roll-out is that there's no code to write. You put the time in, and thanks to things like ready-made server images, that time is reduced. Once you're done, it's online. A major con, though, is that you'll have to manually duplicate any setup required, and more complicated resources are more time-consuming. Recently I've moved the forums and the wiki to AWS, and both of these moves were of the manual type. This fit perfectly, though, as the resources were simple. The most time spent was transferring data and upgrading the platforms once established on the new servers.

    So with that out of the way, here's a breakdown of where everything is:

    Forums - DONE
    Wurmpedia - DONE
    WO Website - Not Started
    WU Website - Not Started
    Build Server - In development
    Artifact Server - Not Started
    Test Servers (3) - In development
    Live Servers (14) - In development

    My intention was to finish the build and artifact servers, which will clear up a host for us. I'm going to get those to a working state before jumping back over to the test and live servers at this point. Our outage this weekend really underscored the need to complete the game servers first. I did work with Hetzner to get spare hosts repaired though, so should a drive go down, we have the ability to move the server to a working one.

    I hope this brings folks up to date with the effort. I'm not nearly as far along as I'd hoped to be by now. That's mostly due to a very busy first half of the year with travel and work. While my work will remain at a steady rate after this week, my travel plans are done for the rest of the year. That will open me up for more long-term development on this project. Thanks all, and happy Wurming!
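    To make the "automatic" option a bit more concrete, here's a rough sketch of what a test server definition could look like as infrastructure code, using the AWS CDK in Python. It's illustrative only - the tool choice, instance size, and names are placeholders rather than our actual setup:

      from aws_cdk import App, Stack
      from aws_cdk import aws_ec2 as ec2
      from constructs import Construct

      class TestServerStack(Stack):
          """One test server described as code - rerunning this always yields the same setup."""

          def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
              super().__init__(scope, construct_id, **kwargs)

              vpc = ec2.Vpc(self, "GameVpc", max_azs=2)          # placeholder network

              # A beefier single instance could also host two game servers to keep costs down.
              ec2.Instance(
                  self, "TestServer",
                  vpc=vpc,
                  instance_type=ec2.InstanceType("t3.large"),    # placeholder size
                  machine_image=ec2.AmazonLinuxImage(
                      generation=ec2.AmazonLinuxGeneration.AMAZON_LINUX_2
                  ),
              )

      app = App()
      TestServerStack(app, "test-server-stack")                  # placeholder stack name
      app.synth()

    Spinning up another test server is then just instantiating the stack again under a new name, which is the whole appeal over clicking through the console each time.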
  20. It isn't 3x (or 4x now) Freedom. Epic is called "2x" but it also has a different skill gain system. Jackal uses that system, and it is now 4x instead of Epic's 2x. You can't compare directly to Freedom gains. It was a bit of a miscommunication due to how we always call Epic "2x" internally. We will tweak the gains as needed to best match the intended Jackal progression. Edit: Over time the skill gains *are* faster than Freedom's, if that wasn't obvious in the above.
  21. I'll have to work on that. Thank you for the report! It seemed to be an easy fix, so give it a try when you can!
  22. I can confirm. Any object close to intersecting a cave wall has a chance to pop to the surface when the server loads. It's an issue we're aware of, but it isn't easy to fix at this time. The workaround for now is to move items away from cave walls. I know, not the best solution. I like my forges in walls!
  23. Thank you for posting this! I'm certain it will help folks who need to go through this process. Unfortunately, changing the certificate is something we need to do. It was actually changed a month ago but I suppose it wasn't an issue until now. It's really odd how this cropped up.
  24. Hi all! With Jackal being sorted out, I was able to complete the move and upgrade of the Wurmpedia! It is now humming along happily in AWS and should be available. Be sure to do a hard refresh (CTRL+F5) if you saw any strangeness prior to this post being made - I was still working out the bugs from the upgrade! Thanks for all your patience today, and happy Wurming!