Everything posted by Keenan

  1. So you've probably heard of the wonderful WurmAPI that Warlander created. You've also probably heard of the exciting new map generator made by Budda himself! Well, this isn't nearly as exciting, but it may be useful! MapViewer opens the files generated by WGenerator, or any other map generator that uses WurmAPI.

    News

    I've moved away from including IDE-specific project files; you'll find everything you need to build this in a handy Maven config. I also moved to Travis to allow for faster integration of contributions from the community. Lastly, I added some info to the tooltips when you mouse over pixels (tiles) on the map! A rework is in progress: I'm adding functionality to help with generating map images via the command line and tiling them for use with map frameworks.

    Features

    - Supports all current map views: Isometric (Wurm-style 3D), Terrain (flat), Cave/Ore view, Topographical
    - Toolbar to control various options in each view
    - Status bar/mouseover for map details (click on a spot to update the status bar, or check Show Mouseover)
    - Save the entire map image, or just the current viewport, under the File menu

    Change Log

    v1.3.3 [Nov 20 2016] Tool Tips & Build Info
    - Added flower and grass info to tool tips, and build info to the About tab.

    v1.3.2 [Nov 12 2016] Contributions and CI
    This release has been a long time waiting! Features / Fixes:
    - Command-line support from Yoplitein
    - Smoother zooming and quality on mouse-over by andyneff

    v1.2.0 [Oct 27 2015] App Refresh!
    - New icon! Now attempts to use the system look and feel by default.
    - Open Map: reworked to be easier to use.
    - Large Map Size: will now warn when the map size is too big to load in currently available memory.
    - Save Images: remembers the last folder, auto-generates image names for quicker saving, general improvements/fixes. Bug fix: PNGs save correctly now!
    - Now uses threads for loading map files, rendering the map image, and saving map images. Displays a small box indicating progress or activity. (Not all API activities allow monitoring of progress.)
    - Name of the map (taken from the folder the map files are stored in) now appears in the title bar.
    - Map dimensions now appear in the new status bar.
    - Zoom is preserved when the map is refreshed or the view is changed.
    - Options window refactored into a toolbar; toolbar tabs also function as view changers.
    - Removed the auto-refresh checkbox. This is now default functionality.
    - Removed Save Settings; this is done automatically. Changed Restore Defaults to Reset to fit the toolbar better.
    - Map details now show in the status bar when clicking on Terrain, Topographical, and Cave View maps.
    - Added a Show Mouseover checkbox to toggle map details on mouse hover.
    - Improved scrolling and zoom.
    - New Help & About menu.
    - Everything is also bound to hotkeys! No mouse needed (except for mouse-over details).
    - Terrain is the new default view. Normal is now Isometric.
    - Releasing version 1.2.0 under the MIT License.

    v1.02 [Oct 19 2015] Usability Fixes
    - Made file choosing more intuitive.
    - Fixed a possible infinite loop bug.
    - Added Auto Refresh Map to Options for real-time map refreshes as you change settings. To disable this behavior, simply uncheck the box.
    - Added an actual version number too.

    v1.01 [Oct 18 2015] Cross-platform Fix
    - Fixed an oversight for file paths on other platforms. Set the file chooser to default to the user's home directory.

    v1.00 [Oct 18 2015]
    - Initial push and release.

    Note: You must have all of the files generated by WurmAPI in a single folder. This means all five map files (,,,, and ) should be in the same folder.

    GitHub: (Eclipse project files included, WurmAPI (wurmapi.jar and common.jar) under /lib)
    Download latest version: (runs standalone, no compiling or libraries needed. Tested under Java 1.8, Windows.) Running Linux? Let me know how it runs!

    Possible Future Features
    - Database interaction to place village map markers and other Points of Interest
    - Overlay grid lines based upon custom settings
    - Artistic effects (fog of war, old paper map, etc.)
    - Ability to draw on map images
    - Ability to paint map terrain data on maps
  2. I misunderstood Budda's reply and pushed a fix for this so that demi-gods reference their template deity's favor. Sorry for the confusion.
  3. I verified the intention with Budda. It may have changed since the overhaul thread.
  4. Demi-gods do not have their own global spells anymore, thus it is intended for prayers and sermons from a demi-god to feed into the template god's favor pool.
  5. Valrei International. 087

    Having some experience with the ins and outs of Steam, we'd have to outright ban account/item/silver sales to be listed on Steam. Our current stance is obviously hands-off, but that wouldn't be enough.
  6. Wurm Service Announcements

    Just an additional note: It seems to happen with shallow waters as well.
  7. Instructions

    1. Install Ago's mod loader. You can find it here: Download (new maintainer: )
    2. Unzip this archive into the mods/ folder. You should see a setup like this when done (replace "Wurm Unlimited Dedicated Server" with Wurm Unlimited\WurmServerLauncher\ for bundled servers):

       Wurm Unlimited Dedicated Server\mods\
       Wurm Unlimited Dedicated Server\mods\deedmod\
       Wurm Unlimited Dedicated Server\mods\deedmod\DeedMod.jar

    3. Start your server with the batch file provided with Ago's loader: modlauncher.bat
    4. Profit!

    See for settings.
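If you want to double-check step 2, a few lines of script can confirm the layout shown above. This is a minimal sketch under assumptions: the `server_root` argument stands in for your actual "Wurm Unlimited Dedicated Server" (or WurmServerLauncher) path, and the file names are the ones from the instructions.

```python
# Hypothetical sanity check for the mod layout described above.
# server_root is an assumption: pass your actual server install path.
from pathlib import Path

def mod_installed(server_root: str) -> bool:
    """True if mods/deedmod/DeedMod.jar exists under the server root."""
    return (Path(server_root) / "mods" / "deedmod" / "DeedMod.jar").is_file()
```

Run it against your install directory; if it returns False, the archive was most likely unzipped one folder too deep or too shallow.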
  8. Hi Everyone, As previously promised, I've taken the time to write a postmortem of the stability issues Independence has had, along with our future plans for server hosting. Independence began to lag considerably some time before February 7th. We had done a maintenance restart as scheduled and had hoped this would fix the issue. It did not. Later in the day we restarted only Independence in an attempt to fix the lag. During this restart I rebooted the server that Independence runs on and upgraded packages. This didn't resolve the lag either, which I was beginning to suspect was hardware-related. By February 8th, we had a fairly good idea that a drive in the RAID was failing, so we began to move Independence to spare hardware. That hardware was our old Bridges test server, for those interested. It wasn't new by any means, but it did have fewer cycles on it and the drives were fresher. Independence lived here for about two weeks while we worked on restoring the previous hardware. In the end, Hetzner replaced the failed drive as well as the entire hardware, leaving just one of the older drives with all the data. I restored the RAID and we upgraded the operating system as well as all packages. I had done this on a test server already, but the intention was to make Independence the first server to get this treatment in quite some time. You may recall that we were planning a very long downtime in the future. This was to do the same to all other servers and get things back up to date. Well, more on that in a bit. In the end, Independence is still experiencing lag and we're quite aware of it. I believe this to be hardware-related once again, and we will monitor it as things continue. The sad part is that all of these issues overshadowed the fine work Samool did to reduce lag across all servers. We can move Independence back to the spare hardware should it become needed, though I am trying to isolate the problem. If it becomes unplayable though, we'll do the move. 
While Independence was taking up all my time, Xanadu wanted some attention as well. You may recall a few crashes experienced. Well, these were long-standing and known crashes that we were unable to trace before. Thanks to the scrutiny and diagnostics tools we've had running to single out lag hot spots, we were able to trace the crashes back and fix them. Finally. One of these issues actually took Celebration down back in January. All of this was completely unrelated to the issues Independence was facing, and yet it made our stability look pretty awful. Budda and I were working on solutions in the background, not just for our stability issues but for a number of other problems as well. Hetzner has not been the most reliable host for many here, with network slowdowns and hiccups, and even router outages and emergency work that left us helpless as people were unable to play. Working with Rolf, we developed a plan to move from Hetzner onto a more stable infrastructure with Amazon Web Services. This move is in its early stages. I am in the process of writing the code for the infrastructure and I am planning on standing up our test instances there first. If all goes well, I can begin writing the live server infrastructure and we can write up a future plan on the move and the required downtime to make this happen. This is what I meant by that extended downtime in the future - instead of patching up old hardware, we will be moving to new instances in a reliable cloud environment. For those concerned, we plan on using the Frankfurt, EU location so the servers won't be "moving" all that far. I've had a lot of experience with AWS and I am very excited for what this means. While we maintain backups right now, all of this will become more secure and easier to manage. We can allocate more resources to a specific server if it becomes needed, or scale back and save money if a server becomes less populated. 
It means flexibility and stability for Wurm now and into the future, especially with the option to purchase reserved resources. I’m excited and I ask for everyone to have patience while we work through this transition. If anyone is curious, I can detail the infrastructure a bit once I’ve ironed out the details. Until then, happy Wurming!
  9. Poll to gauge Player perception of the GM team

    I think it's fair to say that most of the time the GM team is spot on and damn good at their job. I'll be the first to admit that the GMs have far more limited tools than we developers would like them to have, and they do their best within the scope of those tools. You don't need to go further than a WU server to know this is the truth, and it would take a full-time developer a considerable amount of time to make the situation better. It's true that there are some tools we've held back from WU, but the majority are there. We have done some things, such as a log for deleted items. Data storage is constantly a concern though (for now), so we can't simply log every action everyone does and thus have the ability to completely undo every action. Just some developer insight. I've been honored to work alongside the GMs on a number of cases over the years due to my position as Game Server Administrator. My vote was "no opinion" as I'm staff and wouldn't want to add a vote that'd be considered biased. Edit: And just to be clear, my comment about the tools is meant to highlight their skill in dealing with day-to-day situations.
  10. Cant log in on laptop

    Brash_Endeavors is awesome.
  11. Cant log in on laptop

    Looks like a change to deployment intended for test made it to live. Those who have followed the instructions may have to do so again, but others should see the issue resolved. Sorry about that.
  12. Devblog: The Rest of 2019

    Dysentery for everyone!
  13. Devblog: Server Issues Postmortem & Future

    This is one of my pet projects that I do hope to complete. Since I have to do it all manually right now, I'll more than likely automate the system at some point. I'm not sure if it'll be live map as we do support events like mazes and such, but something that lets us put out a more scheduled dump of the maps would be amazing.
  14. Have Sindusk take over Epic

  15. Devblog: Server Issues Postmortem & Future

    Hi all. It's been a while since my last update here. I'm going to start by taking this quote from another thread: Let me break it apart into the main bits:

    No cloud optimization for Wurm

    This is true in general. Wurm cannot scale in the sense that we can't spin up, say, two or three instances to help Xanadu cope with lag. Yet this is not the only solution the cloud has to offer. I'm mainly looking for the stability that comes with hosting on AWS from a network perspective, as well as the ability to build in even more safeguards against data loss. It also means faster recovery in the event of a server outage. Hetzner doesn't give us a whole lot of options in that regard, and the network has been abysmal for quite some time.

    The Costs

    The costs are a huge burden on me, as a matter of fact. One thing I did when talking to Rolf about this was show that, if done right, we can meet or beat the current hosting costs. This is primarily why it has been taking me so long: I'm trying to do more heavy lifting with code than with infrastructure. It would be easier for me to shove things into the more expensive offerings, but it would be bad for Wurm as a whole to incur such a high cost in comparison. We are also looking into the three-year pricing to save even more money and to ensure that Wurm will be around for quite some time.

    The Silence

    It hasn't quite been three months, but I can explain the silence. First off, part of that has been me taking a leave. I got things to a point where I can start getting a test cluster stood up, and we've had a successful connection test with my infrastructure in place. Since that time my focus has been on Wurm Unlimited as well as some personal things that had come up. I hate to use this as a shield, but keep in mind that I work on Wurm in addition to a full-time job. I needed some time off from everything so I could come back to this fresh. My day job had me working deeply with AWS as well, and I've actually learned some new tricks that might help me with Wurm's infrastructure. I need to spend a little time with that and see if it'll be a better way than how I've been doing it.

    Going Forward

    My time over the next two weeks will be scarce, but starting in mid-May, I plan on diving back into this in all my off-time again. My current road map looks like this:

    - Server auto-deployment code. (We currently do a manual deploy to Test for the server.)
    - Data backup and restore from S3. (This will allow me to clone a server during downtime.)
    - Using the above, stand up clones of all three test servers in the new Wurm AWS account.
    - Deploy a special test client that connects to them.

    After this, it will be a period of observation and hopefully some stress testing from all of you. I'll work with Retrograde and Budda on some kind of event with rewards, but the main problem is that it may require several attempts. I'll mainly need to push the server to its limits and see where any bottlenecks are. I welcome questions, comments, and criticism equally.
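The backup-and-restore item in the road map can be illustrated with a rough sketch. Everything here is an assumption for illustration: an in-memory dict stands in for the S3 bucket, and the `backups/<server>/<timestamp>.tar.gz` key scheme and helper names are invented, not the real implementation (which would use something like boto3 against the actual bucket).

```python
# Sketch of backup/restore-from-S3. The dict stands in for the bucket;
# the key naming scheme is an illustrative assumption.
bucket: dict[str, bytes] = {}

def backup(server: str, data: bytes, ts: int) -> str:
    """Store a server snapshot under a timestamped key."""
    key = f"backups/{server}/{ts}.tar.gz"
    bucket[key] = data
    return key

def latest_backup_key(server: str) -> str:
    """Restoring (or cloning a server) starts from the newest snapshot."""
    prefix = f"backups/{server}/"
    keys = [k for k in bucket if k.startswith(prefix)]
    # Compare by the numeric timestamp embedded in the key.
    return max(keys, key=lambda k: int(k[len(prefix):].split(".")[0]))

backup("test-server", b"map+db", 1000)
backup("test-server", b"map+db v2", 2000)
print(latest_backup_key("test-server"))
```

The point of the timestamped keys is that cloning a server during downtime reduces to "fetch the newest key for that server and unpack it elsewhere."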
  16. The lands of Wurm are large, larger now than they ever were before. I've traveled to many islands across the map, and each one seems to be unique in its own way. The Wurmpedia Team is asking all of you to tell us a little bit about your home in Wurm. Where do you live in these lands, and why? Is it the mountains or wildlife? Do the people there keep you coming back, or do you prefer a more solitary life? If you could give one or two sentences to a new player that describes the best things about your home, what would they be? This is open to all servers, but for Chaos and Elevation, I would ask that you think outside of the warfare that is made in those lands, and think about the majesty or beauty that the lands hold. Try to remember that this isn't going to be for recruitment purposes, but to help describe the landscape and qualities of the entire server. The scope of this project is to crowd source a paragraph or two for each server, so be as descriptive as you want and don't worry about repeating what others have said. I had posted this before in the Wurmpedia section, but I hope that this reaches more of you. Feel free to click on that link and see what has already been said. We will be using replies from both threads when we tailor the paragraphs. Thank you!
  17. Devblog: Server Issues Postmortem & Future

    Will do!

    Update, Pt 2

    So I couldn't put this down today at all. It's been about 13 hours total, with an hour break for dinner. One of those days. And yet as I type this, I have all three test servers running in a sandbox and talking to each other. Samool was kind enough to give a test connection and it worked. This doesn't mean it's ready for you folks yet! I still need to move them to their final home. I managed to get the database updates for IPs and ports automated, and I've got a path forward on auto-restarting the server for updates and recovery. A simple daemon will suffice. No, not demon. Hamsters are enough trouble. A daemon is basically something that runs in the background. In this case, the daemon will watch to make sure all Wurm docker instances are operational. Since the docker instance goes away upon termination, if a server crashes the daemon will know. With a proper configuration, it'll know which server and can start it back up. At the same time, I can tell it to pull down the latest image - and I plan on using one repository for test and one for live, so that there's never a chance of test's code getting onto live accidentally. Live will be push-button, whereas test will be a continuous deployment pipeline up until the act of shutting down the servers. That'll still be manual on both sides.

    I've decided that keeping static IPs is actually for the best as well, yet I've written everything with the possibility of using ports instead. What that means is that if more than one game server shares a host for cost efficiency, they either need separate IPs or separate ports. I prefer the IP method, but ports are an option as well. The reason I'm okay with this is the EBS volume that stores all server data. It can't be attached during an update either, so if the instances need to be replaced I'll have to delete the stack and recreate it anyway. The stack will likely take 15-20 minutes to create, so honestly it's not a huge amount of downtime. If I'm doing something that requires it, then we'll just do an "extended 1-hour downtime". Finally, now that I'm this far, we can soon start testing for I/O and I can start building the live cluster profile. I'd still like to devise a way to auto-import the live data, but if the choice is to spend hours doing it manually once or spend days getting an auto-import right? We'll do the hours. I don't want this held up on some fancy thing that I'll probably use once.

    Once this is all done, I'll be turning my gaze to the shop, GV, and our build infrastructure. After that will be the forums, WO and WU web, and finally the Wurmpedia. For that last one, I would like to take the time to address some requests that @sEeDliNgS has made, so it may take some time. I'd also like a sandbox for her to play in, since who doesn't like sandboxes?
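The watchdog daemon described above fits in a few lines. This is a minimal sketch under assumptions: the server names and both hooks are stand-ins, and a real version would query docker (e.g. via `docker ps`) and start containers instead of using these stubs.

```python
# Minimal sketch of the watchdog daemon described above.
# The server names and both hooks are stand-ins, not the real setup.
def list_running() -> set:
    """Stub: in practice, ask docker which Wurm containers are up."""
    return {"test-server-1", "test-server-2"}

def start_server(name: str) -> None:
    """Stub: in practice, pull the latest image and start the container."""
    print(f"restarting {name}")

def watch_once(expected: set) -> list:
    """One pass of the watch loop: restart anything that has gone away."""
    missing = sorted(expected - list_running())
    for name in missing:
        start_server(name)
    return missing

# The real daemon would loop forever, sleeping between passes.
print(watch_once({"test-server-1", "test-server-2", "test-server-3"}))
```

Because a crashed container simply disappears from the running set, one diff per pass is enough to know exactly which server to bring back up.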
  18. Independence Went Ka-Boom

    Working on it
  19. Devblog: Server Issues Postmortem & Future

    I feel I should add a little about the I/O solutions here. We're fully willing to make changes to the Wurm server to compensate for I/O issues. I'm less worried about the database and more worried about map saves, as I mentioned. If I find that map saves are a problem and we can't work around it with code, then another option is to use either a provisioned drive or an instance with an attached NVMe. The latter is literally an SSD attached to the instance, and the I/O speeds are on par with direct hardware. The main issue with an attached NVMe is that it's ephemeral, and thus I'd have to copy the map to it before start and schedule regular copies to the EBS volume to ensure it's constantly backed up. Another issue is that the cost per instance goes up with that option, and I am being cost-aware here. We're willing to pay for the benefits, but the more frugal I am the better. Obviously! The provisioned drive also costs more; basically, it allows for faster I/O speeds at a cost. That will be a bit more complicated, as I'd have to do some benchmarks to see where our IOPS need to be. You essentially say "I need this much I/O per second" and pay for it. If you don't use it, you still pay for it.
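As a rough illustration of that trade-off, here's a back-of-envelope sizing sketch. Every number in it is an illustrative assumption, not an actual Wurm benchmark; the point is only that you provision for the benchmarked peak plus headroom, and pay for that figure whether or not it's used.

```python
# Back-of-envelope provisioned-IOPS sizing. The peak write rate and
# headroom factor below are illustrative assumptions, not benchmarks.
def provisioned_iops(peak_writes_per_sec: float, headroom: float = 1.5) -> int:
    """IOPS to provision (and pay for, used or not), with safety headroom."""
    return int(peak_writes_per_sec * headroom)

# e.g. if a map save bursts to 2000 writes/s, provision ~3000 IOPS
print(provisioned_iops(2000))
```

This is why the benchmarks come first: overestimate the peak and you pay for idle IOPS every hour; underestimate it and map saves stall.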
  20. Devblog: Server Issues Postmortem & Future

    That's the intention!

    Status Update

    Every time I touch this, there seems to be more work to do. That's okay though, I'm plugging along.

    Docker!

    Wurm's server build configuration pushes a docker image of the server to a repository. This means that every build will update the image's "latest" tag and give us a fallback point should we need it. Think of this as snapshots of the running build. This, together with volume snapshots, means that if things go horribly wrong in an update, we can very easily set things back to where they were before the update happened. No one likes the word "rollback", but at least it'd be less painful than it currently is if we need it. Lately we've dealt with just fixing what went wrong. The poor GM team has had the burden of that, but I'd much rather lose 15 minutes of progress for the few who have connected than spend two weeks trying to catch everyone affected and fix their issues.

    More Docker!

    I've successfully run our Oracle test server in a docker instance. It's using a docker instance for MySQL as well as an EBS volume for the map and logs. This is precisely what I've been after, but there's still some manual configuration that I need to script so that this happens automagically when a "stack" is started. I want to do as little manually as possible, as human error seeps in. Plus I'm lazy, okay? Gosh. No, seriously - it's about human error.

    Downsides

    So far there are some downsides that I need to mitigate. For one, I want the servers to auto-recover. What happened to Indy yesterday really shouldn't happen in this new environment, so I need to solve that problem. At the very least, I want to make it so there's a number of people who can press a button to recover a server. We obviously want direct access to be restricted and not needed for basic things. I was hoping to use an Auto Scaling Group for this, but I had forgotten how restrictive those are when it comes to configuration, and I'd much prefer the network configuration I have over a mechanic that may not even work well for us. The idea of it killing a server because of a failed health check makes me worry, so that idea is dead. Another downside is that I'm using static private IPs. It was a way to make things work, but I really want to get them to be dynamic. The reason is so I can do stack updates instead of delete-and-recreate; the latter takes considerably more time, and I want to minimize downtime for things like OS updates. Finally, there's the point Sklo has brought up a number of times: we need to slam the I/O and see what's going to happen. Samool has suggested that we basically work with thousands of items at a time. Given that item updates are one of the most costly things in Wurm, I think that might work for the database. Yet we'll also need to test map updates, so perhaps we can find a way to get a good number of you on this server once it's up and start digging holes! I'm not sure what we can do to reward such testers, but I'll bring it up with Retrograde. I know I'd prefer a good hundred people or so, alts or otherwise. Just enough to give a good live-server-ish test.

    Going Forward

    The plan now is to finish converting the manual configuration to automatic and then move the test servers over fully. I also need to get the logs into CloudWatch so we can set up proper alarms when things go wrong, and give access to high-level staff members to look through them when needed. I'd also like to get some monitoring going in CloudWatch, so we can tell when an instance is over-burdened and may need to be bumped. There's also the moving of our build server, which I've not even started yet - though that can wait until after everything else is moved. Finally, there are the special cases around Golden Valley - including the shop. 
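The "latest tag plus fallback point" scheme from the Docker! section above can be sketched like this. The tag names and the dict standing in for the image repository are assumptions for illustration; the real mechanism is docker image tags, not Python.

```python
# Sketch of tagged builds as rollback points. The dict stands in for
# the docker repository; tag names are illustrative assumptions.
repo = {}  # tag -> build id

def push_build(build_id: str) -> None:
    """Each build keeps its own tag and also moves 'latest' forward."""
    repo[f"build-{build_id}"] = build_id
    repo["latest"] = build_id

def rollback(build_id: str) -> None:
    """Rolling back repoints 'latest' at an earlier build's tag."""
    repo["latest"] = repo[f"build-{build_id}"]

push_build("101")
push_build("102")
rollback("101")
print(repo["latest"])
```

Because every build keeps its own tag, a rollback is just repointing "latest"; paired with a volume snapshot from before the update, that restores both code and data.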
That's all for now.
  21. Independence Went Ka-Boom

    We're prepared to make changes to the server to increase performance in the cloud if needed, which will end up in the WU code as well. So in a way, this will be a win for WU too.
  22. Independence Went Ka-Boom

    Not intentionally rolled back, but there may have been some data that wasn't flushed to disk. The entire system locked up and went unresponsive. File support tickets and we'll handle things on a case-by-case basis. Or I should say @Enki and his team will. O:) But it sounds like you harvested, and then it wasn't harvested? Wouldn't that mean more items? The tiles wouldn't reset - they'd simply not have a state saved around the time Indy went down. So anything before that wasn't a result of the downtime. It was about 1am EDT when it went down.
  23. Devblog: Server Issues Postmortem & Future

    This thing has more evolutions than an Eevee.