polarbear

Extra dump skills options


Please add an option flag to the current console "dump skills" command, with one or both of the options below:

 

-c: Outputs in CSV format

-j: Outputs in JSON format
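
For illustration only (made-up values, and the exact layout would obviously be up to the devs), "dump skills -c" might produce something like:

Skill,Current,Max,Affinities
Fine carpentry,45.67,45.67,1

and "dump skills -j" the same data as a JSON array of objects, e.g. [{"skill": "Fine carpentry", "current": 45.67, "max": 45.67, "affinities": 1}].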

 

Reason:

This would make the output more "parsable": for example, CSV can be loaded into a spreadsheet program to do stats, and JSON can easily be imported via libraries in many different programming languages. So more of us who do that kind of thing could more easily put up tools and such.

 

 Note:

Please read this: the request is NOT to remove the old format, just to add one or two extra ones. More or less just a QoL suggestion for us nerds :P

 

 

Edited by polarbear


I am not against it. But in the meantime, I would suggest just writing a script to do such a conversion (Unix/Linux shell scripts using sed/awk would be one way, and they also work on Windows with the Cygwin DLL or GnuWin32), and e.g. uploading it to GitHub, maybe asking Drogos whether he is willing to host it. I am a bit too lazy and overloaded with work at the moment, but it would not take too much time, though I am not familiar with JSON. What about XML, btw?

Edited by Ekcin


(Quick and dirty CSV Conversion)

 

Open the output file in Notepad++ (or any other half-decent text editor)

 

Delete rows 1-5

 

Then you just do a bunch of find-and-replaces using regex.

Replace -> Search Mode: Regular expression (with Wrap around and Match case ticked)

 

Find

: (\d+\.\d+) (\d+\.\d+) (\d)

Replace

,\1,\2,\3

 

Find

  + (that's two spaces followed by a +, i.e. it matches any run of two or more spaces)

Replace

(blank, i.e. leave the Replace field empty)

 

Find

Skills,0.0,0.0,0

Replace

Skills,Current,Max,Affinities

 

Save as .csv with the character name and server

Enjoy

 

It's a simple enough process that creating a script to run it on a file that was just created shouldn't be too tricky


I've got a way of doing the conversion now, but it would be nice to have the format natively, versus the mess of regex I'm using to get it "parsable". It would also make it easier for people to build tools. Again, just a QoL thing, instead of having mountains of spaghetti code to parse the output. Another problem is that some of the skill names are multiple words, so you can't simply find the fields by splitting on spaces.

 

This is what I'm doing here to load up an entity (Java Spring JPA) to get the Skill name matched to the current skill. While this works, it is a mess, to be honest. https://github.com/bnorthern42/Verasm/blob/master/src/main/java/net/northern/verasm/web/rest/Utils/SkillDumpParser.java

 

Or the devs could throw together a simple parsing guide; either way, that or a natively parsable format would be awesome.
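
For what it's worth, here is a minimal sketch (in Python, to match the script later in this thread) of the kind of line-by-line parse being described. It copes with multi-word skill names by matching everything before the colon instead of splitting on spaces; the dump layout and the example values are inferred from the find/replace patterns elsewhere in the thread, not taken from any official spec.

import re

# A skill line looks roughly like "    Fine carpentry: 45.67 45.67 1"
# (format inferred from the regexes used elsewhere in this thread)
LINE = re.compile(r'^\s*(?P<name>.+?): (?P<current>\d+\.\d+) (?P<max>\d+\.\d+) (?P<aff>\d+)\s*$')

def parse_dump_line(line):
    """Return (name, current, max, affinities), or None for lines that are not skill entries."""
    m = LINE.match(line)
    if not m:
        return None
    return (m.group('name'),
            float(m.group('current')),
            float(m.group('max')),
            int(m.group('aff')))

# parse_dump_line("    Fine carpentry: 45.67 45.67 1")
# -> ('Fine carpentry', 45.67, 45.67, 1)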

Edited by polarbear

3 hours ago, polarbear said:

(quoting the SkillDumpParser.java post above)

Jesus christ, what kind of madness is that?!?  533 lines?!?

 

The code snippet below is what I'd use (just save it as a .py file, and make sure there are INPUT and OUTPUT folders in the directory you run it from (you can change what you call them)). Dump whatever you want to convert into INPUT, and it'll spit the CSV files out into OUTPUT (with the same names).

 


import os
import re

#Get our working directory and set up the folder names we want to read/write to
cwd=os.getcwd()
input_folder=os.path.join(cwd,"INPUT")
output_folder=os.path.join(cwd,"OUTPUT")


#Do this to everything in the INPUT folder
for root, dirs, files in os.walk(input_folder):
    for file in files:
    
        #Sets up the file paths to both read and write
        output_file_path=os.path.join(output_folder,file)
        output_file_path=output_file_path[0:-4]+".csv"
        output=open(output_file_path,'w')
        
        input_file_path=os.path.join(input_folder,file)
        
    
        #This calculates the length of the first 5 lines so we can cut them out later
        with open (input_file_path, 'r' ) as f:
            elimvar=f.readlines()
            elim_val=len(elimvar[0])+len(elimvar[1])+len(elimvar[2])+len(elimvar[3])+len(elimvar[4])
     
        #This edits the file
        with open (input_file_path, 'r' ) as f:
            content = f.read()
        
            #Replaces spaces with commas between entries
            content = re.sub(r': (\d+\.\d+) (\d+\.\d+) (\d)', r',\1,\2,\3', content, flags = re.M)
    
            #Trims spaces in front of data
            content = re.sub(r'\n\s\s+', '\n', content, flags = re.M)
    
            #Sets up column titles
            content = re.sub(r'Skills,0\.0,0\.0,0', 'Skill,Current,Max,Affinities', content, flags = re.M)
    
            #Debug to check output
            print(content)

            #Writes a truncated version of output without the first 5 lines
            output.writelines(content[elim_val:])
            output.close()    # make sure the CSV is flushed to disk

 

 

31 minutes ago, Etherdrifter said:

Jesus christ, what kind of madness is that?!?  533 lines?!?

That's not nice of you to diss someone else's work like that.

His code is much cleaner than your parser, and it is part of a bigger system that he wrote.

Please don't be nasty if you can't see the big picture of where his code fits in.

 


This is my point. It's doable, just painful. A native CSV or JSON option would be awesome. JSON is best, but I'm not picky. Like, NodeJS breathes JSON, lol.

1 hour ago, Etherdrifter said:

533 lines?!?

Much of that file is putting skills into a DTO (Data Transfer Object) so I can store them in a PostgreSQL database. That's not even taking affinities into account; at the moment I'm about to add another 200 lines of setters. Anyhow, that's another topic for another day :P

 

Edited by polarbear


Tbh, I understand both sides, and do not want to denigrate polarbear's work (and I am convinced that it was not Etherdrifter's intention either). But indeed, the task is a format conversion with a few automatic edits, as Etherdrifter described a bit earlier in the thread. Using standard tools (awk and sed, for example, are available on Windows too, as far as I know in the Linux compatibility subsystem that is part of Windows 10 and later), this can be done in less than 10 lines of code in a script, say "dump2csv", "dump2json", or "dump2xml", which could be included in any process. I do not know a lot of Java, but I know it can call external executables, which need close to zero processing time and resources.

 

Maybe this is too much from the point of view of the procedural programming I grew up with, but I fail to see how it is fundamentally wrong.
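
Just to illustrate the "less than 10 lines" point, here is a rough sketch of what such a "dump2csv" could look like. Ekcin is talking about sed/awk; Python is used here only to match the script earlier in the thread, and the file names and dump layout are assumptions rather than anything official.

import re
import sys

# dump2csv: read a skill dump on stdin, write CSV on stdout, e.g.
#   python dump2csv.py < skills_dump.txt > skills.csv
print("Skill,Current,Max,Affinities")
for line in sys.stdin.readlines()[5:]:                      # skip the header lines
    m = re.match(r'\s*(.+?): (\d+\.\d+) (\d+\.\d+) (\d+)', line)
    if m and m.group(1) != "Skills":                        # drop the "Skills: 0.0 0.0 0" marker line
        print(",".join(m.groups()))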

9 hours ago, KillerSpike said:

That's not nice of you to diss someone else's work like that.

His code is much cleaner than your parser, and it is part of a bigger system that he wrote.

Please don't be nasty if you can't see the big picture of where his code fits in.

 

Why would I mock someone else's code?  I just hate Java because trying to do anything in it always turns into a tour de force.  Anyone who can get something working in that dumpster fire of a language is worthy of respect.

 

8 hours ago, polarbear said:

Much of that file is putting skills into a DTO (Data Transfer Object) so I can store them in a PostgreSQL database. That's not even taking affinities into account; at the moment I'm about to add another 200 lines of setters. Anyhow, that's another topic for another day :P

 

As Ekcin mentions, it might be better to upload the file, parse it in something else, then work directly with the CSV output using a separate function.  A "dumb parser" like the one suggested also means that as skills are added, you won't need to update your parse function, only the storage one.

 

It also means that if the devs take your suggestion on board, you'd just have to snip the parse call out.
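
As a rough sketch of that split (again in Python to match the earlier script; file names and the dump layout are assumptions), the parse step stays "dumb" while all the storage-specific logic lives in its own function:

import csv
import re

LINE = re.compile(r'^\s*(.+?): (\d+\.\d+) (\d+\.\d+) (\d+)\s*$')

def parse_dump(path):
    """'Dumb' parse: every line that looks like a skill entry becomes a
    (name, current, max, affinities) row. New skills need no code changes here."""
    with open(path) as f:
        return [m.groups() for m in (LINE.match(line) for line in f) if m]

def store_rows(rows, out_path="skills.csv"):
    """Only this part knows about the output (a CSV file here, a database in polarbear's case).
    If the devs ever add a native CSV dump, the parse_dump() call is the one thing to snip out."""
    with open(out_path, "w", newline="") as f:
        w = csv.writer(f)
        w.writerow(["Skill", "Current", "Max", "Affinities"])
        w.writerows(rows)

# store_rows(parse_dump("skills_dump.txt"))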

1 hour ago, Etherdrifter said:

It also means that if the devs take your suggestion on board, you'd just have to snip the parse call out.

I don't actually know what the devs use to store skills on the server side. I heard in the past that they use MySQL for some server stuff. If they use that for player data, then they don't need much of my code at all; it's as simple as a few SQL statements to output in those formats. I know PostgreSQL has worked hard on its JSON handling over the past few years, and if I recall correctly MySQL can do nearly the same.

Of course, this all depends on how the devs store that data. Who knows.
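
Purely to illustrate the "few SQL statements" idea, here is a sketch against a hypothetical PostgreSQL table (the connection details, table name, and column names are all made up; nobody outside the dev team knows the real schema):

import json

import psycopg2  # third-party PostgreSQL driver, used here only for illustration

# Hypothetical schema: skills(player_id, name, current, max, affinities)
conn = psycopg2.connect("dbname=wurm user=wurm")
cur = conn.cursor()

# PostgreSQL can build the JSON server-side: one row containing an array of skill objects
cur.execute(
    "SELECT json_agg(row_to_json(s)) "
    "FROM (SELECT name, current, max, affinities FROM skills WHERE player_id = %s) AS s",
    (42,),
)
skills_json = cur.fetchone()[0]          # psycopg2 returns json columns as Python lists/dicts
print(json.dumps(skills_json, indent=2))

cur.close()
conn.close()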


At a guess, the "dump skills" command is something clientside and not serverside, as it is just pulling information that the client already has (i.e. an updated list of skills).

 

If that's the case, it's easy enough to dump it in different formats (.csv is just a text file with commas!)
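
For example (a sketch in Python rather than the client's actual Java, with made-up skill values), writing the same rows out as CSV or JSON is only a few lines each:

import csv
import json

# Made-up sample rows in (skill, current, max, affinities) form
skills = [("Axes", 34.56, 34.56, 0), ("Fine carpentry", 45.67, 45.67, 1)]

# CSV really is just a text file with commas
with open("skills.csv", "w", newline="") as f:
    w = csv.writer(f)
    w.writerow(["Skill", "Current", "Max", "Affinities"])
    w.writerows(skills)

# The same rows as JSON
with open("skills.json", "w") as f:
    json.dump(
        [{"skill": s, "current": c, "max": m, "affinities": a} for s, c, m, a in skills],
        f,
        indent=2,
    )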

