Updating the Modular CV

Some time ago, I wrote about making a modular curriculum vitae in \LaTeX. Since then, I’ve had to update the contents. Things change. Colleagues request current CVs to include in grant proposals, and given the current state of public sector employment, it is no bad thing to have a CV ready to go.

But I’m now fighting a problem of separating content from presentation. There are different rules for formatting CVs and resumes, and I’ve done the wrong thing previously: I’ve copied and modified sections like my employment history just to change how they are presented. This is bad, because now any time something in my employment history changes, I have to make sure that every relevant copy gets updated. I needed a way to keep each piece of information in one and only one place, and to apply that single source to different pieces of presentation code in my \LaTeX source.

The solution I found today is the datatool package for \LaTeX. It allows one to create, read, and manipulate data stored in CSV (comma-separated values) files. There is a lot of functionality in the package that I’m not using yet, but the ability to pull data out of a CSV file and format it as needed is a big step forward for me.

I’ve created two CSV files so far: one to hold my education data and another to hold my employment data. The CSV files have more columns than will usually go into an output format. For example, my education CSV has columns for my advisor’s name and my thesis title, even though neither appears in any output yet. This lets me keep all the associated data together, whether or not it is currently used. Previously, I simply used comments to keep this kind of information near what it relates to in my \LaTeX source.
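As a sketch, the education file might look something like this. The column names and values here are made up for illustration, not my actual data; note that a field containing a comma gets wrapped in double quotes:

```csv
Degree,Major,GradDate,University,Place,Advisor,ThesisTitle
B.S.,Zoology,1982,Example State University,"Somewhere, TX",J. Doe,An Example Thesis
Ph.D.,Wildlife Science,2003,Example A and M University,"Elsewhere, TX",R. Roe,Another Example Thesis
```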

I’m using various sources of good resume formatting to get ideas. Here’s the code to show my three degrees:

\vskip 0.125in
\noindent\textbf{\degree, \major:} \graddate, \university, \place\\

The “usepackage” line goes in the preamble. The “datatool” commands are only valid within the bounds of a document environment. I’m defining a macro, “dtledu”, to use in conditional statements. Within the macro, I skip an eighth of an inch down the page. I set the “datatool” package to emit a lot of debug information. The “DTLloaddb” command actually pulls in the contents of a CSV file. I first tried tab-delimited input, which is covered in the “datatool” documentation, but I couldn’t get it to work. I eventually went with all the default formatting: commas as field separators, and double quotes as field delimiters. That means that any field containing a comma must be wrapped in double quotes.
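Putting that description together, the skeleton looks roughly like the following sketch. The database name, file name, and column keys are assumptions for illustration, and I’ve left out the debug setting:

```latex
% Preamble:
\usepackage{datatool}

\begin{document}
% datatool's loading commands work inside the document environment.
% \DTLloaddb reads education.csv into a database named "education".
\DTLloaddb{education}{education.csv}

% A macro for the education section, so it can be used in conditionals.
\newcommand{\dtledu}{%
  \vskip 0.125in
  \DTLforeach{education}%
    {\degree=Degree,\major=Major,\graddate=GradDate,%
     \university=University,\place=Place}%
    {\noindent\textbf{\degree, \major:} \graddate, \university, \place\\}%
}

\dtledu
\end{document}
```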

The actual work happens in the “DTLforeach” command, which walks over the data that was read in. One argument assigns the values from named columns to macros. Then comes a block where I can use those macros in combination with \LaTeX markup. Each row of my CSV is iterated over and formatted as I’ve defined it.

So this gives me one and only one place where my education information lives, and just one place for my employment information. That data can be read in and formatted in different ways, as needed, to produce exactly the output I’m looking for.
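For example (a sketch; the resume-style line is just one possible format, and the column keys are assumptions as above), the same education database can feed both a full CV entry and a terser resume line:

```latex
% Full CV entry: degree, major, date, institution, and location.
\DTLforeach{education}%
  {\degree=Degree,\major=Major,\graddate=GradDate,%
   \university=University,\place=Place}%
  {\noindent\textbf{\degree, \major:} \graddate, \university, \place\\}

% Terse resume line drawn from the same data: degree, institution, year.
\DTLforeach{education}%
  {\degree=Degree,\university=University,\graddate=GradDate}%
  {\noindent\degree, \university{} (\graddate)\\}
```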

Wesley R. Elsberry
