saraallain
Archival-ish
9 posts
saraallain · 8 years ago
Report from Code4Lib 2017, or: take me back to the sunshine
I spent last week in beautiful Los Angeles getting all sorts of inspired by Code4Lib 2017. If you don’t know Code4Lib, it’s a conference (and community) focused on library technology. This was my first time attending and it was absolutely fantastic - I’m a lifer, if I can swing it. There were a ton of top-notch sessions, but here are four that really stuck out to me.
[Photos taken by me, showing the UCLA campus.]
Maintaining a Kick A** Tech Team and Organization
Bonnie Gordon, Hillel Arnold, and Patrick Galligan, Rockefeller Archive Center
Presentation notes here.
The RAC team talked about their stated core values, which "express how we work together as a team, and define a standard against which we hold ourselves and each other accountable" during a time of a lot of institutional change. I really appreciated how they centred their values both in their work and in their interpersonal relationships. It gave me a lot to think about in the context of my own work.
Participatory User Experience Design with Underrepresented Populations: A Model for Disciplined Empathy
Celina Brownotter and Scott Young, Montana State University
Slides here. In one of the most inspiring talks at the conference, Scott and Celina described engaging Native American students in UX work to give them a sense of agency within the library.
The Most Accessible Catalog Results Page Ever
Kate Deibel, University of Washington
Slides here. Kate built (and is continuing to work on) the most accessible catalog results page - that is, a catalogue results page that is accessible to users with a wide range of mobility, sensory, and cognitive disabilities.
Resistance is Fertile
Christina Harlow, Cornell University
Slides and notes here.
Christina laid out a "manualfesto" (she absolutely acknowledged how awful that portmanteau is) for how those of us who work in libraries can model our ethics in our work. Using the riot grrrl manifesto and the Queercore movement as inspiration, she talked about questioning and dismantling harmful power structures that govern our work, especially focusing on data and models (e.g. OCLC and BIBFRAME, because she's a metadata librarian). I found it really, really inspiring.
The whole conference schedule can be found on the Code4Lib 2017 website, with slides and notes linked from each talk’s page (when available). It’s a really welcoming, inclusive conference. If you’ve never been to C4L and want to talk about the experience, hit me up on twitter at @archivalistic!
saraallain · 8 years ago
Update: Data Migration 101
In my last post, I asked archivists to send me their datasets - I’m happy to report that two kind folks stepped up and offered their institutions’ archival data for me to mess with. Huge thanks to both of them. This means that I’ll be kicking off this data migration project soon! Yay!
Keep an eye out for the first post in a couple of weeks. In the meantime, have a dancing unicorn.
saraallain · 8 years ago
Please Send Me Your Data
Friends, Romans, archivists, lend me your datasets!
The first archival project I ever worked on was a large data migration from CONTENTdm to Islandora. It took the better part of a year. It was, in a word, hellish. But it was also interesting and complex and challenging, in a good way. I enjoyed it, and data migration has been interesting to me ever since.
At my place of work, we offer data migration services to AtoM. It’s a great option for institutions that don’t have the available hours or technical expertise to manage their migrations in-house. Even here, where we do data migrations all the time, a migration can take dozens of hours. Plenty of institutions don’t have the funding to hire someone to do data migrations for them - but that doesn’t mean you’re stuck! There are a lot of ways to evaluate your data and to clean it up for import into a new system that cost nothing but time. In keeping with the stated goal of this blog - to share my knowledge and the things I’ve learned - I’d like to write a series of posts on data migration.
The series will start with an analysis of a dataset. I’ll talk about external standards, internal processing guidelines, and data consistency - this will hopefully double as a good data entry planning guide. Next, I’ll walk through normalizing data to a chosen standard and the tools that make this easier. Finally, I’ll talk migration - how, what, and why it can be so frustrating. Each post will stand alone, but together they’ll form a beginning-to-end guide.
This is where you come in. I need a dataset to work with. It would be absolutely wonderful if one of you, my lovely archivist pals, would be willing to share your data with me for this project.
The dataset must:
Be archival in nature.
Be retrieved from a system other than AtoM (or created external to a system, e.g. spreadsheets or EAD).
Be sufficiently complex - at least 5-10 top-level records with children. Approximately 500 archival description objects (collections/fonds, files, items, parts, whatever) is the goal here.
Contain descriptions as well as ancillary content like subject terms, authority records, places, genres, and/or accession information (I realize that this last one is difficult for privacy reasons, so I’d be willing to anonymize it). There are plenty of other types of archival data that I’m leaving off.
Be a little bit messy (this part is optional, but perfect data kind of defeats the point, right?).
Accompanying digital objects are not required, but are welcome.
You must be willing to have the content publicly available in raw form, or you must agree that I can anonymize it at a basic level (take out names, institutions, and other identifying information).
What I will do:
I’ll write up my analysis of your archival data on this blog. Free data analysis for you!
I’ll use it as an example of how to prepare internal processing guidelines.
I’ll use it as an example of how to normalize inconsistent data using free tools that any archivist can master.
I’ll use it as an example of how to concatenate/extrapolate data to adhere to a widely-accepted archival description standard (ISAD, DACS, RAD, etc.).
I’ll use it as an example of how to migrate your data into AtoM.
What I will not do:
Bash your institution/archivists/data entry folks. We’re all doing our best.
Comment negatively about your archival management system (if you prefer I do not mention it at all, that’s fine).
Complain.
If you’re willing to lend me your data for this project, I would be absolutely thrilled to receive it. You can contact me on twitter @archivalistic or via email at me [at] saraallain.com.
saraallain · 8 years ago
The Things I Use
A few years ago there was a trend where lots of people with blogs wrote about the tools that they use on a daily basis. I loved reading them and I found that they demystified my colleagues’ jobs - they weren’t magical beings with data-munging superpowers but instead just normal people grappling with the tools at their disposal. So I’m going to do the same, several years off trend.
My official title is Systems Archivist, but that’s misleading (as almost all job titles are). Sometimes I do things that a systems archivist would do, like munging data, but most of the time I help build software. On a given day, I might be doing the following:
Running quality assurance tests on new features
Doing all-around functional testing for new releases
Thinking through and writing up new development projects
Solving nitty-gritty problems for users of the software
Writing and updating documentation
Considering high-level future changes as we update our roadmaps
I did all of those things last week. And none of it would’ve been completed without the suite of tools that I rely on every day.
Journal
If you follow me on twitter, you know that I’ve been making the switch from a haphazard and disorderly notebook to a bullet journal. This is how I keep track of the things I’m doing on a given day. It’s a miracle and I love it. If this is something that you are interested in, I point you at The Lazy Genius Collective’s blog post on the subject.
System
I’m a dedicated Mac user at home and in both of my previous jobs. When I started my current job, though, the laptop I was given had two boot options - Windows or Ubuntu Linux. So I learned to use Linux like a cool kid.
My little laptop is a Lenovo ThinkPad X131e running Ubuntu 16.04 (Xenial) with an 11″ screen. It’s no longer sold, but my guess is that it cost well under $1000 - it was designed for K-12 school use. Because the screen is small, I use externals for everything - an external display, a wireless mouse, and a USB keyboard. My laptop is a couple of years old at least, but it does everything I need - though the first time I built an Archivematica VM, the machine was unusable for half an hour because it doesn’t have a ton of RAM. Overall, it’s a good multipurpose laptop, and it works for the jobs that I ask it to do.
Text editor
I don’t need anything more complex than a text editor, and I prefer GitHub’s Atom. It does get kind of confusing since we also make software called AtoM (it stands for Access to Memory), but that’s life.
The vast majority of the coding that I do, if you can call it that, is writing documentation in a markup language called reStructuredText. We check the documentation out from GitHub and then commit the changes back to the repository, and I love that Atom (the text editor - see, it’s confusing!) shows me right in the editor where my changes are, via colour-coded edge markings. It’s also wonderfully configurable, with community-created themes and packages made available through the preferences.
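For anyone curious what that round trip looks like in practice, here’s a minimal sketch from the command line - the repository and file names here are made up for illustration:

    git clone https://github.com/example-org/project-docs.git   # grab a copy of the docs repository (hypothetical name)
    cd project-docs
    # ...edit a .rst file in Atom...
    git status                       # see which files have changed
    git diff                         # review exactly what changed
    git add admin/installation.rst   # stage the edited file (hypothetical name)
    git commit -m "Clarify installation prerequisites"
    git push origin master           # send the commit back up to GitHub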
I have used vim in my work (in fact, I used it today!) and it scares the bejeesus out of me, so don’t come at me with your opinions about it.
Vagrant (and Docker!)
Often, when I am testing our software, I do it using a virtual machine - that is, another operating system environment running inside my laptop’s OS, completely contained, Inception-style.
For example, if I am testing Archivematica, I create a Linux-based virtual machine inside my Linux laptop (Linuxes upon Linuxes!) that contains all of the programs and things that Archivematica needs to run. This makes me sound very technical, but the whole process is made extremely easy by something called Vagrant. It’s basically a big wrapper that our developers use to package things up so that non-technical people like me can easily deploy a local, easy-to-update instance of Archivematica. I also did this in a previous job where I used Islandora quite a bit; the team over at the Islandora Foundation makes it even easier by providing .ova packages that can be opened in VirtualBox, eliminating the need to run commands to build your virtual machine.
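In case “extremely easy” sounds like an exaggeration, here’s roughly all a non-developer has to type, assuming the developers have already put a Vagrantfile in the directory (a sketch, not our exact setup):

    vagrant up        # download the base box and build the VM described by the Vagrantfile
    vagrant ssh       # log into the running VM
    vagrant halt      # shut the VM down; vagrant up brings it back
    vagrant destroy   # throw the whole VM away when you are done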
We’ve just started creating Docker containers for our software, so I’ve started learning how to use those as well. Docker is kind of like Vagrant, in that it’s a way to create a self-contained development environment. To be clear, I don’t create either Vagrant or Docker environments - I just run them, much like you might run any piece of software.
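The day-to-day Docker commands feel similar in spirit. A minimal sketch, with a made-up image name:

    docker pull example/some-image                   # download a pre-built container image (hypothetical name)
    docker run -d --name demo example/some-image     # start a container from it, in the background
    docker ps                                        # list the containers that are running
    docker stop demo                                 # stop the container when finished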
Git and GitHub
I explained git and GitHub at length in two blog posts: one on the basics and definitions, and a second on how they’re used in libraries. There’s some information on how I use them in those posts, so I won’t go into detail here except to say that I use git on a regular basis and we, as a company, rely on GitHub to hold all of our code. For me, knowledge of these tools is invaluable.
SFTP
When I need to access the scary back-end of a system like Archivematica - for example, to change files or add new ones - I usually do so through an SFTP client. I could do most things through the command line, but to be honest it would take so long for me to look up the commands, sort out permissions, and fix my own mistakes that it’s just not usually worth it. I like Filezilla for my SFTP needs - it has an easy drag-and-drop interface that makes a lot of sense to me.
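For the record, the command-line version isn’t much longer - I just find the graphical client friendlier. A sketch, with a made-up hostname and paths:

    sftp user@host.example.org     # connect to the remote machine (hypothetical host)
    sftp> cd /home/user/config     # navigate on the remote side (hypothetical path)
    sftp> get settings.conf        # download a file to edit locally
    sftp> put settings.conf        # upload the edited version
    sftp> exit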
OpenRefine
The last tool that I want to talk about is one that I don’t actually use that much anymore, but it’s so helpful that I would feel remiss not to mention it. OpenRefine is a free tool for cleaning up messy data, and it’s invaluable. One of my first jobs was to migrate thousands and thousands of rows of archival metadata from a non-standardized system to MODS; OpenRefine makes it really easy to standardize terms, concatenate fields, and perform all manner of manipulations on your data. If you work with data at all, ditch your spreadsheets and give OpenRefine a go - you will feel like a magician.
That’s it!
Any questions about the above? Want me to go into more detail about any of these tools? Let me know on twitter, where I’m @archivalistic!
saraallain · 8 years ago
Git in the Library
In a previous post I covered some basic information about git, a version control system for software development, and GitHub, a free hosting service for managing code that is developed using git. That previous post is a good place to start if you’re looking for definitions and links to more information.
In this post, I’m going to try to answer some questions that I’ve heard from other people in the library and archives community. These are questions that I’ve asked myself as well. Note that this post and the previous one are intended for people who have limited knowledge of what git and GitHub are. This is for the non-technical or newly-technical among you. If you already use git in your work, you're not the target audience here.
Throughout this post I’m going to conflate git and GitHub. Not all git users have their code on GitHub, but many do - and those users are as good a representation of the git user base as you’re likely to find.
Why is git so popular/why is everyone using it?
Pulling up a GitHub organizations ranking shows that Google, Facebook, Twitter, Mozilla, Yahoo, Adobe, and dozens of other large corporations use GitHub (and therefore git) to manage code. There are plenty of other version control systems out there - check out this list on wikipedia - but git does a good enough job that it’s become the dominant tool. It’s like using MARC for catalogue records or modelling archival descriptions in EAD - git has become the industry standard for software development because it was widely adopted, and now everyone’s invested.
I should say, it’s popular for a reason! It’s lightweight, easy to install, quick to learn, and has a lot of features for managing your code based on highly specific local workflows. But that’s it. No secrets.
Why is git used in library or archives tech?
Git is mostly used for version control in software development (there are other uses that I’ll mention below) and some libraries/archives are actively engaged in such development. This is probably more common if the institution has an open source focus. At my previous workplace, the library IT team uses git/GitHub to manage open source software like their AtoM installation, as well as in-house things like the library website’s WordPress theme. It’s free for the library to manage their software on GitHub, as long as their repositories are all publicly open (as opposed to having private repositories, which cost money).
Another use for git/GitHub is documentation. Again, this is an area where git’s basis as a version control system comes in really handy - easy-to-update, versioned documentation is incredibly useful! We use it at Artefactual to write our Archivematica and AtoM documentation. In our AtoM docs repository, you can view the documentation as it existed for versions 1.4, 2.0, 2.1, and 2.2, and for our next release, 2.3, by switching the branch. Clicking through the 2.3 branch’s commit history, you can see all the changes we’ve been making as we prepare for the next AtoM release. That repository is then pushed to the AtoM docs website where you can read our documentation - properly rendered from the reStructuredText that we committed to GitHub.
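If you’d like to poke at this yourself, it only takes a couple of commands - a sketch, assuming I’m remembering the repository URL correctly:

    git clone https://github.com/artefactual/atom-docs.git
    cd atom-docs
    git branch -r       # list the remote branches - one per documentation version
    git checkout 2.3    # switch the working copy to the 2.3 docs
    git log --oneline   # browse the commit history for that version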
But really, the reason that we use it in libraries and archives is the same as above - it’s ubiquitous. That’s not an exciting answer, but hopefully the two examples above shed some light on how it’s being used.
Should I use git?
Alright, now we get to the meat of this blog post. Git and GitHub are mentioned a lot in library and archives technology circles (hereafter called libtech) and it’s really easy to feel like they represent a required competency for anyone interested in libtech. But I’m here to tell you this: 
You do not need to learn git.
I wish that this was Geocities and I could put a blink tag and some sparklers around that, because it’s important. Being a non-technical person who is interested in or wants to engage with the technological aspects of library work is intimidating. One of the leading causes of imposter syndrome - for me, at least - is feeling overwhelmed by the sheer number of competencies that seem to be expected. Programming languages, tools, workflows - we’ve all sat through presentations where the tech acronyms were thrown out at such a dizzying pace that we completely checked out and spent the whole 30 minutes on twitter. Let me demystify this one for you - if no one is actively asking you to use git, if it’s not relevant to your day-to-day work, you don’t need to learn it.
That said, if you’re going to be participating in software development or you want to move some documentation to a version control system, git might be a thing you want or need to learn! You may also just be interested in learning something new, and I would never invalidate that! Git is a really useful tool to have in your toolbox, but remember that tools get rusty if you don’t take care of them. And git’s not a multi-purpose tool: it’s not relevant to the vast majority of the things that librarians and archivists, even techy ones, do in their day-to-day work. Please don’t think that it’s a requirement!
How can I learn git?
If you want to learn git, though, good news: it’s pretty easy and there are plenty of free learning modules online! You do need to be somewhat comfortable on the command line so sites like Codecademy or Code School can be useful (check out Codecademy’s Learn the Command Line module). For git basics, check out this 15-minute tutorial from GitHub. If online learning modules aren’t your thing, here’s a huge list of resources for teaching yourself. If you have a local organization that teaches coding topics, like Ladies Learning Code, you can check to see if they offer git (it might be hidden in a course called something like website deployment).
After learning the basics, it’s helpful to have a project to work on to keep your skills fresh. I started making conference presentations in reveal.js and committing the code to GitHub. I also use GitHub Pages to make nice URLs so that I can easily share presentations (like this one). And now that I use git in my day-to-day work, committing code to a repository is like second nature - though I’m still completely lost on the more advanced functions.
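If you want to try the GitHub Pages trick yourself, it’s only a couple of commands once your deck is in a repository - a minimal sketch (GitHub Pages automatically serves a branch named gh-pages):

    cd my-presentation            # a reveal.js deck already under git, with index.html at the root
    git checkout -b gh-pages      # create and switch to the branch GitHub Pages looks for
    git push -u origin gh-pages   # publish; the deck appears at https://<username>.github.io/my-presentation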
Hopefully this post and the previous one have demystified git and GitHub a little bit. I can’t repeat enough that git is just one tool with a very specific use. It’s not something that even a technologically-inclined librarian or archivist needs to know, no matter how much you hear about it at conferences and from colleagues.
Have topic suggestions? Specific questions about library or archives technology that you want me to try to answer, or recommendations about how I can improve this content? Let me know on twitter: @archivalistic!
saraallain · 9 years ago
Git the Definitions
Git is one of those topics that we talk about a lot in the library tech world, but many library and archives workers - myself included - have either very limited or no experience using it. This is fine! Honestly, until I started working at an honest-to-god software company, I didn’t know very much about git either. Frankly, it’s not relevant to most library and archives work. But it’s something that comes up a lot, so - let’s talk about git!
In this post, I’ll be defining some basic terms and concepts. In future posts, I’ll talk more about the things that git actually does - terms like pull, push, merge, and branch will be covered as I talk more about how I use it and why.
The Basics
Git is a free, open source tool used to develop and manage software. On the git website, it’s called a distributed version control system - in plainspeak, it’s a way to take the source code for a piece of software, bundle it up, and manage changes to it over time. Edits are tracked so that if you mess up, you can revert your changes - this is its #1 selling feature. Git takes up very little space and processing power, and it’s built to encourage the people writing code to make lots and lots of small changes.
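To make “lots of small changes” concrete, here’s roughly what the everyday loop looks like - a minimal sketch with a made-up file name:

    git init                      # turn the current directory into a repository
    git add index.html            # stage a file for the next snapshot
    git commit -m "First draft"   # record the snapshot, with a message
    # ...make some edits, then decide they were a mistake...
    git checkout -- index.html    # revert the file to the last committed version
    git log                       # review the full history of snapshots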
GitHub, on the other hand, is a hosting service that makes use of git. Source code can be managed in really sophisticated ways through GitHub’s web interface using the git protocol, as well as some fancy extra features. A personal account on GitHub is free, as long as all of your repositories are publicly accessible - to make private repositories, you need to pay. This is pretty cool because it promotes open development and, therefore, open source.
A git repository holds the source code - think of this as a directory containing all the files you need to build a website or a piece of software. On your computer, it would look something like this:
[Screenshot: my intro-to-islandora-master folder on my laptop, showing index.html, a license file, a css folder, and a selected images folder.]
Here we see my repository, intro-to-islandora-master, which is actually just a slide deck built in reveal.js. It contains all of my site files (like index.html and the license), a folder full of css, and (selected) a folder of images, amongst many other things. Currently, it’s sitting on my little Mac laptop. This is a perfectly acceptable way to create my presentation files, but it relies on my laptop - what if my laptop dies the day before my big presentation? Or what if the venue doesn’t have the right connector for my MacBook Air? Instead of relying solely on the files sitting on my laptop, I made this directory a git repository and connected it to my GitHub account. The repository above, for example, is managed here: sallain/intro-to-islandora. You can see the same folders and files in that GitHub repository as in the screenshot above. This means that I can work on my presentation from any computer. I only need internet access to pull down the latest version of my presentation and, eventually, to push my latest updates back to GitHub.
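Concretely, that round trip from another computer looks something like this (a sketch - the editing itself happens in whatever program you like):

    git clone https://github.com/sallain/intro-to-islandora.git   # make a full local copy of the repository
    cd intro-to-islandora
    git pull                                   # on later visits: fetch any newer commits from GitHub
    # ...edit the slides...
    git add index.html
    git commit -m "Update the outline slide"
    git push                                   # send the new commit back up to GitHub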
Users are the people who use the software. For example, the company I work at develops and maintains an open source product called Archivematica. If you use Archivematica in your library or archive, you’re a user. No git involved here, unless maybe you’re the person who maintains the software for your institution.
Contributors, however, are the people who edit and maintain Archivematica’s source code on GitHub. They include the developers at the company I work for, but also anyone else who wants to propose a change to the source code. I contribute by writing documentation, which we also manage on GitHub.
Coming Up:
Why do people use git?
Why is git used/useful in a library or archives setting?
Should you use git?
Have more topic suggestions? Specific questions about git that you want me to try to answer, or recommendations about how I can improve this content? Let me know in the comments or on twitter: @archivalistic!
saraallain · 9 years ago
Explaining Digital Preservation with Cooking Metaphors
It’s been over a year since I last presented at a conference. There are a few reasons for this - namely, that I wasn’t previously employed doing the kind of work that I wanted to present on, so I just... didn’t. It was nice to take a year off the conference circuit, but it’s great to be back on it. As librarians and archivists, conference attendance and presentation is one of the major ways that we network and share our knowledge. It’s a really important way that we both learn and teach.
On Friday I gave a presentation at the Archives Association of British Columbia’s annual conference with two of my colleagues, Sarah and Dan. We decided that we wanted to try to craft an introduction to digital preservation that wasn’t boring. I really love teaching through the use of metaphor and narrative, so I loved Sarah’s suggestion that we relate the topic to preparing a meal.
We wanted to focus on demystifying digital preservation and making it palatable (the food puns come fast and furious) and digestible (see?) for a varied crowd. We knew there would be some archivists in the crowd who were actively preserving digital content at their institutions; we also knew there’d be lone arrangers from small institutions and others who have done little to nothing with the digital content that they hold. It seemed best to aim this presentation squarely at the latter group. We figured that it was fine if the experienced folks checked out during our presentation; maybe they’d take away some tips on how to talk to others about digital preservation.
Our presentation was broadly divided into four areas: preparation, cooking, serving, and kitchen management. We covered a wide range of topics - I’m a bit concerned that it was a whirlwind tour - including fixity, file format identification, metadata standardization, validation, normalization, redundancy, audit, and access controls. We tried to write our slides so that they’d be useful to readers afterwards. Most usefully, we included a “shopping list” of dozens of tools and standards that might be helpful for people to check out. If you’re interested, the presentation is here:
Your Digital Preservation Cookbook
If you don’t want to go through all of our slides, here are the three takeaways that I hope people got out of our talk:
Open systems and open standards mean that our data is more preservable for a longer period of time
There are things that every archive, no matter how small, can do to regularly audit the state of their digital content (one small example is sketched after this list)
Digital preservation works best when we work together to develop meaningful workflows and narratives
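As a tiny example of that second takeaway: fixity checking can start with a single pair of commands. A minimal sketch, assuming your files sit in one directory and you have GNU coreutils (on a Mac, shasum -a 256 works similarly):

    cd /path/to/digital-objects     # wherever your files live (hypothetical path)
    sha256sum * > manifest.sha256   # record a checksum for every file
    # ...run this again next quarter...
    sha256sum -c manifest.sha256    # re-verify; any changed or corrupted file is flagged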
saraallain · 9 years ago
Learning on the Fly: Understanding the Forest Instead of the Trees
It’s been nearly three months since I moved to the Pacific Northwest from the gently rolling farmland of Southern Ontario. The first thing I noticed, that first day I had a chance to look up from the moving boxes, was how tall the trees are here - they tower over everything, as high as houses. I’ve always been interested in flora and I found myself inspecting each tree, trying to guess what kind it was, running my hand across the bark and pulling branches down to look at the needles. It was a way to interpret my surroundings, so different from what I’ve always known.
Last week I found myself grappling with a similar problem. I had to describe a proposed feature development for Archivematica, the digital preservation tool that I work on. The goal was to introduce a new way to add rights metadata to the object, and my job was to describe the desired functionality and to present solutions for any potential barriers to this goal. This was my first dive into thinking not about what the system does, but what it could do, given the opportunity. And it was really scary.
The feature request had to do with several broad topics: Archivematica’s import function, PREMIS rights metadata, and the newly-introduced AIP re-ingest feature. I immediately started reading up on each of them, trying to learn as much specific information as I could so that I could move forward with the feature description. I downloaded the PREMIS rights spec for a bit of light reading (it’s 32 pages long). By the time Thursday morning rolled around, I was a wreck. I had no idea what I was supposed to be doing, much less how to do it or how the feature would work.
Turns out, it’s easier to remember which kinds of trees are in the forest if you think about the kind of forest first. The temperate rainforests of the Lower Mainland of British Columbia are full of Douglas firs and lodgepole pines, with the occasional western red cedar thrown in for good measure. Once I remembered that (with the help of my colleague Sarah, who didn’t bat an eyelash when I sent her a private Slack message that actually said “I’m lost”) it became a lot easier to understand the forest (read as: the feature) despite not knowing which tree was which.
Similarly, it’s okay to think about the big things in software development - the user stories, the workflows - without diving into the nitty-gritty. It’s my job to think about how the user uses the software. I’m not an expert on PREMIS, or how Archivematica handles imports - I’m not even an expert on Archivematica-as-a-tool, not yet. I have colleagues who are experts on these things. My job is to say to them, “I’d like Archivematica to work this way.” They’ll either figure out how to handle PREMIS rights in the way the feature asks, or they’ll come back and say it’s not possible.
This was a Learning Moment™ for me. In all jobs - and especially in new jobs - there is a pressure to Know All The Things. It’s scary to be new and to carry the burden of being naïve about the work you’re supposed to be doing. I’m certainly afraid of admitting when I don’t know something - afraid it reflects badly on me as an employee, as a librarian/archivist, and as a learner/human being. It doesn’t, though. What it truly reflects is that we’re developing a really complicated piece of software and no one person is an expert in every aspect of it. I certainly know more about PREMIS rights and CSV import now than I did a week ago. My job, right now, is the big picture. It’s my job to look at the forest. I’ll learn about each tree as I encounter it, and over time I’ll be able to point them out with ease.
saraallain · 9 years ago
Let me introduce myself
Hello!
Before I get into talking about what I’m learning and how, it might be beneficial to know where I’m coming from. It’s also helpful for me to reflect on where I’ve started and what I’ve learned up until this point in my career.
I’ve been a librarian and/or archivist since 2012. In graduate school, I focused on traditional archival things - like arrangement and appraisal - and spurned the digital. I did not take the courses that I now recommend to prospective MLIS students interested in digital archivy, like metadata and database management. Before I embarked on my MLIS, I did a BA in Classical History.
My first job in a library, while I was still a student, was in a digital scholarship unit. I was handed two unprocessed archival collections and a proprietary digital content platform and told to make the material available online. My co-workers and I had to learn, very quickly, about metadata standards and digital collections management. We learned enough to figure out that the proprietary digital content platform that we’d been handed was not very good.
This led to learning about repository systems, more metadata standards, and digital preservation. We ended up moving from the proprietary system to an open source system, which meant that I learned about open source, the development cycle, even *more* metadata standards, rights management, digital asset management, and many more things. Over the past couple of years, I’ve added more and more bits of knowledge where I could by taking workshops, reading documentation and blog posts by my colleagues, and - most importantly - by being thrust into situations where I had to learn in order to do my job.
Despite the learning that I’ve mentioned above, when I ask people for help I often hear myself saying, “I don’t know anything!” This is true because it’s how I feel. I know very little about technology, digital preservation, archival and library theory, coding, development, systems administration - things I’m paid to know about. Instead, I’m learning as I go, picking up the bits that are relevant to my professional life and mostly letting the rest slide because it doesn’t impact my day-to-day experience. This is, I think, how most of us function.
I want to be open about what I’m learning and how I learn, and I want to teach what I can in turn. When I started out, I didn’t think I could ever work with digital archival materials. Through some lucky work experience, I was introduced to people who were gracious enough to point me in the right direction. I want to be able to do the same, even though I still have a lot of learning to do. Then again, don’t we all?
Finally, please feel free to get in touch with me. The most reliable way is via twitter, where I’m @archivalistic. You can also email me using the Ask button in the right-hand sidebar.