This paper will discuss the development and project management of a software platform designed to address common challenges facing librarians, academics, and students who wish to text mine humanities data. While scholars can be described as experts in their respective fields, few also understand the array of digital humanities tools, projects, and methods, including the ability to write code and process texts using languages such as Python or R (Posner). Similarly, while university libraries often become the de facto hub of digital scholarship and digital humanities project work on a college campus, many institutions struggle to provide appropriate administrative and technical infrastructure to support sustainable research and teaching in digital humanities. Gale sought to address these issues through the Gale Digital Scholar Lab (DSLAB). The intent of the product was to provide a clear framework for gathering relevant data into a dataset, programmatically cleaning texts, and applying quantitative and qualitative analysis methodologies to the researcher’s collected primary source material.
In this paper, we will discuss how the Gale Digital Scholarship team learned change management through an examination of the DSLAB project. We will perform an agile development “ceremony” known as a retrospective, or a “[m]eeting at the end of an iteration to discuss changes in development process” (Dingsøyr et al. 70), to examine evolving needs and feedback from users and organizational stakeholders. In this retrospective, we will cover what went well, what did not go well, and how this understanding informs future development of the DSLAB. The following skills will be examined, as they were key to our success as a product development team:
Emotional intelligence – needed to manage strong personalities and competing goals within the company, as well as a range of team member experiences and opinions.
Communication and critical thinking skills – important for convincing leadership to explore this new avenue of learning support and adapting to changing circumstances.
Research – invaluable for ensuring a successful user experience; we seized every opportunity we could to engage with our users.
Time management – a necessity to balance implementing everything desired versus what was necessary for a successful product and to weather unexpected delays.
The DSLAB is the culmination of three and a half years of ideation, research, collaboration, and development. To get there, work took place across several cross-functional groups, including Gale’s Product, Software Development, and User Experience (UX) teams and two external consulting firms with expertise in data visualization and the digital humanities. The development of the DSLAB was phased by Alpha, Beta, and Production versions.
Alpha is the phase where “enough functionality has been reasonably completed to enable the first round of (end-to-end) system testing to commence” (Jenkins 11). It allowed the team to get a sense of the technical aspects of getting digitized primary source content in one place for the purpose of text and data mining.
Beta is the part of the build where “the bulk of functionality and the interface has been completed and remaining work is aimed at improving performance, eliminating defects and completing cosmetic work” (Jenkins 11). It allowed us to get a better sense of the design through conversations with users and customers.
Production is the phase in which a version of the product is made widely available to customers and users after rigorous design testing, development, and quality assurance testing.
Each phase had its own set of successes and challenges. However, ongoing collaboration and planning within our team allowed us to learn and grow as the development progressed. The integration of project management tools enabled us to coordinate key tasks to ensure the project’s success. In the following sections, we discuss how the project team managed the three build phases and employed multiple user experience research methodologies. We conclude with an examination of the team’s evolution and the DSLAB itself.
To better understand the challenges facing the development team, it is worthwhile to examine Gale’s evolution within the publishing space. Gale’s history as an academic publisher began in 1954 with the creation of directories to serve those seeking contact information for specialized advocacy and membership groups: a “Yellow Pages” of associations. Over the decades, Gale’s publishing program evolved to include reference works, periodical aggregation, microfilm production, and, in the mid-1990s, digital archive publishing.
Large-scale entry into the world of archival publishing began in 2002 with the launch of the Eighteenth Century Collections Online (ECCO), which sought to digitize all books published in English or translated into English in the UK throughout the eighteenth century. This endeavour was undertaken to support scholars looking for online access to archival material. The Gale Primary Sources program has grown since then to include resources from the late fourteenth century to today, across dozens of disciplines. Delivery of this content was limited to search-and-retrieve with Boolean logic-driven interfaces, providing a close-reading apparatus to support research and instruction. While new graphical search tools have been introduced over time, the research experience has remained largely unchanged: search for content, create a list, read and annotate later.
As scholars and data scientists expressed increased interest in text mining and analysis, Gale began receiving requests for the provision of optical character recognition (OCR) text files to support larger-scale digital analysis of primary source archives. In 2014, Gale began providing Text and Data Mining (TDM) drives containing OCR text to customers of select Gale Digital Archive collections, including ECCO. As part of this program, a limited number of texts were also sent to the Text Creation Partnership (TCP) to support efforts to improve their OCR engine.
As the demand for these data files grew, Gale sought to provide a more streamlined, cloud-based approach to corpus building. In the process, an opportunity arose to provide tools that students and scholars could use to analyze custom corpora based on their research needs. Gale looked to integrate open source algorithms that could be cited, whose analysis could be recreated, and whose output could be leveraged outside of the platform for further analysis and publishing. The development of an integrated platform would address some of the most significant barriers to entry into this type of work, notably the feeling of being overwhelmed or confused when confronted with unfamiliar digital tools and processes that require a considerable investment of time to master (Bessette).
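To make the barrier concrete, consider the kind of open-source, reproducible analysis the platform packages for non-programmers. The following minimal sketch is our illustration, not DSLAB code; the sample text and stop-word list are hypothetical placeholders. It shows a simple word-frequency count over OCR text, a task that otherwise requires scholars to learn a language such as Python or R.

```python
# Illustrative sketch: a basic word-frequency analysis over OCR text,
# the kind of citable, re-runnable step an integrated platform automates.
from collections import Counter
import re

# Hypothetical stop-word list; real analyses use much larger curated lists.
STOP_WORDS = {"the", "and", "of", "to", "a", "in", "that", "is"}

def word_frequencies(text, top_n=10):
    """Tokenize lowercased text and return the top_n most common
    non-stop-word tokens with their counts."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter(t for t in tokens if t not in STOP_WORDS)
    return counts.most_common(top_n)

# Placeholder sample text standing in for a digitized primary source.
sample = "The whale, the great white whale, is the subject of the voyage."
print(word_frequencies(sample, top_n=3))
# → [('whale', 2), ('great', 1), ('white', 1)]
```

Even this toy example involves tokenization, normalization, and stop-word decisions, each of which affects the results and must be documented for an analysis to be recreated; wrapping such steps in a shared platform makes them citable without requiring every researcher to implement them from scratch.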
The response to this growing demand required a significant shift from creating static digital archives to providing a service that could leverage those archives and enable a new way to analyze them. This was a major challenge from the traditional publishing perspective and a broad departure from the search-and-retrieve databases for which Gale was known. The organization had to pivot to support the build and continued maintenance of a service-based resource. Strong project management skills were necessary to make this organizational shift so Gale could continue to serve the rapidly evolving needs of libraries and their users. In fact, the ability to adapt to change may be the most important project management skill to have, as our own understanding of these needs changed in the middle of executing the project. Even a small change, like a day’s shift in a timeline or a tweak to a design, can impact the project with unforeseen consequences.
As discussed above, the Gale organization is known for its historical digital collections, such as ECCO (Gregg). With 20 years of experience in the library space, we have had the opportunity to learn about the evolving goals and challenges of academic research. In the past 15 years, we have received an increasing number of requests for the data captured during the digitization of primary source materials (XML, metadata, and optical character recognition output). Further investigation into these requests uncovered a clear need for data formatted for the purpose of text and data mining at scale. Given this information, the organization developed a research plan to fully understand the methods, technologies, and data used to conduct text and data mining. This research took the form of interviews with researchers, librarians, and faculty as well as a series of surveys to validate assumptions gleaned during the interview process. The goal of this research was to collect enough data to reveal market problems and patterns.
At the time, a former product manager was leading the research and development of the DSLAB (then known as the DH Sandbox) with a third party software development firm named Exaptive. The goal of this engagement was to validate the market viability of a product that allows users to both curate a dataset and find the right analysis method in one place. During this engagement, the product manager connected with several potential users to assess the prototype. At the end of the Alpha phase, the product manager left the organization, and there was not enough time to communicate many of the findings from the Alpha phase to the rest of the organization. Given this change, the new project team had to determine a new path forward based on what we could intuit from the Alpha version. Through this transition, we learned the importance of establishing and communicating the process for housing documentation in a shared repository to ensure its longevity. To maintain our timelines, we extended our engagement with Exaptive and brought in third party design expertise to support the limited UX resources being added to the team.
In addition to the software partner, Exaptive, a digital humanities consulting agency named the Agile Humanities Agency was brought in to support design. They describe themselves as “a software, design, and consulting collective” that provides collaborative design, software development, and project management services to higher education, galleries, museums, libraries, and digital publishers (Agile). This approach was taken because internal resources were unavailable at the time to support full-scale development. Work with the third party partners was intended as a replacement for the internal support from UX and technology at Gale.
During the Beta build, however, we realized that even with the third party support, our internal technology and UX groups would still need to participate, adding even more people to the team. Internal development support was needed to help Exaptive with many of the backend systems they had to interface with. Internal design support was needed because of a desire to maintain consistency in the experience across Gale’s many products. The complexity of a large, distributed team and pressure to deliver a minimum viable product (MVP) made it clear that a production build would work best being driven primarily by in-house resources. After the Beta phase, the team was reorganized to improve communication flows, and the appropriate resources from Gale were dedicated to the project (Figure 1). Ultimately, working with multiple third party partners and functional groups within our own company was too big a challenge to successfully manage, given competing priorities within each group.
Having all development-related teams under the same umbrella resulted in a tighter communication loop and the ability to balance project priorities. Developing the DSLAB using primarily internal Gale resources allowed us to better leverage the expertise, infrastructure, and resources of the company. Since this product concept was largely new to Gale’s technology team, it was important to have a shared understanding of priorities company-wide, especially with other active projects being worked on in parallel. Bringing the majority of work in-house ultimately meant that other projects would not get the resources now dedicated to the DSLAB, but the DSLAB received the focus it needed.
Research was a key factor in delivering a product that could meet Gale’s business goals and our users’ needs. Proper research and design support meant frequent interaction with our potential users so that we could refine the problem we wanted to solve and iterate on the designs we presented to users for feedback. Interactive sessions with users happened both in-person and remotely, as budgets and time allowed. The following sections describe our research process and the range of tools used.
At the beginning of the Beta phase, we developed “Personas.” Personas are fictional representations of users and needs that are created to bring life to the user experience research conducted for a project (Cooper 123–124). We created 24 personas that were representative of the full breadth of users we could imagine supporting with the DSLAB, and each persona represented one or two major needs. The details of each persona were based on what we knew of our users from the Alpha build and from past interactions with researchers that had used TDM drives in their work. We then prioritized the group into one primary persona, two secondary personas, and three tertiary personas (Figure 2). These six personas were selected and prioritized in this way for two reasons. First, personas humanize our users to the team and provide a common understanding of their needs so that everyone knows whom the product is being built for. Without this shared definition, each person refers to a nebulous, undefined “user” that means something different to each team member. Second, we prioritize six personas because we recognize that products can and should serve multiple use cases. However, we only allow a single primary persona. When time pressures inevitably arise, we need to know which user and need to focus on. A common saying in design is, “if you try to design for everyone, it ends up working well for no one.” This process makes clear whom we are, and are not, designing for, and in what order those needs are to be met.
Our stakeholders initially chose a primary persona that had experience with digital humanities. After our first round of design testing with potential users, however, the design team realized there was a much bigger opportunity and brought this to the attention of the product management team. Together, we decided to select a new primary persona, a user that was new to digital humanities but experienced in research. This new primary persona better reflected what we saw at the institutions we visited and the people we met, who were new to digital humanities techniques but excited to explore new ways of doing research. This example highlights how the communication of our research allowed us to adapt to what we were learning in the market and change the direction of the design to better support business goals.
Many design tools are now available, including major platforms like Figma, Sketch, Adobe XD, InVision, and Balsamiq. At Gale, we primarily use Axure RP to create our designs. We chose Axure because it provides the ability to quickly iterate design from low-fidelity to high-fidelity mockups, like the other platforms, and excels as a tool for adding complex prototype interactions and testing easily, with the ability to share designs internally and externally.
When we conduct user research, it is often in the form of observational interviews or “design assessments.” The prominent UX consultant Jakob Nielsen says, “[w]atch what people actually do” on his Nielsen Norman Group blog (“First”). Design assessments are our most effective tool for understanding user needs and behaviours as they allow us to watch what people actually do when conducting a text and data mining project.
During the Design Assessment, test participants are given a scenario and tasks to complete using either an interactive prototype or working software. As they work on the tasks, we observe their behaviour to assess how well our designs allow them to complete the given tasks. Assessing behaviour is an important distinction from assessing attitude. We know that people are often unable to accurately describe their own behaviour, so assessing it through real-world scenarios is an important tool for ensuring our designs meet user needs. We also use the “thinking aloud” technique, which asks the participant to narrate what they are doing and thinking. This helps capture both performance and preference while acting as a way to catch clues about misconceptions or confusion that may have otherwise gone unnoticed (Rubin and Chisnell 204).
Recruiting participants for interviews or design assessments can be difficult. In our case, we worked closely with the institutions we had already developed relationships with. Often, they were excited to take part. We also had to consider in-person versus remote testing. Factoring into this choice were how many people from an institution we would be able to speak with in person while on campus and the cost of physically getting there. Sales and training events were separate from the user experience research, but we often took advantage of these events to schedule user research. This necessitated close coordination between the UX team and the product manager so that the user research sessions could take place around the other events.
While in-person research is always preferred, it is not always feasible. Using Zoom, an online meeting tool, we were able to replicate much of the in-person experience. To do so, we sent each participant a link to the prototype and asked them to share their screen with us so that we could observe as they clicked around and used the prototype. We were still able to run the scenario and tasks and see their facial expressions and behaviour, but some of the ease of communication was nonetheless lost in the remote setting.
In addition to formal design assessments with our Beta testers, the team developed a survey to learn about our users’ current workflow processes and pain points as related to their work in computational text analysis. Qualtrics survey software is used company-wide, and we were able to harness its rich feature set to learn about users’ preferences (Kuniavsky 304). We deployed the survey after Beta testers had time to use and evaluate the DSLAB. A variety of users that aligned to our target user personas responded from nearly a dozen institutions. We posed questions related to features in the DSLAB, current functionality, the end-to-end workflow, and available analysis methods. By using a mixture of open-ended, rank order, and multiple-choice questions, we learned about user preferences and made significant changes to the analysis methods we ultimately included in our Production release of the DSLAB. We also learned about the top features our users wanted to see us integrate.
Having quantifiable data to inform our roadmap and priorities was essential. The larger scale of a survey versus in-person user interviews allowed us to gain greater insight into user preferences and informed the questions we took to in-person interviews and design assessments. We continue to use Qualtrics surveys to tease out the direction of improvements we make to the platform. For example, when we were ready to begin designing and implementing a custom content upload feature, we conducted a survey with questions specific to the type of functionality a user would expect. The data collected directly contributed to the decisions and planning for the production version of the DSLAB, including how we designed the feature. This gave us a head start in the process so that we could test a design we already had some confidence in and focus on more nuanced aspects of that experience.
Once we made the decision to move the build in-house, a cross-functional team of developers, UX designers, subject matter experts, and product managers was assembled to determine the project scope. The requirements were determined by the Product team based on the qualitative feedback we received from users during the Beta phase of the DSLAB. With the requirements defined, the Product team gathered to prioritize its features and functionalities.
This prioritization was done using a story map. A story map is a method used by many product organizations to prioritize requirements ahead of conversations with software developers. In our case, the story map took the form of a series of virtual sticky notes hosted on the free application Note.ly. We chose this application so we could work collaboratively with remote subject matter experts. One of the setbacks we encountered with this tool was its download limitation. After we had built out our story map, we learned we could not export or download it to share with our Technology team. With this limitation in mind, we attempted to find another way to share the information. We tried creating a shareable link and passing it along to our software developers, but the number of requirements and their accompanying sticky notes could not fit on the screen without extensive scrolling.
Following some investigation, we determined that it would be best to schedule a meeting with the leads from each cross-functional group to review the story map. We sought answers to the following questions about the stories:
What are the limitations of these stories?
Are there any known platform dependencies?
Are there known blockers to the project at this time?
These conversations allowed us to unpack the details of the work ahead as a team. It was critical that we had a shared and comprehensive understanding of the requirements for this project. We determined that a majority of the requirements were feasible given the platform we were building on, but that a few would require work beyond what the team was ready to tackle. Some features also needed to be redesigned in order to better fit the capabilities of the product’s software platform. In these instances, it was important to have the emotional intelligence necessary to not get upset about a feature being cut, but rather look for ways to provide enough functionality to meet user needs, even if the product would not be as comprehensive as initially designed.1
Once the requirements were finalized, the Product and UX stakeholders began writing detailed user stories to provide our software developers with the full scope of work necessary for each feature. We wrote our user stories in relation to the relevant persona in order “to ensure that the product requirements are connected with the user needs” (Zacharias et al.). This meant referring to “Ashley Rodriguez,” our primary persona, in stories with phrases like, “Ashley needs …,” “Ashley sees …,” and “Ashley is able to ….” All stories were written this way to centre our users in the development process with the aim of creating empathy and familiarity with their needs (see Appendix 1). A shared understanding of our users and their goals is critical to any successful project because it empowers all team members to make important decisions with our users in mind. Employing user stories in this way allowed the software developers to begin planning for implementation.2
In reviewing the user stories, we also determined that we would not be able to address all requirements in one round of development. Subsequent conversations between the project stakeholders involved negotiation and prioritization to establish a reasonable scope given the delivery date we were working toward. It is important to note that Gale’s technology team supports several library-oriented markets and can be working simultaneously on projects for the K–12 market, academic market, and public library market. This spread of active work can make it challenging to devote a large portion of the team to a project as large in scale as the DSLAB. Given the workload and number of developers available at the time, the Product and Technology team created a phased plan to build out the DSLAB (Figure 3).
The plan centred on two milestones: the American Library Association Annual conference (ALA Annual) and the beginning of the academic fall semester. Once these milestones were established, we could begin defining the minimum viable requirements for each month leading up to the delivery date. After coming to a consensus on the project plan, the Product and Technology stakeholders delivered it to leadership for approval. Once it was approved, the team set out to begin implementation.
In May 2018, the Product, UX, and Technology teams and the third party design agency set out to build the DSLAB. The purpose of this phase of development was to lay out the product’s page structure and workflow. In this phase, we built the home, search, and document pages and their respective functionality, such as Google/Microsoft account login, a redesigned search results page, and viewable document OCR text and metadata. It was during this phase that we leveraged existing platform features that made logical sense given the use case we were attempting to support.
We also built upon existing pages in the platform to better suit the needs of our users. One example is the set of changes made to the search results page. Many of the products on the Gale platform leverage the same search results information and layout, which contains features such as limiters, limited result metadata, and sort options. Given our UX research, we decided to expand this by surfacing all metadata associated with a document as well as a snippet of OCR text. Making all relevant data available to users within search results would enable users to decide whether a document is pertinent to a content set without clicking into the full document view. This would support both close and distant reading, giving users the ability to easily add multiple documents to a content set or review each document on a case-by-case basis.
While the original design was informed by research, implementation inevitably meant unexpected technical issues requiring design changes. One example was the handling of existing naming conventions to describe the structure of the Gale Primary Source content collections that make up the DSLAB. Our research found that these naming conventions were confusing to users. We suggested changing “database” to “archive” and “module” to “collection” to better match the organization of the information and users’ familiarity with the terms. What was seemingly a simple change was a much larger effort than we had expected. This led to a series of conversations with project stakeholders, Gale Primary Source product managers, and our content metadata team to decide the best way to approach the suggested change. Ultimately, the update to the preferred terms had to be delayed given the effort involved and other priorities in order to meet our desired release date.
Throughout the Beta and Production builds, we utilized iterative testing with our working software and design prototypes. For each round, we would target feedback from about five participants, though this fluctuated depending on what could be scheduled with participating institutions. Five participants is generally accepted as the standard for a round of design because it is a big enough number to detect the most important usability issues while small enough to be cost-effective and prevent repetitive findings (Nielsen, “Why”).
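The reasoning behind the five-participant guideline can be made concrete with the model Nielsen and Landauer popularized, in which each participant independently encounters any given usability problem with a fixed probability L (roughly 0.31 in their data). The short calculation below is our illustration of that published model, not DSLAB code:

```python
# Nielsen and Landauer's model: with n participants, the expected share of
# usability problems found is 1 - (1 - L)^n, where L is the probability that
# a single participant encounters any given problem (~0.31 in their studies).
def problems_found(n, L=0.31):
    return 1 - (1 - L) ** n

for n in (1, 3, 5, 10):
    print(f"{n:>2} participants: ~{problems_found(n):.0%} of problems found")
```

Under this model, five participants already surface roughly 85 percent of the problems, while additional participants mostly re-observe known issues; this is why several small rounds with redesign in between are more cost-effective than one large study.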
After each round of testing, we collected notes from each session and created a report on the findings and potential solutions. A sample slide from one of these reports can be seen in Figure 4. With the findings and potential solutions, we then refined the prototype and beta software to prepare for the next round of testing.
After multiple rounds of design and implementation, we successfully launched our MVP candidate to Production on September 10, 2018. Even after launching the product, we continued to test and iterate the design. A sample of how this iterative approach looked can be seen in Figure 5 as a table displaying the feedback sessions per round of design.
The Digital Scholarship team underwent a significant transformation as in-house expertise was hired to enrich the scope and quality of the DSLAB and its value to our customers. Along with transferring development to our in-house team, there were ongoing efforts to strengthen content and design with subject matter experts and scholars within the field of digital humanities. As the product grew with each iteration, so did the team. Along with our Team Director, UX Designer, and Product Manager, we believed it was essential to develop a product that reflected the workflows, and alleviated the pain points, of scholars in the field. To that end, two digital humanities scholars, Wendy Kurtz and Sarah Ketchley, were brought on board as Digital Humanities Specialists to sharpen the product’s focus on humanities research, specifically the text mining of historical documents and large corpora.
In addition to enriching the product, these team members were able to own specific initiatives that contributed to its success. Kurtz and Ketchley each serve as a point of contact for users of the DSLAB. The tailored onboarding experience they offer ensures each user receives ongoing support to teach and learn with the DSLAB, providing a go-to scholar for professional development and training. Implementation and development efforts are further strengthened by Ketchley and Kurtz’s collaborative efforts to integrate incoming feedback into product road mapping, facilitated by Maggie Waligora, our Product Manager.
Since we were able to obtain direct feedback from our interactions with customers, it was relatively straightforward to delineate where the growth opportunities in the DSLAB should occur. It became apparent that adding an instructional layer to the DSLAB was of great importance to our users, which raised the question: how do we incorporate best practices for teaching and learning into the DSLAB? This led to another transformation in the team when we hired Lindsey Gervais, a learning technology scholar, as our Digital Learning Manager, with a remit to align Ketchley and Kurtz’s subject matter expertise within a learning framework, supported by appropriate instructional materials, to open up the DSLAB to a wider audience of novice learners.
As scholars in their respective fields with experience in advanced research methodologies, digital humanities teaching, and instructional design, Kurtz, Ketchley, and Gervais faced a common challenge of balancing best practices with scope, prioritization, and resourcing in their collaborations with the rest of the team. This was a healthy tension that the team alleviated with concrete project objectives, greater market research, and close-knit collaboration to validate the sequencing of project needs and their technical implementation in the DSLAB.3 In addition, recognizing that it was important to balance academic integrity and presence within the digital publishing and tools space, all three experts were encouraged to maintain a trajectory of academic growth, pursuing digital scholarship development opportunities for professional advancement and to remain current in the field.
Our venture into teaching and learning certainly came with changes (and challenges) not only to team dynamics but also to project management, scheduling, and prioritization. As we navigated through sample projects, videos, and a close collaboration with UX to incorporate a learning experience within the DSLAB, we added some key management tools and processes that allowed us to monitor progress, maintain continuity in communications across a bigger team, and ensure alignment between both the pedagogical mechanisms and the features and functionality of the DSLAB.
Building and maintaining a successful and efficient team can be challenging, more so when three members work remotely (in California, Washington, and Connecticut), while the remainder are based in an office in Detroit, Michigan. While we use a number of collaborative online platforms to streamline our planning and working practices, the foundation of our team’s success in building relationships with each other centers on regular written and verbal communication and a willingness to accept the occasional early start for the West Coasters or after-hours work for those in the Eastern time zone.
We prioritize the development of emotional intelligence as a foundation for the working practices of our team of individuals, who are high achievers with vibrant personalities, against a backdrop of various stages of family life. We find these skills are also relevant when communicating with our customers across the US, and we strive to transfer our practices in empathy and patience while acknowledging multiple perspectives and prioritizing work–life balance.4
Initially, we used Trello to plan our weekly and longer-term tasks and engagements. Trello is free project management software that can be used to track progress against goals. We found that the platform was not able to offer us sufficient flexibility to incorporate fine-grained detail about each task or item, particularly in terms of storing all relevant data in one place. We switched to Airtable, another piece of project management software, as our primary collaborative tool. The success of our work with this platform begins with our hour-long weekly team meetings during which we walk through the board together, with each team member providing status updates for tasks assigned to them or debriefs on engagements that have taken place. We have several tabs for each group of activities, including ongoing work, conferences, user engagements, and team paid time off (PTO). These map to a calendar so we can keep track of who is doing what on a daily basis within the broader context of the team’s work goals.
We have various avenues for communication structured according to formality and expectations of a response. Email is the most traditional medium and is used for communication that does not necessarily require an instant answer and may require some work or thought. We use Slack (a messaging platform) extensively for asking questions of each other, to which we respond swiftly and dynamically, and for sharing readings and interesting updates, as well as for building team camaraderie by sharing animal photos, memes, and amusing anecdotes. We also have a team Spotify (a streaming music platform) playlist to inspire our work. These latter engagements have been important for building connection, trust, and goodwill in our dispersed team. We schedule “in person” meetings via Zoom, which range from company-wide discussions to smaller monthly group Academic and Sales team meetings, to our regular weekly DSLAB team sync-up. We also have one-on-one meetings with the DSLAB team director each week to discuss successes, problems, and plans. Having a regular cadence of meetings has proven essential in generating sustained forward movement and accountability for each member of the team.
The team generates a fair amount of written material ultimately intended for an internal audience (prototypes, persona write-ups, user stories) as well as an external audience (publication on the product web pages, marketing collateral, or conference presentations). We use Google Workspace, primarily Docs and Sheets, for in-progress work, which may include draft papers, instructions, product documentation, or roadmaps. This has proven a flexible method of team writing and editing that enables us to comment and suggest changes without a string of emails to keep track of. We also use OneDrive for longer-term storage of completed work, including our PowerPoint presentations. We can generate links to all this material to include in our Airtable cards for each task. Having documentation organized and accessible ensures a shared understanding amongst project stakeholders within the team and across the organization and helps eliminate unknowns around the project.
While a public-facing release of a project is generally a desirable endpoint, digital projects cannot simply be released and forgotten. Work must continue, and the product must evolve to remain relevant. Since the Production release of the DSLAB in September 2018, we have released ten subsequent updates which have enhanced the features, functionality, and workflow in the platform. We continue to use what we have learned in project management to ensure future successes. Our iterative approach to research continues to inform our design and timelines for releasing features. Communication remains open and collaborative so that the team shares an understanding of what we want and where we want to go. We have continued to refine our project management practices since our very first efforts during the Alpha phase, and we continue to evaluate the tools we use to track, plan, and communicate, asking whether they still serve our needs. Working in a dispersed, virtual environment has underscored the importance of regular “face-to-face” check-ins via Zoom, not just for work but also for team building activities and connection. We also recognize the importance of iterating, not just in the development of the DSLAB itself but also in our team relationships and working practices.
Ashley is working on a group project that requires her to collect key metadata for the documents within the group’s content set.
Currently, Ashley and her group must go through each document individually and catalogue each piece of metadata.
We should automate this process for Ashley’s group by aggregating key metadata fields from the documents within their content set for them in a structured way.
Ashley can download her content set’s metadata by clicking an inverse button to the right of the current button used to generate a download for the OCR texts.
The Content Set metadata button should say Download Metadata.
When Ashley clicks this button, a pop-up appears containing the following information:
Download Content Set Metadata in the header
Users may download a .csv file containing the metadata captured for each document within their content set, up to 10,000 documents. The following metadata fields are represented in this file:
Gale tries its best to capture as much metadata as possible about the original document. However, there is a possibility that a small percentage of data has not been collected for certain metadata fields. In cases where we are unable to capture or provide data, there will be a blank cell.
When clicked, downloads the .csv file to Ashley’s browser
The current download button text should be changed as follows:
Download changed to Download Content Set
Generating Download should remain the same
Download Ready changed to Content Set Download Ready
There should be a blank space for metadata fields that do not have data available; for example, where we did not capture the author of a newspaper article.
When Ashley downloads her content set’s metadata, the filename should be as follows: ContentSetName_Metadata.csv
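The export behavior described in this user story can be sketched in a few lines of Python. The field names, function name, and document data shape below are illustrative assumptions, not the DSLAB’s actual implementation; only the blank-cell rule, the 10,000-document cap, and the ContentSetName_Metadata.csv filename come from the acceptance criteria above.

```python
import csv

# Hypothetical metadata fields; the actual DSLAB field list is not
# specified in the user story above.
METADATA_FIELDS = ["Title", "Author", "Publication Date", "Source", "Document ID"]

MAX_DOCUMENTS = 10_000  # cap from the acceptance criteria


def export_content_set_metadata(content_set_name, documents):
    """Write one CSV row of metadata per document in the content set.

    Fields that were never captured are left as blank cells, the export
    is capped at 10,000 documents, and the generated filename follows
    the ContentSetName_Metadata.csv pattern. Returns the filename.
    """
    filename = f"{content_set_name}_Metadata.csv"
    with open(filename, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=METADATA_FIELDS)
        writer.writeheader()
        for doc in documents[:MAX_DOCUMENTS]:
            # Keep only the known fields; missing ones become blank cells.
            writer.writerow({k: doc.get(k, "") for k in METADATA_FIELDS})
    return filename
```

A document lacking an Author value, such as the newspaper article in the acceptance criteria, simply produces an empty cell in that column rather than an error.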
Adobe XD. https://www.adobe.com/products/xd.html. Accessed 12 April 2021.
Agile Humanities Agency. https://agilehumanities.ca/. Accessed 7 April 2021.
Airtable. https://airtable.com/. Accessed 1 April 2020.
Axure RP. https://www.axure.com/. Accessed 13 April 2020.
Balsamiq. https://balsamiq.com/. Accessed 12 April 2021.
Bessette, Lee. “Challenges in Digital Humanities.” Inside Higher Ed, 28 October 2012. https://www.insidehighered.com/blogs/college-ready-writing/challenges-digital-humanities. Accessed 16 March 2021.
Cooper, Alan, and Paul Saffo. The Inmates Are Running the Asylum. Macmillan, 1999.
Dingsøyr, Torgeir, Nils Brede Moe, and Eva Amdahl Seim. “Coordinating Knowledge Work in Multiteam Programs: Findings From a Large-Scale Agile Development Program.” Project Management Journal, vol. 49, no. 6, Dec. 2018, pp. 64–77, doi.org/10.1177/8756972818798980.
Figma. https://www.figma.com/. Accessed 12 April 2021.
Gregg, Stephen H. Old Books and Digital Publishing: Eighteenth-Century Collections Online. Cambridge University Press, 2021.
InVision. https://www.invisionapp.com/. Accessed 12 April 2021.
Jenkins, Nick. A Software Testing Primer: An Introduction to Software Testing. San Francisco, Creative Commons, 2008.
Kuniavsky, Mike. Observing the User Experience. Elsevier, 2003.
Nielsen, Jakob. “First Rule of Usability? Don’t Listen to Users.” Nielsen Norman Group, https://www.nngroup.com/articles/first-rule-of-usability-dont-listen-to-users/. Accessed 30 April 2020.
Nielsen, Jakob. “Why You Only Need to Test with 5 Users.” Nielsen Norman Group, https://www.nngroup.com/articles/why-you-only-need-to-test-with-5-users/. Accessed 30 April 2020.
Note.ly. http://note.ly/. Accessed 13 April 2020.
Posner, Miriam. “No Half Measures: Overcoming Common Challenges to Doing Digital Humanities in the Library.” Journal of Library Administration, vol. 53, 2013. doi.org/10.1080/01930826.2013.756694.
Qualtrics XM. https://www.qualtrics.com/. Accessed 13 April 2020.
Rubin, Jeffrey, and Dana Chisnell. Handbook of Usability Testing. Wiley, 2008.
Sketch. https://www.sketch.com/. Accessed 12 April 2021.
Trello. https://trello.com/. Accessed 13 April 2020.
Zacharias, Isabela Cristina Simões, et al. “User Stories Method and Assistive Technology Product Development: A New Approach to Requirements Elicitation.” Proceedings of the Design Society: International Conference on Engineering Design, vol. 1, no. 1, 2019, pp. 3781–3790. doi:10.1017/dsi.2019.385.
Zoom. https://zoom.us/. Accessed 13 April 2020.