
Creating Sonic Immersiveness: Sound Generators and Digital Audio Workstations for Generating Sound and Music for Digital Stories

Published on Sep 13, 2024

The materials provided here include several resources presented during the 2023 Digital Humanities Summer Institute Aligned Conference on Open/Social/Digital Humanities Pedagogy, Training, and Mentorship. Aimed primarily at digital humanities educators, these resources explore digital storytelling with a particular focus on the integration of music and sound. Although they centre on GarageBand, which is used here to create the associated music and sounds, other applications in this area work similarly and follow the same principles (see Supplement 3).

At the heart of this article is a video recording of the conference presentation (Figure 1) that explains the concept of digital storytelling and walks the viewer through a possible creation process, with a particular focus on the musical dimension, because music and sound are essential for creating immersiveness. The video serves as a source of ideas and examples for digital storytelling connected to cultural topics. It also shows examples curated by cultural institutions to illustrate the application of this concept. In addition, the video outlines possible scenarios for implementing digital storytelling in educational environments. A significant part of the video explains how a Digital Audio Workstation (DAW) works, using GarageBand as an example. Although DAWs are usually associated with the professional music industry, free or inexpensive applications make it possible to produce music outside of these boundaries. Demonstrations in the video illustrate how even people without in-depth theoretical and methodological knowledge of music can use automated tools to generate music and feed it into a DAW. A specific application for the automated generation of tone sequences is described in more detail, while additional tools for music and sound generation are referenced in the supplementary materials (Supplements 1, 2, and 3). Beyond the realm of music, the video illustrates how images and video elements can be integrated to increase the overall appeal of digital stories. The video concludes by exploring how teachers can incorporate this technology into their classrooms.

To make the steps shown in the video more accessible, a corresponding transcript of the video is provided.

With didactic considerations in mind, the materials also provide examples and use cases of digital storytelling with a music-centred focus. Supplement 1 presents an example created in a DAW, including a corresponding MP3 sound file.

Teachers who want to gain a deeper insight into how teaching settings on this topic might be designed didactically can consult a sample course plan (Supplement 2). This is divided into different phases with varying levels of learner involvement and provides links to further resources.

In addition, Supplement 3 lists alternative and additional applications that can be used (various generation methods and music analysis tools, including options for different operating systems).

Overall, these resources are intended to provide an informative and inspiring reservoir for educators who wish to explore innovative ways to integrate digital storytelling, particularly with a focus on music and sound production processes, into their teaching.

Figure 1: Presentation for Open/Social/Digital Humanities Pedagogy, Training, and Mentorship 2023, Echo360, https://echo360.ca/media/239f0ba4-3bef-493e-8654-b5f6e18e569a/public.

Edited Video Transcript

Hello and welcome to the video: Designing Sound and Music for Digital Storytelling—Creation and Production with Digital and AI-Based Applications as Part of Project-Based Humanities Courses.

My name is Katrin and I’m from the Digital Humanities Department at the University of Jena, Germany. Digital storytelling has emerged as an important topic in the curriculum for students of the digital humanities (DH).

But story creators face some hurdles due to copyright regulations, especially when it comes to the sound of their stories, because existing sounds cannot simply be accessed and reused in their own media products.

AI-based and digital music production programs offer an opportunity to create sound yourself and to use it as you wish—even if you are not a professional in the field of music or music production.

In the following I’m going to explain what digital storytelling is and give some examples. Furthermore, I point out possible topics for a teaching/learning scenario with reference to the humanities. I’m going to explain what a Digital Audio Workstation (DAW) is, why you should use it, and how you can use it. Finally, how to schedule this topic in a humanities course will be addressed. Additional material is also provided for this.

What is Digital Storytelling? Digital storytelling is a form of storytelling that uses digital media and technology to create, share, and present stories. It's a creative and interactive way of telling stories and can take various forms, including video, audio, animation, interactive websites, podcasts, blogs, and social media.

Digital stories can be both fictional and non-fictional and presented in different genres. Digital storytelling is used in various fields including education, journalism, arts, and culture.

An important aspect of digital storytelling is the possibility of audience participation and engagement. Digital stories can often be interactive and engage the audience, whether through the ability to make decisions, influence the course of the story, or otherwise interact with it. This creates a special experience.

It therefore also provides opportunities to tell and share stories about cultural heritage, identity, traditions, and other aspects of culture.

Within the cultural context:

  • It [digital storytelling] can preserve cultural heritage;

  • It can foster cultural diversity and inclusion;

  • It can culturally educate;

  • It can be used for cultural marketing and tourism; and

  • [It can be used for cultural activism because] it enables activists to tell stories and spread their messages.

Digital storytelling therefore plays an important role in culture, transforming the way stories about cultural heritage are told, shared, and preserved.

Here are some examples of digital storytelling in famous cultural institutions: The NMAAHC (National Museum of African American History and Culture) in Washington, D.C. uses digital media, including video, audio, interactive displays, and other multimedia formats, to tell stories about African Americans and their contributions to American history and culture.

Figure 2: Example of African American story search from the NMAAHC.

These stories are presented in an engaging and immersive manner to engage the visitor in the narratives of the diverse aspects of African American history.

On their website you can explore many facets of digital stories. Here you can see some stories with different topics and different kinds of media: for example, an embedded video and links to more resources, or full virtual tours of exhibitions that incorporate stories, images, videos, or timelines.

Another famous institution that uses digital storytelling is the Rijksmuseum in Amsterdam. The Rijksmuseum was established in 1800. It houses an impressive collection of Dutch art and history. Currently it’s hosting an exhibition of Johannes Vermeer, who is known for his detailed and captivating depictions of everyday life in the 17th century. This exhibition can also be experienced virtually (Rijksmuseum).

Vermeer is one of the best-known Dutch masters of the Golden Age and is often referred to as the "master of light".

Here you can see some of his famous paintings, such as Girl with a Pearl Earring or The Milkmaid.


You can also experience a tour of historic Delft, where he was born and lived.

Figure 3: “Start the Tour” Splash page for the Rijksmuseum’s online exhibit.

Here, in particular, one can see how music and sound can further enhance the digital story and make it more immersive.

Figure 4: Video from the museum.

You hear strings that convey a special atmosphere [music playing] or birds in the background creating an open and welcoming atmosphere [sound of running water and bird song].

Digital storytelling is not only relevant for working in a cultural institution, but also for working in organizations, companies, or as a freelance artist. This relevance also makes the topic suitable for DH courses, where it can be incorporated into teaching.

So how can you proceed if you want to integrate or convey the topic in your teaching?

Here you see two possible topics where storytelling can take place.

In the first, students explore some of the existing examples of digital storytelling that relate to a cultural theme and use music.

Then the students think for themselves: they start designing their own virtual story on a topic. In the second example, the students draft their own virtual exhibition experience. For that, virtual exhibitions or exhibits can be explored; Google Arts & Culture can be used for this. Furthermore, the students can use existing free sound files from platforms or create their own sounds that they think fit the exhibition they have designed.

Digital storytelling is versatile. However, if you wish to focus a little more on the music component, it is important to note that not all participants in a course will have [experience] in this area.

Here digital tools can be used to create and edit [sound] without necessarily being a music producer. DAWs and AI-based chord progressors work well for this.

A DAW is a digital platform for recording, editing, and publishing audio and can consist of (proprietary) software and hardware elements (Rouse). Music or sound is either recorded directly into the program or created directly in the software with program-internal functions or plugins. Sound can be edited with virtual mixers and filters, and managed and shared with organizational tools (Rouse). Some of you may be familiar with the popular free audio software Audacity. However, it does not meet the definition of a DAW in the narrower sense, because functions such as creating or transforming chords, keys, and notes, playing and sampling virtual instruments, and accessing a sound library from which loops can be embedded are not provided.

DAWs focus on composing, arranging, mixing, and mastering music within a digital infrastructure.

In doing so, they digitize music production and make it accessible to non-professional musicians (Bell), and it becomes possible to use them outside of the music industry, for example in teaching contexts (Walzer).

However, no widespread use of DAWs in education is currently evident (Uyub). Studies examining the use of a DAW in teaching usually refer to commercial software (Stickland et al.). Concepts for implementing DAWs in a teaching-learning setting for students with an explicitly non-musical practical or theoretical background are not very common.

So how can a DAW be used if you don't come directly from the field of music theory or practice?

For this purpose, the focus is on the DAW GarageBand. Many of the DAWs on the market are paid software [e.g., Ableton, Logic, Cubase] or require registration [e.g., Bandlab]. For educational use, this can be a hindering factor. Apple's GarageBand software is (mostly) pre-[installed] on the iPad, iPhone, and Mac. The advantage of GarageBand lies precisely in its intuitive usability, which enables even non-professional musicians or music theorists to use the program to learn how to produce and edit sound or music (Siemon; The GarageBand Guide).

[Now,] I’m going to introduce GarageBand on the Mac to you and how you can use so-called chord progressors for generating melodies that can be further edited in GarageBand.

When you click on the GarageBand icon, the program will open. Here you can see that you can create a new project and adjust settings beneath it (for example, regarding the tempo or the key). But we can also adjust all of this later, directly in the project. If you already have project files, you can either open them directly by clicking on them, or open the most recent ones here. The file extension of GarageBand files is .band.

When you open an empty project, this window appears by default, where you have to choose what format the first track in the project should have. You can choose between software instruments, audio tracks, or so-called drum machines. If you want to record your voice using a microphone or connect an electric guitar or bass via an audio interface, for example, then choose the blue option. If you want to start with a beat, then the yellow one.

We'll start with the Software Instrument, which by default appears as the Keyboard when we select it. We go to “Create” [which creates] and we [arrive] at the project canvas. Here we can now continue working directly and explore a lot.

Now you can see that a musical keyboard automatically appears, which belongs to the software instrument that we have selected. But we can also close it for now, because you can open it again at any time by going to Window and Show Musical Typing. Show Keyboard opens a simpler representation of the keyboard. In theory, you can start playing here.

Figure 5: Presenter demonstrates some notes of the C major chord on the digital keyboard.

But first, let's take a look at the menu itself.

Here in the middle is the main menu and the most important settings for the entire project. Here you can also edit the key or the tempo again.

Different views can also be set here. Next to it you will see various buttons like on a music player. In order to try them out, we first need sound material in the project. I'm now recording the notes of the C major chord I just played [metronome sound starts]. This is easy to do with the red record button [presenter plays notes of the C major chord].

We can now play it back or repeat it as often as we like by clicking on it here and dragging it into a so-called loop shape. If we want to listen to a section over and over again, for example in projects with several different parts, we can choose the Cycle option, which will then repeat the section highlighted in yellow over and over again [the metronome and chord notes play in a loop].

Let's look at the menu here on the left. Up here we see the instrument library. This allows us to change the instruments in our project. Warning: this only works with MIDI files and not with audio tracks, which I will discuss later. For example, we can now change the recorded keyboard track to another keyboard instrument or, say, a string instrument [A string instrument and a metronome play in C major].

The question mark opens a help menu, which can be very informative, especially if you are new to the program.

Clicking on the controller opens the so-called Smart Controls. They can be used to change highs and lows as well as the sound of the respective track [metronome and notes].

With the scissors we can now make changes in our track. For example, I wasn't always quite in time when playing, which I can now manually adjust here in the so-called piano roll [music and metronome].

Speaking of beat: If I want the recorded music to be faster, I can change that afterwards. The same applies to the key.

Here on the right, we have another metronome that we can use as an aid, but you can also turn it off. The overall volume can be changed here [music plays without the metronome].

Here you can make notes and this symbol on the far right takes you to what is probably the most important function in GarageBand—the Sample Library of Loops.

You can help yourself to this library and simply browse through it to find what you like. Listen to a bit of it to hear what goes with the track being played.

We'll take a beat and maybe this second piano here.

The key of the sample from the library automatically adapts to the key set above.

Nevertheless, from time to time there can of course be tones that don't quite fit.

And here we come to an important point—the difference between MIDIs and audio files—so-called .wavs.

While a WAV file mainly contains audio waveform data, that is, how the magnitude of a vibration changes over time, a MIDI file consists of musical instructions to be interpreted. This also allows us to manipulate MIDI files directly in the program.

For WAV tracks, only the settings regarding the audio output can be changed, not the individual sound properties themselves. You cannot change the instrument in the instrument library here either. But you could theoretically export the audio files, and some services offer a conversion to MIDI. Often the result doesn't sound very good.
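To make the difference concrete outside of GarageBand, here is a minimal Python sketch (the NumPy, SoundFile, and MIDIUtil libraries are assumptions on my part and do not appear in the video). It writes the same musical idea twice: once as raw waveform samples in a WAV file, and once as note instructions in a MIDI file that a DAW can reinterpret with any software instrument.

```python
import numpy as np
import soundfile as sf           # pip install soundfile
from midiutil import MIDIFile    # pip install MIDIUtil

# WAV: fixed waveform samples -- one second of a 261.63 Hz sine wave (roughly C4).
rate = 44100
t = np.linspace(0, 1, rate, endpoint=False)
samples = 0.3 * np.sin(2 * np.pi * 261.63 * t)
sf.write("c4_tone.wav", samples, rate)      # the sound itself is baked in

# MIDI: instructions such as "play note 60 for one beat", interpreted by the DAW.
midi = MIDIFile(1)
midi.addTempo(track=0, time=0, tempo=120)
for beat, pitch in enumerate([60, 64, 67, 72]):   # C4, E4, G4, C5
    midi.addNote(track=0, channel=0, pitch=pitch, time=beat, duration=1, volume=100)
with open("c_major_notes.mid", "wb") as f:
    midi.writeFile(f)   # import into GarageBand and assign any software instrument
```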

A lot of things are possible with this. For example, if you want to create background noise for your digital story, over which you can add a narration using a voice-over, you can also use the Toy Box sample pack here. Here we have some nice atmospheric sounds [nature sounds] or just sounds from outside [old-fashioned car horn].

You can try it out a bit and you don't necessarily have to have theoretical musical knowledge here, so be creative and start!

If you still want to create your own more complex melodies faster, you can use AI-based applications for chord generation in addition to GarageBand. Sophisticated programs exist here too, but they are often expensive or, if used for free, prohibit further commercial processing. Examples are Aiva, Amper Music, or Soundful. These programs then also take over the entire music production.

But if you want to get a little active yourself, so-called chord progressors are very helpful.

Chord progressions are the basis of harmony in the Western musical tradition and of popular and traditional styles of music and genres such as blues and jazz. In these genres, chord progressions are the defining feature on which melody and rhythm are built.

These are often learned in music theory classes because they build on scales and harmonies.

These theoretical aspects are already programmed into chord progressors and thus flow into your own music creation.
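As a rough illustration of how such theory can be "programmed in", the sketch below (plain Python with the MIDIUtil library; this is my own assumption and not a tool used in the video) derives triads from the degrees of a C major scale and writes the C, Em, Am, G progression heard later in the video to a MIDI file.

```python
from midiutil import MIDIFile   # pip install MIDIUtil

C_MAJOR = [60, 62, 64, 65, 67, 69, 71]   # MIDI numbers for C4, D4, E4, F4, G4, A4, B4

def triad(degree):
    """Stack two diatonic thirds on a scale degree (1-7), staying inside the key."""
    i = degree - 1
    return [C_MAJOR[(i + step) % 7] + 12 * ((i + step) // 7) for step in (0, 2, 4)]

progression = [1, 3, 6, 5]   # scale degrees for C, Em, Am, G
midi = MIDIFile(1)
midi.addTempo(track=0, time=0, tempo=120)
for bar, degree in enumerate(progression):
    for pitch in triad(degree):
        midi.addNote(track=0, channel=0, pitch=pitch, time=4 * bar, duration=4, volume=90)
with open("progression.mid", "wb") as f:
    midi.writeFile(f)   # one whole-bar chord per scale degree, ready for a DAW
```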

Let's look at [OneMotion], which is an example of a free [online chord progressor].

Figure 6: The OneMotion Web Tool

Here you can individually set the desired key, select the instrument, and determine the volume or the style of the tone sequence. If you later want to export it as a file and import it into GarageBand, it is important to use the same key and tempo settings here. So, let's see what sounds good [presenter plays keyboard—arrives at a C Em Am G progression]. This sounds good, so let’s take it!

The tone sequences can be dragged and dropped down here. Entire sequences can be created in this way. And these can then also be exported here as MIDI or WAV.

In the downloads you will then find the respective melody. The tracks can now simply be dragged into the GarageBand project. Finally, we are going to try this with another project.

I have created some content for my own digital story and now want to add some music and sound. I’m going to take this guitar because I think it fits very well. And now I want to add some chords and notes, and I will use the chord progressor. Make sure to use the same key and beats-per-minute settings.

[Presenter demonstration without commentary]

So, this is [what my final track sounds like]: [guitar playing with drumbeats].

And this is my final story, and this is what it sounds and looks like:

Figure 7: Presenter’s Digital Story Example

[A video plays with a voice over and the same music as the previous clip plays in the background. The voiceover says: “They say ‘Not all those who wander are lost’, but have you ever thought about going on a trip, out of your comfort zone? To embark on a journey, discover new cultures, make friends all over the world, and enjoy the beauty of nature? This is the story of my road trip. And what’s yours?”].

Finally, you might be wondering how to fit all of this into a storytelling course. I have created a [course] schedule for this, which you can find in the related materials. So, feel free to try it and let me know if you have any questions or comments.

Have fun trying it out and thank you very much.

Related Materials

Supplement 1: Example from Literature, Arts, and History

Many themes can be worked on with GarageBand. I have created a sound that could fit the atmosphere of the example below. Tasks to create music for the specific mood of a historical event, work of art, or artifact can also be given to students as a way into the topic.

Storming Of The Bastille

The historical event of the storming of the Bastille, which triggered the French Revolution of 1789, is the subject of many works. In addition to historical sources or explanatory videos, artistic works can also be used to frame events. Delacroix's painting from the Louvre, La Liberté guidant le peuple (which depicts the revolutionary uprising of July 1830), serves as a visual template for the revolutionary mood:

Figure 8: Delacroix, Eugène, and France. Le 28 Juillet 1830. La Liberté Guidant Le Peuple. huile sur toile, 1830. Louvre, Musée du Louvre, https://collections.louvre.fr/ark:/53355/cl010065872.

For this purpose, an epic sound should be created that conveys the mood in this picture.

Figure 9: Listen to my sound “Storming of the Bastille”.

Supplement 2: Course Schedule

The course schedule includes nine phases or topics, each with related material (learning materials and resources as well as additional input).

Phase 1: Introduction to the topic

Description

Theoretical input (a scheduled or recorded presentation): An introduction to digital storytelling and the prospect of students being able to create their own digital stories (including sound and music).

Informative material for the introduction to digital storytelling can be provided additionally.

Materials

  • Theoretical slides or videos

Phase 2: The basics: digital storytelling

Description

Theoretical input (a scheduled or recorded presentation): The essential elements of a digital story are explained. Students learn how to create a digital story. The material can be presentations or videos that illustrate the different elements.

Materials

  • Theoretical slides or videos

Phase 3: Focus: music production for the story

Description

Theoretical input (a scheduled or recorded presentation). Now it's about putting the focus a little more on the music as an elementary part of the stories.

Present selected ways of generating sound (the students' own programming, chord progressors, AI tools, …).

Materials

Phase 4: Try to generate and produce own sound or music

Description

Self-learning and practising unit: Let the students try out their own sound generation (e.g., in R Studio, OneMotion, GarageBand, …).

Phase 5: Discussion round

Description

Reflection on what has been learned and practised: Experiences from the self-learning unit can now be discussed. If necessary, help or solutions for problems that have arisen can also be shown.

Phase 6: Try to edit produced sound in the DAW GarageBand

Description

Self-learning and practice. Students work directly with the DAW GarageBand. This requires communicating the technical requirements and functionality beforehand. The program can be demonstrated in its basic features. Learning materials such as videos, presentations, or virtual worksheets can provide information that also helps in understanding the program.

In addition, students try out the program and can collect problems they find for a discussion.

Materials

  • Introductory Guides to Basic GarageBand Features:

Phase 7: Discussion Round: Reflection on what has been learned and practised

Description

Experiences from the self-learning unit can now be discussed. If necessary, help or solutions for problems that have arisen can also be shown.

Phase 8: Final Task: Work phase (in groups)

Description

Plan for about two to four weeks (depending on the course or program).

The final assignment: telling an aurally appealing story with cultural links as a group. During the work phase, offer time (virtual or in person) for questions.

Materials

To show students how to integrate the sounds into a potential digital story, the video tutorial provided in this article can be used.

If you don't schedule synchronous meetings, check in with your students regularly and ask about their progress.

You should also specifically formulate the requirements for the final digital product and what the students should present in the final showcase.

A kind of checklist can be prepared for this purpose.

Phase 9: Final presentation: Presentation/ showcase of the group projects

Description

Presentation and discussion rounds:

  • The students should not only present and explain the digital product to the audience. They should also reflect on how the group work process went. Problems can also be pointed out here, so that all groups can learn from each other.

  • If some groups want it, the created products could also be made visible on a website.

Materials

  • If you have further interest, the topic can also be expanded, from digital stories to digital events. They allow sound and music productions to be used on an even larger scale (see, for example: Lautamäki & Tikkaoja).

Supplement 3: DAW Supplement: Alternative Applications and Processes for Automated and Semi-Automated Music Generation

Automated or digital music generation combines music and Artificial Intelligence (AI) and focuses, among other things, on music generation methods. It is a branch of digital musicology, which in turn can be understood as a component of Digital Humanities.

The automatic and AI-supported generation of music with a wide range of functions is offered by proprietary providers such as Aiva, Amper Music, beathoven.ai, Riffusion, Ecrett, or Soundful. Nonetheless, their functionalities differ and have limitations (e.g., a limited number of downloads, different costs for downloads or subscriptions, or license-free use with fewer functionalities).

One technology from the music industry that supports music generation and editing is the Digital Audio Workstation (DAW). DAWs are digital platforms used in the professional music industry for composing, recording, arranging, producing, mixing, and mastering sound (Walzer; Rouse). They enable users to play and/or record music directly in the software with program-internal functions or plugins (e.g., virtual instruments, loops, or via an interface).

Although it is possible to generate tone sequences using functions such as arpeggiators in DAWs, this cannot really be called automated music generation. The program still requires the user to enter notes or melodies and thus to have at least an idea of how tones and sounds can be created or combined.

But a key advantage of the DAW is the ability to mix and master pieces of music, which is why it is so well suited for creating publication-worthy musical elements as part of digital cultural productions. Additionally, freely usable or inexpensive licenses for DAWs indicate a democratization of music production and its accessibility to non-professional musicians (Bell), which can extend to educational settings (e.g., Walzer; Stickland et al.; Pendergast). Nonetheless, widespread use of DAWs in educational settings is currently not evident (Uyub).

Programming scores

In addition to using existing applications to generate sound, you can also program tone sequences yourself or use models that generate music based on machine learning processes.

There are different options for this. In Python, for example, you can use the Music21, MIDIUtil, or Magenta libraries. For R, the tuneR or seewave libraries can be used. They all offer functions or models that can be used to generate tone sequences.
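As a minimal sketch of this approach (assuming the Music21 library is installed; the note choices are arbitrary), a short tone sequence can be assembled and exported as a MIDI file in a few lines:

```python
from music21 import note, stream, tempo   # pip install music21

melody = stream.Stream()
melody.append(tempo.MetronomeMark(number=100))            # 100 beats per minute
for name in ["C4", "E4", "G4", "A4", "G4", "E4", "C4"]:   # an arbitrary example phrase
    melody.append(note.Note(name, quarterLength=0.5))     # eighth notes
melody.write("midi", fp="phrase.mid")                     # a MIDI file a DAW can import
```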

Tone sequences can also be programmed as manual instructions, which is particularly useful if you have knowledge of tones and scales. For R, the gm (generate music) package provides a simple and intuitive high-level language for composing algorithmically without users having to consider the details of music notation and without having to pay for licenses. Because the gm package is well documented, operations for students to generate music can easily be taken from the guide (Mao). Basic functions within the generation pipeline are: initializing an empty music object, adding components, and printing or converting scores or audio files.

Within these infrastructures, line objects are used to represent musical lines, which in turn contain pitches, durations, tuplets, ties, breaks, or offsets. Multiple line objects are suitable for sheet music and complex scores.

To start at a slightly more basic level within teaching-learning settings, MIDI tables containing information about tones can be consulted. Students can look up the numbers for the corresponding notes and insert them into the program code (e.g., Scientific Pitch Notation: Table of Note Frequencies). After a few coding exercises and transformations of the music, the created objects can be saved, exported, and subsequently processed further in the DAW.

Optional: Analyzing scores and waves digitally

The analysis of music or sheet music is an important area of musicology. With notation programs, extensive pieces of music can be analyzed specifically and in an appropriate environment (polyphony or playing style, for example). MuseScore is one of the standard notation programs that allows users to carry out these operations. Alternatives such as TuxGuitar, Rosegarden, or Sibelius have more or less similar functionalities. In addition, sound-specific analyses can be carried out with tools like Sonic Visualiser.

The step of analyzing is especially suited for a target group that has more experience with music and sound. Nevertheless, a look at such programs may also provide information for non-professional musicians.

Editing: Music and Sound work and finalizing in the DAW

DAWs almost always work with different tracks within a file. When you open a DAW, the interface appears and functions such as creating or opening a project and setting parameters (e.g., tempo or key) can be carried out.

Audio or instrument loops can be selected from integrated music libraries, and voices or instruments can be recorded directly and further edited. If you start with a software instrument, the respective tones can be edited with the piano roll and instruments and sounds can be changed even at an advanced stage of the working process.

Since it is also possible to import existing sound files into a DAW, you can continue working with the previously created audio file. This can be further processed into a complex piece of music with different tracks.

The advantage of a DAW lies in the sophisticated environment in which one can carry out all the steps necessary to produce one’s own music and sounds, including mixing and mastering. Music can therefore be edited directly in the DAW for final publication on a platform. This area is highly professionalized and shaped by the music industry, and each platform has its own requirements (e.g., a certain volume or loudness level for music or for videos with music). For this reason, no general rules can be given here, but it is good to know that there are different requirements for publishing music and for publishing multimedia products with an audio channel. A first overview is provided by Ian Stewart in “How to Master for Streaming Platforms.”
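As an illustration of what such loudness requirements involve, the following sketch (assuming the third-party SoundFile and pyloudnorm libraries, and using -14 LUFS only as an example figure that several streaming services cite as a reference) measures the integrated loudness of a bounced mix and normalizes it toward that target; the actual target should always be taken from the respective platform's specifications.

```python
import soundfile as sf      # pip install soundfile
import pyloudnorm as pyln   # pip install pyloudnorm

data, rate = sf.read("final_mix.wav")        # the track bounced from the DAW
meter = pyln.Meter(rate)                     # ITU-R BS.1770 loudness meter
loudness = meter.integrated_loudness(data)   # integrated loudness in LUFS
print(f"Integrated loudness: {loudness:.1f} LUFS")

# Normalize toward an example streaming target of -14 LUFS.
normalized = pyln.normalize.loudness(data, loudness, -14.0)
sf.write("final_mix_normalized.wav", normalized, rate)
```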

Bringing everything together in a content creation program

When creating multimedia content that contains music as well as images and text, it is also important to consider whether additional audio tracks need to be inserted. There may be speaking voices that provide descriptions or explanations regarding the content presented. In addition to the music or sound, these must also be incorporated into the final object.

Multimedia and multi-track content creation tools are suitable for combining music, images, audiovisual media, and text content. Applications on the market that can be used relatively easily and with a low threshold (e.g., with drag-and-drop options for images and sounds) include Canva, Prezi, or VistaCreate. This topic is also important for cultural organizations and institutions and their presence on social media platforms, as it allows royalty-free sounds to be generated for postings (Gandhi).

Using the functionality these applications offer undoubtedly requires a certain amount of training. For this reason, the setting described here is particularly suitable for project-oriented learning, in which learners can contribute different skills within groups.

Works cited

Bell, Adam Patrick. Dawn of the DAW: The Studio as Musical Instrument. Oxford University Press, 2018.

DNB. “Oral Traditions of Storytelling and Cultural Transmission: Singers, Rhapsodists, Troubadours.” Deutsche Nationalbibliothek (German National Library), virtual exhibition, n.d., https://mediengeschichte.dnb.de/DBSMZBN/Content/EN/SoundsSymbolsScript/01-saengertradition-en.html. Accessed 1 Mar. 2024.

Gandhi, Medhavi. “Social Media Handbook: for Cultural Institutions & Professionals.” Embrace Digital, the Heritage Lab, 2020, https://www.theheritagelab.in/social-media-handbook-culturalprofessionals/. Accessed 1 Mar. 2024.

Lautamäki, Satu and Oona Tikkaoja, editors. “Planning and Creating Virtual Events: Experiences, Economics and Technical Solutions”. Humak University of Applied Sciences Publications, 2022, https://www.humak.fi/en/publications/planning-and-creating-virtual-events/. Accessed 1 Mar. 2024.

Mao, Renfei. gm: Generate Music Easily and Show Them Anywhere. GitHub, 2022, https://flujoo.github.io/gm/. Accessed 1 Mar. 2024.

Pendergast, Seth. “Creative Music Making with Digital Audio Workstations”. Music Educators Journal, vol. 108, no. 2, 2022, pp. 44–56.

Rijksmuseum. “Stories.” Rijksmuseum, 2023, https://www.rijksmuseum.nl/en/stories/. Accessed 1 Mar. 2024.

Rouse, Margaret. “Digital Audio Workstation”. Techopedia, 2022, https://www.techopedia.com/definition/6774/digital-audio-workstation-daw.

Siemon, Andrew. “GarageBand: iOS versus macOS. (All You Need to Know)”. Producer Society, https://producersociety.com/difference-between-mac-os-ios/. Accessed 7 Mar. 2024.

Stewart, Ian. “How to Master for Streaming Platforms: Normalization, LUFS, and Loudness”. Izotope, https://www.izotope.com/en/learn/mastering-for-streaming-platforms.html. Accessed 1 Mar. 2024.

Stickland, Scott, et al. “A Framework for Real-Time Online Collaboration in Music Production Conference”. Australasian Computer Music Conference, 2018, Perth, Australia. https://novaprd-lb.newcastle.edu.au/vital/access/%20/manager/Repository/uon:42420.

The GarageBand Guide. “Playlists”. @TheGarageBandGuide, YouTube, https://www.youtube.com/c/TheGaragebandGuide/playlists. Accessed 7 Mar. 2024.

The Smithsonian Museum. “Online Exhibitions”. https://www.si.edu/exhibitions/online. Accessed 1 Mar. 2024.

Uyub, Aiman Ikram. “Digital Audio Workstation (DAW) as a Platform of Creative Musical Performance Experience”. Kupas Seni, vol. 10, special issue, 2022, pp. 52–55. https://202.45.132.61/index.php/JSPS/article/view/6583/3449. Accessed 1 Mar. 2024.

Walzer, Daniel A. “Blurred Lines: Practical and Theoretical Implications of a DAW-Based Pedagogy”. Journal of Music, Technology & Education, vol. 13, no. 1, 2020, pp. 79–94. https://doi.org/10.1386/jmte_00017_1. Accessed 1 Mar. 2024.
