We use a combination of vision and audition every day to gather information and interact with the world around us. However, this combination is not yet reflected in web interfaces. Most UIs are overwhelmingly vision-oriented and graphics-driven.
Because of my music background, I have always advocated for the use of sound on websites. A marriage of vision and audition could be a powerful tool for interacting with human-computer interfaces.
Rafa Absar and Catherine Guastavino, authors of the paper Usability of non-speech sounds in user interfaces (2008), note that:
"If all the information is presented visually, it may lead to visual overload and may also lead to some information being missed, if the eyes are focused elsewhere."
At the time, the conventional wisdom was that sounds belonged only in games. This perception came from the mistaken assumption that users gave the desktop their full attention and would hardly ever get distracted; therefore, sounds would be unnecessary and could even detract from the user's experience.
There were also technical limitations that could make the whole experience poor: inconsistent browser support for audio formats and slow connections (audio files took a long time to load on most devices).
Fortunately, things have changed. There is less guesswork in UI development, UX has become a prominent field, and the number of studies on the use of sound in UIs has grown.
Sounds are becoming part of product personality and emotion. It is difficult to imagine, for example, products like Slack and Skype without their unique sounds.
By the way, companies like Facebook and Apple have in-house teams dedicated to designing sounds for their products.
Will Littlejohn, director of sound design at Facebook, says that, contrary to that old view, very simple sounds can dramatically affect how people feel about a product:
"Sound designers bring context to your world and use the sonic realm to do it. The sounds you hear while you experience other sensory inputs play a large part in how you interpret reality."
I like this idea of adding sounds to UI. It gives me the feeling that these days, building user interfaces is more and more like composing symphonies. Everything should be perfectly synchronized. When a visual animation finishes, a sound should be played, and so on. It’s like an orchestra - many little details (graphic, motion, and audio) can work in harmony to create a great digital experience.
That said, it's important to know how to integrate sounds into interfaces. Google’s Material Design team created an excellent guide on how sound can reinforce specific functionality. It's worth reading.
Basically, there are three uses for sounds:
1) Sound as hero
These sounds highlight a critical moment, like a celebration when the user clears the email inbox, and they can enhance the experience. For example, for an upload that takes a long time, instead of relying only on a progress bar, a sound can signal that the upload is complete (see the sketch after the quote below).
According to Littlejohn, this "allows people to move on to other things they'd rather be doing instead of watching the progress bar. It's the same cognitive shift you make when you use a timer while cooking dinner."
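Here is a rough idea of what that could look like in plain JavaScript. The sound file, endpoint, and function names are placeholders of my own, not part of any particular product:

```javascript
// A minimal sketch of a "hero" sound, assuming a browser environment:
// play a short chime when a long upload finishes, so the user doesn't
// have to keep watching the progress bar. File name and endpoint are
// hypothetical placeholders.
const uploadCompleteSound = new Audio('/sounds/upload-complete.mp3');

async function uploadFile(file) {
  await fetch('/api/upload', { method: 'POST', body: file });

  // Browsers may block playback until the user has interacted with the
  // page, so play() returns a promise we should handle.
  try {
    await uploadCompleteSound.play();
  } catch (err) {
    console.warn('Could not play the completion sound:', err);
  }
}
```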
2) Sound as decoration
I think this is one of the most difficult ways to work with sound since it involves branding. Sounds used in this way should be carefully chosen because they create a unique voice for the product. They are used to highlight expressive or playful moments. For example, when you start an application, a sound may play to express the product's theme.
3) Sound as feedback
Also called earcons, these sounds are the most common. They reinforce the meaning of an interaction as well as the product's emotion and personality, and they can also call the user’s attention. For example, when you select an item in a list, a click sound is played to reinforce the action and create a two-way dialogue between user and application.
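As a rough illustration, an earcon like this can be wired up with a few lines of plain JavaScript. The selector and sound file below are placeholders I made up for the example:

```javascript
// A minimal sketch of a feedback sound (earcon): play a short click
// whenever a list item is selected.
const clickSound = new Audio('/sounds/click.mp3');

document.querySelectorAll('.list-item').forEach((item) => {
  item.addEventListener('click', () => {
    clickSound.currentTime = 0;        // restart if it's already playing
    clickSound.play().catch(() => {}); // ignore autoplay restrictions
  });
});
```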
I have noticed that these three categories sometimes overlap. Sounds should be used together with visual graphics; you shouldn't rely only on one or the other. Sound is a transient medium, while graphics are stationary.
However, sounds can become prominent in certain contexts (depending on environmental factors). For example, when we can't look at the screen, a sound notification is the only thing that can attract our attention.
Another important thing to keep in mind is how often the event tied to the sound occurs. Consider how frequently the user will hear it in the application: it's essential not to overdo it, which could create an annoying experience.
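One simple way to guard against this, sketched below with a made-up file name and an arbitrary interval, is to throttle how often a given sound can play:

```javascript
// A small sketch of rate-limiting a sound: wrap play() in a simple
// throttle so a frequently fired event triggers the sound at most once
// per interval. The 2-second default is an arbitrary example value.
function createThrottledSound(src, minIntervalMs = 2000) {
  const sound = new Audio(src);
  let lastPlayed = 0;

  return () => {
    const now = Date.now();
    if (now - lastPlayed >= minIntervalMs) {
      lastPlayed = now;
      sound.currentTime = 0;
      sound.play().catch(() => {});
    }
  };
}

const playNotification = createThrottledSound('/sounds/notification.mp3');
// Call playNotification() from the event handler; extra calls within the
// interval are silently skipped.
```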
UI/UX sounds are still a new and exciting topic if you want to focus on something different in the UX area. Like other subjects, the best way to learn this one is to observe how other developers implement it and, of course, get your hands dirty.
Here are some useful free libraries you can use to play with sounds:
I have also created a small example here. The code is available on my GitHub.
If you use a JavaScript framework, adding a sound to a UI is simple: you import the audio file, create an audio object (for example with the Audio() constructor from the HTML audio API), and attach it to an event, like a click on a button.
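Here is a minimal sketch of that flow, assuming React and a bundler (such as webpack or Vite) configured to import audio files as URLs; the file name and component are hypothetical:

```jsx
// Hypothetical example: a button that plays a click sound.
// The bundler resolves the import to a URL for the audio file.
import clickSfx from './sounds/click.mp3';

const clickSound = new Audio(clickSfx);

function SoundButton() {
  const handleClick = () => {
    clickSound.currentTime = 0;
    clickSound.play().catch(() => {}); // play() can be rejected before user interaction
  };

  return <button onClick={handleClick}>Save</button>;
}

export default SoundButton;
```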
If you know of any implementations of sounds in UI worth checking out, please mention them in the comments, and don't forget to check my other posts about UX and UI engineering.
This post is part of a series about UX and UI engineering.
Photos by Parker Knight from Pexels, and Steve Harvey and Soundtrap on Unsplash.