Design Theory: Time Requirements

For this project, I read chapter 14 of Designing with the Mind in Mind: Simple Guide to Understanding User Interface Design Guidelines by Jeff Johnson. The chapter, titled "We Have Time Requirements," explains how we perceive time and what we expect of it.

My project focuses on the concept of visual and audio delay and how we perceive it.

Video Delay

These three videos show exactly the same clip, but in some of them the audio and video are synchronized and in some they are not. Can you tell which is which?

Video A is synchronized while videos B and C are unsynchronized. You don’t believe that video B is unsynchronized? Well, let me explain.

Our brains can only perceive a delay between audio and visuals when it exceeds 100 milliseconds (0.1 sec). Video B is off by less than that, so we cannot tell that there is a delay. This is helpful for animators because it gives them some breathing room when matching audio to their animations; they don't have to be frame-perfect. (Johnson, Designing with the Mind in Mind)

Another way to explain this concept is through physical distance. When you watch a drummer playing far away, there is a delay between when you see the stick hit the drum and when you hear the sound. Once you are within about 100 ft of the drummer, the sound reaches you in under 100 ms, so you no longer notice a delay. (Johnson, Designing with the Mind in Mind)
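The drummer example can be checked with a bit of arithmetic. This is just a rough sketch: it assumes sound travels at roughly 1125 ft/s (343 m/s) in room-temperature air and uses the chapter's 100 ms perceptual threshold; the function names are mine, not from the book.

```python
# Rough sketch: at what distance does the audio lag behind the visuals
# enough (> 100 ms) for us to notice?
# Assumptions: speed of sound ~1125 ft/s; 100 ms perceptual threshold.

SPEED_OF_SOUND_FT_PER_S = 1125
THRESHOLD_S = 0.100  # 100 milliseconds

def audio_delay(distance_ft):
    """Seconds between seeing the drum hit and hearing its sound."""
    return distance_ft / SPEED_OF_SOUND_FT_PER_S

def delay_is_noticeable(distance_ft):
    """True if the delay exceeds the ~100 ms perceptual threshold."""
    return audio_delay(distance_ft) > THRESHOLD_S

print(delay_is_noticeable(100))  # at 100 ft the delay is ~89 ms: False
print(delay_is_noticeable(200))  # at 200 ft it is ~178 ms: True
```

At 100 ft the sound arrives about 89 ms after the visual, which sits just under the threshold; that matches the chapter's claim that 100 ft is roughly where the delay stops being noticeable.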

Design Process

When I started brainstorming ideas for how to represent this theory, I mostly thought about interactive elements where I could create an experience and ask my participants about their thoughts.

I contemplated making:
- videos
- loading bars
- pauses in conversations
- wait times

I decided to make videos because they can easily be played back and I can easily show that video B is in fact unsynchronized.

Video B: You can see that the blue bar is where the audio has been moved; the gray bar shows where the original audio was.

Once I settled on making videos, I had to figure out what I would be recording. It had to be something where viewers could see the moment the sound was created, so that they would know when the audio was off from the visuals. That is why I decided a clap was best for my videos. A clap makes a single, sudden sound; with a continuous sound like an instrument or a whistle, it is much harder to tell exactly when the sound starts, so any delay is harder to detect.


The results of my test were unsurprising: my participants correctly labeled videos A and C, but incorrectly labeled B as synchronized. They were slightly surprised to learn that video B was not synchronized, though they had suspected I was asking them a trick question.

While their reactions to my project were what I expected, I enjoyed hearing their real-life examples of this theory. My favorite example they came up with was thunderstorms: we see the lightning several seconds before we hear the crack of thunder.


Overall, this project was fun and I learned a lot. I learned that there are multiple ways to solve a design problem: several of my brainstormed ideas would have represented this concept just as effectively. I also learned that the people I test might notice things I hadn't, like the examples they came up with that I hadn't thought of. I learned a lot about my theory as well; I wouldn't have given time delays much thought if it hadn't been for this project. Deciding on a concept for this project and going through the process of creating it took some work, but I enjoyed it.

Interactive Design major with a Game Design concentration.