A 3-hour design sprint outline

Outline of the sprint - 3 hrs total

  • Pre-prep - User research, define constraints, gather the tools, prepare space
  • (5 min) Read, understand problem
  • (5 min) Write notes / define constraints / define users
  • (5 min) Competitive research - see how other people have solved similar problems
  • (5 min) Define tasks - Write down key user tasks
  • (10 min) Mind map - dump all thoughts here, generate as many ideas as possible
  • (5 min) User story - create simple user story in the form of a flow chart
  • (5 min) Crazy eights - generate as many crazy ideas as possible
  • (5 min) Crazy eights - repeat; generate as many crazy ideas as possible
  • (15 min) Story board - create a more detailed user story
  • (~ 60 min) Pick solution & refine
    • create a full & final user storyboard with every interaction (a tap, text entry, or other action)
    • create UI screens (prototype) of all steps in the final user storyboard
  • (~ 60 min) Design widget - pick a screen/widget and create a high-fidelity design

Lost in the Crowd

Design solution for the "Lost in the Crowd" design exercise. 
By Chris Vallejos


This is a case study for how I solved the following design problem:
During a crisis, many people turn immediately to their mobile devices for assistance and information. One such situation occurs when parents lose track of a young child at a crowded theme park. Assume an application about that park would be installed on devices of a large number of guests and workers. Design a feature of that application that could help quickly reunite parents with their children, without requiring their children to wear or carry a device.


The Design Sprint & Process

A carefully planned 3-hour sprint

The process for this design unfolded in a carefully planned 3-hour design sprint. Earlier this year I took a Stanford D. School design workshop where I learned some of the design thinking methods pioneered by IDEO. I have since used versions of these methods at my last startup with great success, and I decided to use a modified version for this exercise. I also adapted parts of the Google Ventures design process. You can see the sprint outline here.

Pre-prep

Before starting the sprint, I did a few things. 

I called a few people I knew had kids to do some super-informal user research, which was basically a conversation and a few questions. These interviews gave me useful insight into user needs and goals. I learned that privacy was important, that losing a kid is a moment of high distress, and that parents want to feel safe and want the process to be easy. I also learned that people would feel comfortable with cameras scanning the crowds and using facial recognition technology, as long as it is secure.

I also called Peter Oh to discover if there were any constraints.

I then prepared an outline for the sprint, and I gathered all my supplies. 


The supplies


Start the clock, here we go!

The entire sprint was precisely timed using Insight Timer.

Understand the problem - 5 mins

Once I start the sprint, the first thing I do is understand and clarify the problem. I do this by reading the prompt and asking myself some basic questions: How might I solve this problem? How has it been solved before? What might users need? I had also called Peter Oh before the sprint to uncover any constraints around the problem.


Write notes, define constraints, define users - 5 mins

Once I understand the problem, I quickly write down anything that comes to mind. I also define the users for the app (park employees, guests, and parents). At this point I am thinking about two possible solutions. One uses facial recognition technology. The other could work by sending alerts to nearby park guests and having them visually search for the child.


Competitive research - 5 mins

I then do some quick research to see if anyone is solving this problem already. This is very rapid, and it helps to get ideas going. 


Define tasks - 5 mins

I always define tasks early in any design process. These tasks inform all my interaction design decisions. Tasks also focus me on the user and their needs. 

User task list



Mind map - 10 mins

Mind mapping is my way to get as many ideas out of my head and onto paper as fast as possible. Anything goes here. 

Mind mapping



User story - 5 mins

I sketch up a quick user story. A user story describes the basic interactions a user performs to complete key tasks.

The user story developing at this point is an app that uses facial recognition technology. The theme park will have cameras set up throughout the park. A user starts by taking a picture of their child and uploading it to the park servers. When a child is lost, the park can search for that child and alert the parent when the child is found. There is also a backup solution: if the child is not found within a minute or so using facial recognition, nearby guests are alerted about the missing child so they can join the search.

A basic user story



Crazy eights x2 - 10 mins

Now that I have a rough idea of how my solution might work, I want to sketch as many UI concepts as fast as possible. I do this using crazy eights: a sheet of paper folded into 8 sections, with a quick UI sketch in each section. I do two rounds of this.

Crazy eights UI concepts

More crazy eights UI concepts



Storyboard - 15 mins

Now that my concept is becoming clearer, I create a more detailed storyboard. Here I further develop the interaction design and note any conflicts or issues.


The storyboard is starting to come together



Pick a solution and refine it - 60 mins

For the next hour, I further develop key interactions and finalize a storyboard. I then create a UI sketch for each screen or interaction in my solution.

Detailed and final storyboard

Interaction details
One of the UI sketches


Design a widget - 60 mins

The final step was to design a widget. The most interesting screen is the map UI that park workers use to locate a child, so this is the screen I decided to design.

The map UI park workers use to locate a child 





The Solution

Facial recognition + crowd sourced searching

My solution is a "Find my Kid" feature that uses a combination of facial recognition technology and crowd sourced searching by park guests and workers. 
It works like this. A parent installs the app and registers their kid. When a parent loses a kid, they click "Find my Kid" in the Park app. The app sends a query to the park servers to begin a search for the missing kid using facial recognition technology. The park is equipped with cameras. This is complex technology, but Google could pull it off. And if pulled off well, it would seem like magic to guests.
When searching, algorithms process input from the park cameras, and when the kid is identified, the park employee nearest to the kid receives an alert on their mobile phone. The employee can use a map in the Park app to see the location of the kid, along with a photo of the kid (uploaded by the parent during registration). The employee then finds the kid, calls the parent, and makes sure the parent and kid are reunited.
If the child is not located within a minute or so using facial recognition technology, park guests near the parent receive an alert about the missing child. They can then visually look for the child, and if they find the child, they can call the parent from the app. The more time that passes after the child is reported missing, the wider the radius of guests who are alerted.
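
To make the escalation logic a bit more concrete, here is a minimal sketch of how the server side of this feature might behave: pick the worker nearest the identified child, and widen the guest-alert radius as time passes. Everything in it (the function names, the constants, the location format) is an assumption I am making for illustration, not part of an actual park system.

    import math
    import time

    # Assumed tuning constants, for illustration only.
    INITIAL_RADIUS_M = 100        # guests within 100 m of the parent are alerted first
    RADIUS_GROWTH_M_PER_MIN = 50  # the alert radius widens by 50 m per minute
    FACE_SEARCH_TIMEOUT_S = 60    # fall back to the crowd search after about a minute

    def distance_m(a, b):
        """Approximate distance in meters between two (lat, lon) points."""
        (lat1, lon1), (lat2, lon2) = a, b
        # Equirectangular approximation; accurate enough at theme-park scale.
        x = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
        y = math.radians(lat2 - lat1)
        return math.hypot(x, y) * 6_371_000

    def nearest_worker(child_location, workers):
        """Alert the worker closest to the camera that identified the child."""
        return min(workers, key=lambda w: distance_m(w["location"], child_location))

    def alert_radius_m(reported_at, now=None):
        """The alert radius grows with time elapsed since the child was reported missing."""
        now = time.time() if now is None else now
        minutes_elapsed = max(0.0, (now - reported_at) / 60)
        return INITIAL_RADIUS_M + RADIUS_GROWTH_M_PER_MIN * minutes_elapsed

    def guests_to_alert(parent_location, guests, reported_at, now=None):
        """Guests within the current radius of the parent are asked to join the search."""
        radius = alert_radius_m(reported_at, now)
        return [g for g in guests if distance_m(g["location"], parent_location) <= radius]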

The Prototype


User first hears about the Park app 
When a visitor arrives at the theme park, a park employee tells them about the Park app. Visitors also see signage about the app and the benefits it offers. One of these benefits is the "Find my Kid" feature.




User downloads the app 
The park visitor downloads and installs the Park App on their mobile phone. 




User opens the app
When the user opens the Park app, they land on the dashboard. User clicks Register Child. If user is not a parent, they can dismiss the Register Child button. 

NOTE: I am making an assumption here that the user already knows that clicking Register Child will allow them to enroll their kid in the Find my Kid program. They would have learned about this when entering the park through signage and park employees. 




User begins to register their child with a photo
To register a child, all a user has to do is take a full length photo of their child.  




User previews and confirms the photo 
Confirmation screen displays after user takes photo. User can accept photo or try again. I adapted the Instagram app UI patterns for this photo flow. 




User adds details to complete registration
Once user accepts the photo, they land on a screen where they must enter a name & age for the child. User must also enter their own name and add a phone number. User presses Finish to complete registration. 
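
As a side note, the information collected in this registration flow could be captured in a small record like the one sketched below. The field names and example values are my own guesses for illustration; the exercise does not define an actual data format.

    from dataclasses import dataclass

    # Hypothetical shape of the registration record sent to the park servers.
    # All field names are assumptions made for this sketch.
    @dataclass
    class ChildRegistration:
        child_name: str
        child_age: int
        parent_name: str
        parent_phone: str
        photo_path: str   # the full-length photo taken in the previous step

    registration = ChildRegistration(
        child_name="Sam",
        child_age=6,
        parent_name="Alex",
        parent_phone="+1-555-0100",
        photo_path="photos/sam_full_length.jpg",
    )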




Registration is complete
After the child is registered, the user sees a confirmation. 




When a parent loses their kid...
If user loses their kid, they can open the app and click "Find my Kid."




The search begins!
When user clicks "Find my Kid," they will see a reassuring confirmation that the park is searching for the kid. 

Then a park employee nearest the kid will receive an alert on their mobile device when the child is identified.

NOTE: User can also select when they lost their kid. This time is important because it will influence the radius in which guests and workers get an alert about the missing child. 

NOTE: When child is found, user can click "I found my child."




Worker receives alert when facial recognition locates child
When the child is identified using facial recognition technology, the park worker nearest the child receives an alert on their phone. The park worker then uses the app to find their way to the lost child and calls the parent once the child is found. 

NOTE: For this solution, only workers can see a child's location on the map UI, because it may be confusing or stressful for a parent to use the map to locate a child. This is an assumption, however, and it would need to be validated through testing. 




Worker calls parent when child is found
Park worker can call the parent at any time, especially when the worker finds the child.




When child is found...
When a park worker finds a child, they call the parent and the parent is reunited with the kid. The park worker then asks the parent to open the app and press the "Reunited with child" button.




What if facial recognition fails?
If facial recognition fails to locate the child, park guests and workers within a certain radius of the parent are notified of the missing child. They see this alert on their phones and can then join the search for the child. 




Guest calls parent if they find the child
When a guest finds a child, they can open the app and call (or radio) the parent and reunite them with the kid. 



High Fidelity UI Design


Map UI for locating a child
  • This is the screen a worker sees in the Park app when looking for a missing child. From this screen, the user can swipe or pull down to get back to the Park app dashboard. (A rough sketch of the data behind this screen follows the list below.)
  • I used the UI patterns of the Google Maps iOS app for this design
  • The blue dot is the location of the user (the park worker)
  • The red dot is the location of the missing child
  • The time on the red alert is how much time has elapsed since the child was reported missing
  • The picture can be tapped to make it larger
  • User can call the parent from this screen
  • User can see more info about the parent by pressing parent name
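
As referenced in the first bullet above, here is a rough sketch of the data this map screen might need in order to render everything listed. The structure and field names are assumptions I am making for illustration only, not part of the actual design.

    from dataclasses import dataclass

    # Hypothetical payload behind the worker's map screen; field names are assumptions.
    @dataclass
    class MissingChildAlert:
        child_name: str
        child_photo_url: str          # the photo, which can be tapped to enlarge
        child_location: tuple         # (lat, lon) of the camera that identified the child (red dot)
        worker_location: tuple        # (lat, lon) of the worker viewing the map (blue dot)
        reported_missing_at: float    # unix timestamp when the child was reported missing
        parent_name: str              # tapping the name shows more info about the parent
        parent_phone: str             # lets the worker call the parent from this screen

    def elapsed_label(alert, now):
        """Text for the red alert banner: minutes since the child was reported missing."""
        minutes = int((now - alert.reported_missing_at) // 60)
        return f"{minutes} min"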



Conclusion

Although the design sprint itself took about 3 hours, a significant amount of time also went into preparing for the sprint, documenting the process through photos and written copy, and presenting the solution.
Some general principles I kept in mind throughout the design process:
  • Focus on the user
  • Explore lots of ideas quickly
  • Use proven design processes