Registering for the Future of Time Clocks

Jordan Isbell
5 min read · Jan 5, 2021


My mom called me complaining about her company's time clock. They used a clunky punch-code system that often froze, causing her and her coworkers to clock in and out late. Her situation made me think.

We are in the year 2020! Literally living in the future and we're still using clunky punch-code systems?! No way.

Around the same time, SwipeClock (the company I work/design for) started working on the future of time clocks: facial recognition.

Goal

As the solo designer on the project (and at the company), I set to work with two Product Managers, one Developer, and one UI Architect. First, I needed to design the initial registration experience for a facial recognition time clock. I was given the following parameters by a PM:

1) The minimum score should be a 65% facial match, so we need to label scores below 65% as something like "bad."
2) We'll probably have 65–79% as one category and 80% and above as another. Feel free to decide what those labels are called (see the sketch after this list).
3) How many scan attempts (and resulting scores) does the user get?
4) If they're all under 65, how do we handle starting over or continuing to scan?
5) How do we show the user they're being scanned, and what is the associated score?
6) After we have at least one score above 65, what happens next?
7) How does the manager get back to the employee-select screen when they're done with that employee?
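To make those thresholds concrete, here's a minimal sketch of how they might translate into code. The label names ("bad," "good," "great"), the MatchQuality type, and the function name are placeholders of mine, since the PM left the naming open:

```typescript
// Hypothetical label names; the PM left the actual naming open.
type MatchQuality = "bad" | "good" | "great";

const MIN_PASSING_SCORE = 65; // minimum facial match required to register

function classifyScore(score: number): MatchQuality {
  if (score < MIN_PASSING_SCORE) return "bad"; // below 65%: must rescan
  if (score < 80) return "good";               // 65-79%: acceptable match
  return "great";                              // 80% and above: strong match
}

console.log(classifyScore(72)); // "good"
```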

With those parameters in mind, I set to work on the flow.

WTF?? (What’s the Flow)

Using old-fashioned pen and paper to design the future of time clocks

Take 1:

An employee's name would have been added to an external SwipeClock database prior to registration, so at this point, the user would first select the employee's name from a drop-down. They would then be given instructions on how to position their face toward the screen. The screen would indicate that the image was being captured and would show a score. If the score was 65 or higher, the picture would be saved and the user would receive confirmation along with a call to action to return home; if it was under 65, they would be required to reattempt.

After receiving feedback from a PM, I moved on to the next iteration in the form of Take 2.

Take 2:

Here, an employee first selects their name from a drop-down and clicks "Continue." They are then shown instructions on how to look at the screen. Once they click "Continue" again, they would ideally center their face in the frame. The screen shows a score, along with a graph below it as a "hotter/colder" visual, so they know how close they are to the goal of a 65+ score. Once successful, the user would see a success message and be prompted to return to the home screen.

As soon as I put this together, I had a strong gut feeling…

Showing a score is a bad user experience.

On a traditional scale of 1 to 100, a 65 typically reads as a poor score. To avoid that negative connotation, I decided to remove the actual score from the interface in favor of a pass/fail approach to capturing the facial scan. The developers still needed the score for internal use, while pass/fail created a better experience for the user.
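As a rough sketch of that split (assuming a hypothetical logScore helper standing in for whatever SwipeClock records internally), the raw score stays on the developer side and the interface only ever receives a pass/fail result:

```typescript
// Hypothetical internal logger; a stand-in for however SwipeClock
// actually records scores for the developers.
function logScore(employeeId: string, score: number): void {
  console.log(`[internal] employee=${employeeId} score=${score}`);
}

// The UI only ever sees pass/fail; the 65% threshold stays internal.
function evaluateScan(employeeId: string, score: number): boolean {
  logScore(employeeId, score); // developers keep the raw number
  return score >= 65;          // the user just sees pass or fail
}

evaluateScan("emp-042", 72); // logs the score internally, returns true
```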

Removed score and added progress bar

I added a progress bar to indicate to users how long to hold still while their picture was being taken. However, I soon learned capture was almost instantaneous, so the progress bar was not needed. Also, once the developer and I started playing with the hardware, we learned that the face indicator was too big and that the hardware focused on the eyes. I changed the face placement box to outline just the user's eyes.

End Result

After testing the eye placement indicator, we learned that the hardware did not, in fact, focus on just the eyes but attempted to capture the whole face. To allow for the highest success rate, I changed the face placement indicator back to a box: it starts white while the image displays in black and white, and once the user has captured a successful image, the box turns green and the image shows in color.

The instructions show on the screen until the user is ready and clicks a button to proceed, but if the user doesn't show their face clearly, the instructions reappear after 3 seconds. At that point, if the user is still unsuccessful, they have the option to reset. Once successful, the user sees a confirmation and clicks "Ok" to return to the home screen.
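Here's a minimal sketch of that timing logic, assuming hypothetical showInstructions and offerReset hooks in place of the clock's real screens:

```typescript
const INSTRUCTION_TIMEOUT_MS = 3000; // instructions reappear after 3 seconds

// Hypothetical UI hooks, standing in for the real clock's screens.
const showInstructions = () => console.log("Showing positioning instructions");
const offerReset = () => console.log("Offering the option to reset");

// Called once the user dismisses the instructions and scanning begins.
function watchForFace(faceDetected: () => boolean): void {
  setTimeout(() => {
    if (faceDetected()) return;          // face captured; nothing to do
    showInstructions();                  // no clear face: re-show instructions
    setTimeout(() => {
      if (!faceDetected()) offerReset(); // still unsuccessful: allow a reset
    }, INSTRUCTION_TIMEOUT_MS);
  }, INSTRUCTION_TIMEOUT_MS);
}

watchForFace(() => Math.random() > 0.5); // demo with a random "detector"
```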

Challenge

Working with the facial recognition software was challenging. Until we tried several designs on it, we really didn't understand exactly what it was capturing or what the timing felt like. Once we had played around with it several times, I was able to get a good grasp on it and design a facial recognition product that was technically sound and flowed well for users.

I initially felt bound to a numerical score, but once we established that it would stay internal and never be shown to users, the experience changed drastically. It was fun exploring empathy in this project and understanding how my designs might make users feel.

Summary

Wow, this project challenged me in the best way! I made mistakes early on while figuring out the flow, acknowledged them, kept consistent communication with the Product Managers, and moved forward to design a product that has been awesome for our users. The registration process for clocking in and out with facial recognition has been smooth and futuristic. Hopefully more companies catch on (*cough cough* Mom's company)!
