PROVO, Utah (ABC4) – Do you worry about your phone being hacked and your personal information compromised? Well, there is a new algorithm out of Brigham Young University (BYU) that might ease those concerns. 

As with any advanced technology, hackers are always up to the challenge of unlocking your phone any way they can to gain access to your personal information. Because many people use their faces to unlock their phones, hackers may even attempt to gain access to your device while you are sleeping, or use a photo from social media to do the same.

BYU electrical and computer engineering professor D.J. Lee tells ABC4 that about two years ago, he was in a class with his students looking for a project to do. He says as they were brainstorming, they were looking at each other’s faces when the idea to use facial expressions as a security measure was born. 

Like every other human biometric identification system before it, such as fingerprints or retina scans, facial recognition can have security flaws. According to an article from BYU, Lee has found a better and more secure way to use your face to protect your privacy. 

Concurrent Two-Factor Identity Verification, C2FIV, requires both one’s facial identity and a specific facial motion to gain access to a device. To set it up, a user records a short one- to two-second video of either a unique facial motion or a lip movement from reading a secret phrase, Lee shares. The video is then input into the device, which extracts facial features and the features of the facial motion, storing them for later ID verification.
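The enrollment step described above can be sketched in a few lines of Python. Note this is only an illustration of the idea: the `extract_embedding` function here is a hypothetical stand-in for BYU's trained neural network, which is not publicly available.

```python
import numpy as np

def extract_embedding(video_frames):
    """Stand-in for a neural network that maps a short clip of frames
    to a fixed-length vector encoding both facial identity and the
    motion across frames. A real system would use a learned model."""
    # Hypothetical: simple per-frame statistics, flattened into one vector.
    stats = [(frame.mean(), frame.std()) for frame in video_frames]
    return np.array(stats).flatten()

def enroll(video_frames, store):
    """Compute and store the reference embedding for later verification."""
    store["reference"] = extract_embedding(video_frames)

# A 1-2 second clip at roughly 12-24 fps is a few dozen frames.
store = {}
frames = [np.random.rand(64, 64) for _ in range(24)]  # dummy grayscale frames
enroll(frames, store)
```

The key design point, per the article, is that enrollment captures motion over time rather than a single still image, which is why the input is a sequence of frames rather than one photo.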

“The biggest problem we can solve is to add a level of security to this verification process,” Lee shares with ABC4. 

He says it is easy for people to hack your phone through facial recognition technology. The new Concurrent Two-Factor Identity Verification method can address these concerns by taking things one step further, looking at both facial features and lip movements to recognize specific facial motions made by the user. 

“It has to have the movement,” Lee shares. He says it cannot be just an expression because the technology tracks the movement from beginning to end so it “must be a facial movement.” 

Lee says the identity verification process should be intentional. “If it is not intentional, it should not be unlocked,” Lee shares.  

So how does it work? According to BYU, C2FIV relies on an integrated neural network framework to learn facial features and actions concurrently. This framework models dynamic, sequential data like facial motions, where all the frames of a movement must be considered together. 

“Using this integrated neural network framework, the user’s facial features and movements are embedded and stored on a server or in an embedded device and when they later attempt to gain access, the computer compares the newly generated embedding to the stored one. That user’s ID is verified if the new and stored embeddings match at a certain threshold,” the BYU article shares. 

In their preliminary study, Lee and Ph.D. student Zheng Sun recorded 8,000 video clips from 50 subjects making facial movements such as blinking, dropping their jaw, smiling, or raising their eyebrows as well as many random facial motions to train the neural network. 

Then they created a dataset of positive and negative pairs of facial motions, assigning higher scores to the positive pairs. “Currently, with the small dataset, the trained neural network verifies identities with over 90% accuracy,” the article shares. 

Lee says they are confident the accuracy can be much higher with a larger dataset and improvements to the network.
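The positive/negative pair setup the study describes resembles standard contrastive training, in which a network is pushed to score matching pairs higher than mismatched ones. The toy objective below illustrates that idea; it is not BYU's actual training code, and the margin value is an assumption.

```python
import numpy as np

def pair_score(emb_a, emb_b):
    """Similarity score for a pair of embeddings; higher means the model
    believes both clips show the same user making the same motion."""
    return float(np.dot(emb_a, emb_b) /
                 (np.linalg.norm(emb_a) * np.linalg.norm(emb_b)))

def contrastive_loss(pos_pairs, neg_pairs, margin=0.5):
    """Hinge-style objective: positive pairs should score near 1,
    negative pairs should fall below the margin."""
    loss = 0.0
    for a, b in pos_pairs:
        loss += 1.0 - pair_score(a, b)               # pull matching pairs together
    for a, b in neg_pairs:
        loss += max(0.0, pair_score(a, b) - margin)  # push mismatches apart
    return loss

# A perfectly separated toy example incurs zero loss.
pos = [(np.array([1.0, 0.0]), np.array([1.0, 0.0]))]
neg = [(np.array([1.0, 0.0]), np.array([0.0, 1.0]))]
print(contrastive_loss(pos, neg))
```

In a real pipeline this loss would be minimized over the 8,000 recorded clips to train the embedding network, which is the role the integrated neural network framework plays in the study.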

Lee has filed a patent on the technology already, BYU shares. 

He says he feels many applications could add this kind of security to their systems. “We want to do something more,” Lee shares. “We can use facial motion to increase the security.” 

“Your face, with a very special facial movement, that will be definitely hard to hack.”