It’s not just politics anymore
BY KEITH VINCENT
Fans of “The Mandalorian” were pleasantly surprised when a young Luke Skywalker was revealed in the final episode of Season Two. Older fans like me had to look twice. I knew the depiction of Skywalker in his prime was completely manufactured, but it looked so real.
Insurance and finance industries are encountering a sophisticated threat in the cybersecurity landscape that looks and sounds real as well. Like many things originally intended for good, artificial intelligence and deep learning have morphed into the engine behind deepfake technology, an insidious problem for these industries.
According to the Wall Street Journal, a scam involving an audio call to the CEO of a U.K.-based energy company succeeded in extracting approximately $243,000 from the firm. The voice, which was generated by artificial intelligence, sounded so real that the victim believed he was speaking with his superior at the parent company. He was directed to make an urgent transfer of funds to a supplier of the firm. Follow-up calls made the victim suspicious, so he declined to send more funds, but by that time it was too late to recover the initial transfer. According to the story, the CEO reported that he “recognized his boss’ slight German accent and the melody of his voice on the phone.” Although security experts had predicted this type of sophisticated cyberattack, it stood out at the time for its novelty and success.
Deepfakes are intentionally distorted videos, images or audio recordings that portray something fictitious or false, providing malicious entities with a novel and sophisticated social engineering tool. Technology innovations enable deepfakes to look and sound authentic and convincing, inviting abuse and misuse. Social engineering is the practice of leveraging human tendencies to produce a desired result; in this case, to commit a cybercrime.
Carnegie Endowment researcher Jon Bateman identifies the type of attack highlighted above as deepfake voice phishing, or simply vishing. Vishing leverages synthetic media to impersonate someone the victim trusts, and it highlights how deceptive artificial intelligence can be in the wrong hands.
Cybercriminals manipulate their victims, often by enticing them to click on a malicious file or hyperlink or to divulge information they would otherwise protect. Social engineering is a favorite of cybercriminals because humans are often too trusting and, under the right circumstances, easily manipulated. The average consumer of social media will be familiar with deepfakes from an entertainment and social sharing perspective, and online searches turn up plenty of interesting and beneficial use cases for artificial intelligence.
For example, in May 2019 three machine learning engineers at Dessa showcased a realistic artificial intelligence voice simulation of popular podcast host Joe Rogan. The demonstration is an outstanding example of how easily the lines between synthetic and real are blurred.
A cursory online search returns practical use cases such as text-to-speech and video editing. It is both impressive and astounding how a small sample of a person’s voice can create a realistic impersonation that can then be driven from a keyboard. With the current state of artificial intelligence, the process takes only about five minutes. A website that generated Dr. Jordan Peterson’s image and voice had to be taken down after Dr. Peterson threatened legal action, and Chinese tech firm Baidu claims it can produce a believable artificial voice with only 3.7 seconds of audio.
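To make that point concrete, the sketch below shows roughly how little effort voice cloning now requires. It assumes the open-source Coqui TTS library, which is not a tool named in this article, and the file names and script text are hypothetical:

    # A minimal voice-cloning sketch, assuming the open-source Coqui TTS
    # library (pip install TTS). "sample.wav" is a hypothetical short
    # recording of the target speaker.
    from TTS.api import TTS

    # Load a pretrained multilingual voice-cloning model
    tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

    # Synthesize arbitrary text in the sampled speaker's voice
    tts.tts_to_file(
        text="Please wire the funds to our supplier by end of day.",
        speaker_wav="sample.wav",
        language="en",
        file_path="cloned_voice.wav",
    )

A script this short is the entire barrier to entry: anyone with a few seconds of publicly posted audio and a laptop can attempt the kind of vishing described above.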
Deepfakes are considered a low risk to the stability of the global economy. Nevertheless, the risk of financial and reputational exposure to individuals and businesses is high. A recent study reports that personal banking and payment transfers are considered “most at risk of deepfake fraud, above social media, online dating and online shopping.” Financial institutions in general are obvious targets for cybercriminals because of the volume of assets and customer data they hold. The report outlines the deepfake impact on the financial services industry; areas of concern include onboarding processes, payment/transfer authorization, account hijacking, synthetic identities and impersonation, among others.
Cybercriminals target individuals and groups with a variety of techniques. Manipulated audio might be used to steal identity information. Synthetic video might depict an individual in a compromising position in order to extort payments. The Wall Street Journal scam outlined earlier is an example of payment fraud.
The same technologies can be leveraged to move businesses and markets. Synthetic voice, video or text can be used to defame a corporate leader, attack a brand or spread false information about an organization, any of which can inflict serious financial and reputational harm.
Insurance brokers and financial services consultants need to prepare their workforce to meet this credible threat by updating their security program with the following objectives:
● Awareness of the good use cases of artificial intelligence, deep learning and deepfakes, as well as their weaponization by malicious actors
● Process and procedure training to address critical functions such as onboarding, payment/transfer authorization, account monitoring, identification procedures, etc.
● Training on technology deployed to detect and eradicate deepfakes
● Cybersecurity awareness training to promote ongoing vigilance
Workers should be trained to handle ad-hoc urgent requests through a predefined authorization protocol, perhaps one requiring an approval chain so that authorization carries the appropriate checks and balances, as in the sketch below.
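As one illustration, the following sketch models such a checks-and-balances rule. It is a hypothetical example, not a specific product or any firm’s actual procedure; the threshold, approver count and addresses are all assumptions:

    # Hypothetical approval-chain check for urgent transfer requests.
    # Threshold and approver count are illustrative assumptions.
    from dataclasses import dataclass, field

    @dataclass
    class TransferRequest:
        amount: float
        requester: str
        approvals: set = field(default_factory=set)

    URGENT_THRESHOLD = 10_000.00   # above this, two independent approvers
    REQUIRED_APPROVERS = 2

    def is_authorized(req: TransferRequest) -> bool:
        # The requester can never approve their own transfer
        independent = req.approvals - {req.requester}
        needed = REQUIRED_APPROVERS if req.amount >= URGENT_THRESHOLD else 1
        return len(independent) >= needed

    # The U.K. scam amount: one phone call, however convincing, leaves
    # the request unauthorized until verified approvers sign off.
    req = TransferRequest(amount=243_000.00, requester="cfo@example.com")
    print(is_authorized(req))                  # False
    req.approvals.add("controller@example.com")
    print(is_authorized(req))                  # still False: one approver short
    req.approvals.add("treasurer@example.com")
    print(is_authorized(req))                  # True

The design point is that each approval should arrive through a separately verified channel, such as a callback to a number on file, so that a single convincing deepfake call can never satisfy the chain on its own.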
Particular attention must be paid to safeguarding brand reputation and the customer experience. When a breach occurs, the long-term effects of losing customer confidence and brand reputation can dwarf the short-term financial and systems damages. Insurance and financial services providers understand the trust consumers put in their products and the care taken to protect personal assets. Once that trust is gone it can rarely, if ever, be reclaimed.
Institutions that deploy effective training about deepfakes provide the heightened awareness, procedural discipline and hypervigilance needed to mitigate the risk of being compromised by a deepfake scheme.
KEITH VINCENT is a cybersecurity consultant for Technologent focusing on security platforms and programs as well as software-defined and traditional networking. His information technology career started in 1999 as a network administrator working for EMC Corporation. After taking on more senior roles, he earned his Cisco Certified Internetwork Expert R&S technical certification and has worked on many Fortune 500 accounts on large networking, security and data center projects.
Keith studied at the University of Redlands where he earned a Bachelor of Science in Business, going on to complete an Executive MBA at San Diego State University in 2012. Keith is currently attending SANS Technology Institute in the Graduate Certificate Program for Cybersecurity Engineering. For information, go to www.technologent.com.