Image: VectorMine / Adobe Stock

Amid the unprecedented volumes of e-commerce since 2020, the number of digital payments made every day around the world has surged, reaching roughly $6.6 trillion in value last year, a 40% jump in two years. With all this money moving across the world’s payment rails, cybercriminals have more incentive than ever to invent new ways to intercept it.

Ensuring payment security today requires sophisticated game theory skills to outmaneuver highly advanced criminal networks, which are on track to inflict up to $10.5 trillion in cybercrime damage, according to a recent report by Argus Research. Payment processors around the world are in a constant match against fraudsters, continually raising their game to protect customers’ money. The target never stops moving, and fraudsters keep growing more sophisticated. Staying ahead of fraud means companies must keep evolving their security models and techniques; the work is never finished.

SEE: Password breach: Why pop culture and passwords don’t mix (free PDF) (TechRepublic)

One truth remains: there is no surefire way to reduce fraud to zero short of halting online business altogether. The key to minimizing fraud, however, lies in maintaining a careful balance: enforcing smart business rules, complementing them with machine learning, defining and refining data models, and hiring intellectually curious staff who constantly question the effectiveness of current security measures.

An era of deepfakes is coming

As powerful new computational methods built on tools such as deep learning and neural networks are developed and iterated, their uses keep multiplying, both benevolent and malicious. One practice that has made its way into recent mass-media headlines is the deepfake, a portmanteau of “deep learning” and “fake.” Its implications for potential security breaches and losses in both the banking and payments industries have become a hot topic. Deepfakes, which can be difficult to detect, now rank as the most dangerous AI-enabled crime of the future, according to researchers at University College London.

Deepfakes are artificially manipulated images, videos and audio in which a subject is convincingly replaced with someone else’s likeness, creating a high potential for fraud.

These deepfakes unsettle some viewers with their near-perfect reproduction of a subject.

Two striking deepfakes that received wide coverage are a Tom Cruise deepfake brought into the world by Chris Ume (VFX and AI artist) and Miles Fisher (a famous Tom Cruise impersonator), and a deepfaked young Luke Skywalker created by Shamook (deepfake artist and YouTuber) and Graham Hamilton (actor) in a recent episode of The Book of Boba Fett.

Although these examples mimic their intended subjects with alarming accuracy, it is important to note that current technology still requires a skilled impersonator, trained in the subject’s inclinations and mannerisms, to pull off a convincing fake.

Without a similar bone structure and the subject’s trademark movements and phrases, even today’s most advanced AI would be hard-pressed to make a deepfake perform believably.

In the case of Luke Skywalker, for example, AI was used to recreate Luke’s voice as it sounded in the 1980s. Respeecher worked from hours of archival recordings of original actor Mark Hamill’s voice from that era, yet some fans still found the resulting speech hollow and “Siri-like.”

On the other hand, without prior familiarity with these defining nuances of the person being reproduced, most people would find it difficult to distinguish such deepfakes from the real thing.

Fortunately, machine learning and modern AI work on both sides of this game and are powerful tools in the fight against fraud.

Security vulnerabilities in payment processing today

While deepfakes pose a significant threat to authentication technologies, including facial recognition, they currently give fraudsters fewer openings in payment processing itself. Because payment processors run their own implementations of machine learning, business rules and models to protect customers from fraud, cybercriminals have to work hard to find gaps in the protection of the payment rails, and those gaps shrink as each merchant builds up more customer relationship history.

The ability of financial companies and platforms to “know their customers” has become even more important with the rise of cybercrime. The more the payment processor knows about past transactions and behavior, the easier it is for automated systems to confirm that the next transaction matches the appropriate model and is likely to be authentic.

Automated fraud detection in these cases weighs a large number of variables, including transaction history, transaction value, location and past reversed payments, and it does not depend on a person’s identity in a way that a deepfake could exploit.
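
To make this concrete, here is a minimal, hypothetical Python sketch of how an automated system might weigh such variables into a single risk score. The Transaction fields, thresholds, weights and expected-markets set below are illustrative assumptions, not any processor’s actual model.

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    amount: float           # transaction value in USD
    country: str            # where the transaction originates
    account_age_days: int   # length of the customer relationship
    past_chargebacks: int   # previously reversed payments on the account
    avg_past_amount: float  # mean value of the customer's past transactions

def fraud_risk_score(tx: Transaction) -> float:
    """Toy rule-based risk score in [0, 1]; higher means riskier.
    Weights and thresholds are placeholders for illustration only."""
    score = 0.0
    if tx.past_chargebacks > 0:
        score += 0.35   # reversed payments are a strong fraud signal
    if tx.account_age_days < 30:
        score += 0.25   # little relationship history to match against
    if tx.avg_past_amount > 0 and tx.amount > 5 * tx.avg_past_amount:
        score += 0.25   # value far outside the customer's usual pattern
    if tx.country not in {"US", "BR", "MX"}:   # hypothetical expected markets
        score += 0.15
    return min(score, 1.0)

tx = Transaction(amount=900.0, country="US", account_age_days=12,
                 past_chargebacks=1, avg_past_amount=60.0)
print(fraud_risk_score(tx))  # 0.85: new account, chargeback, unusual value
```

A production system would typically blend rules like these with a trained machine learning model rather than rely on hand-set weights.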

The greatest deepfake fraud risk for payment processors lies in manual review operations, especially in cases where the transaction value is high.

In a manual review, fraudsters can seize the opportunity to use social engineering techniques, using digitally manipulated media to trick the people performing the check into believing that the transactor has the authority to carry out the transaction.

And, as reported by The Wall Street Journal, these attacks can unfortunately be very effective: fraudsters have even used deepfaked audio to impersonate a CEO and defraud a UK-based company of nearly a quarter of a million dollars.

Because the stakes are high, there are several ways to close fraud loopholes in general while staying ahead of scammers’ deepfake attempts at the same time.

How to prevent losses from deepfakes

There are sophisticated methods for debunking deepfakes that apply a variety of checks to spot telltale errors.

For example, because the average person does not post pictures of themselves with their eyes closed, bias in the selection of source images used to train the AI behind a deepfake can cause the fabricated subject not to blink, to blink at an abnormal rate, or to mangle the composite facial expression involved in blinking. The same bias can affect other aspects of a deepfake, such as negative expressions, since people tend not to post those kinds of emotions on social media, a common source of AI training material.

Other ways to identify today’s deepfakes include spotting lighting problems; weather that contradicts the clip’s presumed location; inconsistencies in the media’s time code; or mismatches between the artifacts introduced by capturing, recording or encoding the video or audio and the camera, recording equipment or codecs supposedly used.
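
As a sketch of the blink-rate check described above, the snippet below assumes per-frame eye aspect ratio (EAR) values have already been extracted by a facial-landmark pipeline (dlib and MediaPipe are common choices). The thresholds and the “normal” blink range are illustrative assumptions, and a flagged clip warrants closer inspection, not an automatic verdict.

```python
def count_blinks(ear_series, closed_thresh=0.21, min_closed_frames=2):
    """Count blinks in a sequence of per-frame eye aspect ratio (EAR)
    values; EAR drops sharply while the eyes are closed."""
    blinks, closed_run = 0, 0
    for ear in ear_series:
        if ear < closed_thresh:
            closed_run += 1
        else:
            if closed_run >= min_closed_frames:
                blinks += 1
            closed_run = 0
    if closed_run >= min_closed_frames:  # clip ends mid-blink
        blinks += 1
    return blinks

def blink_rate_suspicious(ear_series, fps=30.0, normal_range=(8, 30)):
    """Flag clips whose blinks-per-minute rate falls outside a typical
    human range; a crude heuristic, not a deepfake verdict."""
    minutes = len(ear_series) / fps / 60.0
    if minutes == 0:
        return True
    rate = count_blinks(ear_series) / minutes
    low, high = normal_range
    return not (low <= rate <= high)
```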

While these techniques work today, deepfake technology is rapidly approaching a point where it can fool even these kinds of validation.

The best processes for combating deepfakes

While deepfakes cannot yet reliably fool other AIs, the best current options for combating them are:

  • Improve training for manual reviewers, or deploy AI-assisted authentication, to better detect deepfakes. This is only a short-term technique while errors are still detectable: for example, looking for blinking errors, artifacts, repetitive pixels, or trouble producing negative expressions.
  • Gather as much information about merchants as possible to make better use of KYC (know your customer). For example, take advantage of services that scan the deep web for potential data breaches affecting customers, and flag those accounts to monitor for potential fraud.
  • Prefer multifactor authentication methods. For example, consider combining 3-D Secure (three-domain secure) protection with authentication tokens, a password and a one-time code.
  • Standardize security methods to reduce the frequency of manual reviews (a minimal routing sketch follows this list).
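
A minimal sketch of that last point, reusing a risk score like the one earlier in this article: standardized automated checks handle most traffic, step-up multifactor authentication covers the middle ground, and manual review is reserved for rare high-risk, high-value cases. All thresholds are placeholder assumptions.

```python
def route_transaction(risk_score: float, amount: float, mfa_passed: bool) -> str:
    """Illustrative routing policy that minimizes manual review,
    the point where deepfake-assisted social engineering can strike."""
    if risk_score < 0.3:
        return "approve"        # low risk: no added friction
    if risk_score < 0.7:
        # medium risk: require multifactor authentication (for example,
        # 3-D Secure plus a one-time code) instead of a human reviewer
        return "approve" if mfa_passed else "decline"
    if amount > 10_000:
        return "manual_review"  # only rare high-risk, high-value cases
    return "decline"
```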

Three “best practices” for security

In addition to these methods, several security practices should help immediately:

  • Hire intellectually curious staff to lay the foundation of a secure system, creating an environment of rigorous testing, retesting and constant questioning of the effectiveness of current models.
  • Establish a control group to help you assess the impact of fraud-prevention measures, providing peace of mind and relative statistical certainty that current practices are effective.
  • Apply continuous A/B testing with phased rollouts, increasing a new model’s share of traffic in small steps until it proves effective (see the sketch after this list). This ongoing testing is crucial to maintaining a strong system and beating fraudsters at their own game with computer-based tools.
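
As an illustration of phased A/B testing, this hypothetical sketch deterministically buckets traffic between the current (“champion”) model and a new (“challenger”) model, so the challenger’s share can be ramped up in small steps. The bucketing scheme and the acceptance criterion are assumptions; a real rollout would apply a proper statistical significance test before each ramp-up.

```python
import hashlib

def pick_model(user_id: int, challenger_share: float) -> str:
    """Stable bucketing: each customer always sees the same model for a
    given share, so results stay comparable as the share grows."""
    digest = hashlib.sha256(f"fraud-model-rollout:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF
    return "challenger" if bucket < challenger_share else "champion"

def ready_to_ramp_up(champion_fraud_rate: float, challenger_fraud_rate: float,
                     min_improvement: float = 0.05) -> bool:
    """Advance the rollout (e.g., 1% -> 5% -> 25% of traffic) only if the
    challenger's observed fraud rate is measurably lower."""
    return challenger_fraud_rate <= champion_fraud_rate * (1 - min_improvement)
```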

Endgame (for now) against deepfakes

The key to reducing deepfake fraud today lies primarily in limiting the circumstances in which manipulated media can play a role in transaction validation. That means evolving anti-fraud tooling to limit manual reviews, and constantly testing and refining the toolkit to stay ahead of well-funded global cybercrime syndicates, one day at a time.

About the author

Ram Rajaram, vice president of operations and data at EBANX, is an experienced financial services professional with extensive expertise in security and analytics, having held leading roles at companies including American Express, Grab and Klarna.

