Deepfake Fraud Threatens CFOs: Protecting Corporate Finance

Multi-factor authentication and other precautions have become necessary as artificial intelligence enables ever more sophisticated fraud.

A frozen video call is usually blamed on poor service or some other external cause. But if you notice unusual white hairs around the edge of your CFO's beard just before the freeze, and the beard is black again when the call resumes seconds later, should you follow his instructions to transfer money?

Perhaps, but not without further verification. With the help of artificial intelligence applications, the caller may be a fraudster impersonating your CFO, in what is called a deepfake. For now, such fakes can still betray that something is wrong, and that momentary freeze may itself have been generated by AI.

“I was recently testing a platform with a feature designed to help hide artifacts, glitches, or synchronization problems,” says Perry Carpenter, chief human risk management strategist at KnowBe4, a security awareness and behavior change platform. “The program would freeze the video on the last good deepfake frame to protect the identity of the person running the deepfake. It is clear that some attackers are using adaptive strategies to reduce detection when deepfakes begin to fail.”


“There should almost never be an immediate need to transfer a large amount of money without verifying first.”

Perry Carpenter, Chief Human Risk Management Strategist, KnowBe4


How often these attacks succeed, or are even attempted, is unclear, because companies usually keep such information under wraps. In a major attack reported last year by CNN and other outlets, a finance employee in Hong Kong working for a UK engineering firm was suspicious of an email secretly requesting $25 million. He sent the money anyway after a video call with several people who looked and sounded like his colleagues, but who were in reality deepfakes.

In another incident reported by The Guardian last year, fraudsters used a publicly available photo of Mark Read, CEO of advertising giant WPP, to create a fake WhatsApp account. That account was used in turn to set up a Microsoft Teams meeting that employed a voice clone of the executive, impersonating him through a chat window to target a third senior executive in an attempt to solicit money and personal details.

A WPP spokesperson confirmed the accuracy of The Guardian's account but declined to explain how the fraud was thwarted, saying only that it was not something the company was keen to publicize.

Detecting deepfakes

Despite deepfake video being difficult to detect, audio and video delivered over social messaging platforms are still prone to errors, Carpenter says. While earlier deepfakes had obvious tells, such as face warping, unnatural blinking, or inconsistent lighting, the latest models have begun correcting these flaws in real time.

As a result, Carpenter does not often train clients to look for technical flaws, since that can create a false sense of security. “Instead, we need to focus on behavioral signals, inconsistencies in context, and other factors such as the use of heightened emotion to try to force a response or reaction,” he says.

Deepfakes' rapid development poses a particular risk for corporate finance departments, given their control of the money that fraudsters covet. Stuart Madnick, professor of information technology at the MIT Sloan School of Management, says there are various ways to proceed safely.

When finance executives who handle large money transfers know their counterparts well, they can test a voice or video caller by asking semi-personal questions. In one case, Madnick asked a supposed colleague for his opinion of “your brother Ben” when there was no such brother.

Madnick warns that being clever is not always a solution, though: “The problem is that the artificial intelligence will learn about all your brothers.” Ultimately, all companies should use multi-factor authentication (MFA), which strengthens security by requiring verification through multiple independent channels; most large companies have already implemented it widely. Even so, some important departments may still skip MFA for certain tasks, notes Katie Boswell, US AI security leader at KPMG, which leaves them vulnerable.
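The principle behind MFA is that approval must rest on a factor the fraudster cannot spoof over a call. As a minimal, hypothetical sketch (not any vendor's actual product), a payment-release check might require both an approved request record and a time-based one-time password (TOTP, per RFC 6238) generated from a secret shared only with the legitimate requester:

```python
import base64
import hashlib
import hmac
import struct
import time


def totp(secret_b32: str, at_time: int, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the current time-step counter."""
    key = base64.b32decode(secret_b32)
    counter = struct.pack(">Q", at_time // step)
    mac = hmac.new(key, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)


def release_payment(request_approved: bool, supplied_code: str,
                    secret_b32: str, at_time: int = None) -> bool:
    """Second factor: the transfer goes through only if the one-time code
    matches, independent of the (spoofable) voice or video channel."""
    now = int(time.time()) if at_time is None else at_time
    return request_approved and hmac.compare_digest(
        supplied_code, totp(secret_b32, now))
```

The design point is that a deepfaked caller who is perfectly convincing on video still cannot produce the code, because it never travels over the call. The secret below is the RFC 6238 test key, used here only for illustration.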

“It is important for company leadership to collaborate with their IT and technology teams to ensure that effective cybersecurity solutions, such as MFA, are in place for the areas most likely to be subjected to deepfake attacks,” she urges.

Perry Carpenter, Chief Human Risk Management Strategist, KnowBe4

Detecting multifaceted fraud

Even with MFA, devious fraudsters can mine social media and other online resources, using AI to generate authentic-looking invoices and other documents, as well as deepfake video and/or audio, to create back stories convincing enough to push executives into decisions they later regret. That makes it essential to condition executives who handle large sums of money to stop automatically when they receive unusual requests and to demand additional verification.

“There should almost never be an immediate need to transfer a large amount of money without verifying first through a known internal channel,” Carpenter says. Contacts who communicate via a private phone or email account are also a red flag, especially if they resist moving the conversation to the company's secure systems. “It is important for people to give themselves permission to stop and verify,” he says.

Although two or more checks help, companies must ensure that the verification channels themselves are secure. Madnick recalls a client who lost money when a fraudster passed a counterfeit check. The suspicious bank called the company's finance department to verify the transaction, but the fraudster had already instructed the phone company to redirect calls to a number where he verified the check himself.

“Companies can set up procedures with their phone company that require any call-redirection request to be confirmed with the company through further verification,” Madnick says. “Otherwise, it is at the phone company's discretion.”
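The lesson of the redirected-call incident is that callback numbers must come from a trusted internal directory, never from the request being verified. A minimal sketch of that rule, with hypothetical names and numbers invented for illustration:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical internal directory: the ONLY source of callback numbers.
# It is maintained out of band, not updated from incoming requests.
DIRECTORY = {
    "acme-bank": "+1-212-555-0100",
    "phone-carrier": "+1-800-555-0199",
}


@dataclass
class TransferRequest:
    counterparty: str
    amount: float
    callback_number: str  # whatever the requester supplies; treated as untrusted


def callback_number_for(request: TransferRequest) -> Optional[str]:
    """Return the directory number to call for verification, or None to escalate.

    The request's own callback_number is deliberately ignored: a fraudster
    who can supply or redirect phone numbers would otherwise "verify" his
    own check, as in the incident Madnick describes.
    """
    return DIRECTORY.get(request.counterparty)
```

A `None` result means there is no registered channel, so the request should be escalated rather than verified against anything the requester provided.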

Given corporate finance's attractiveness to scammers, KPMG's Boswell emphasizes the importance of keeping pace with emerging threats. Since CFOs and other senior finance leaders must focus on their pressing duties, they cannot be expected to follow the latest research on deepfake attacks. But companies can establish policies and procedures that ensure in-house or outside experts regularly brief finance staff on the latest types of attacks, both within the company and at others.

Madnick regularly asks corporate finance executives to raise their hands if they know their departments have faced cyberattacks. Many do not.

Katie Boswell, US AI Security Leader, KPMG

“The problem is that cyberattacks continue, on average, for more than 200 days before they are discovered,” he says. “So they may think they have not been attacked when they simply are not aware of it yet.”

Finance departments can also include deepfake scenarios in their risk assessments, including tabletop exercises, as part of the company's security initiatives. Employees should be encouraged to report even unsuccessful attacks, or what they suspect may have been attacks, which they might otherwise dismiss, Boswell recommends.

“That way, others in the organization realize it has been targeted and know what to look for,” she says.

In addition, although C-suite executives at large companies may need prominent public profiles, companies should limit the information available externally about lower-level executives and departments such as accounts payable. “Threat actors frequently use this type of information, often with the help of AI, to target victims through social engineering,” Boswell notes. “If they cannot access this data, they will not be able to incorporate it into their attacks.”

Such precautions have only become more important as deepfake scammers grow more skilled and extend their reach. While they are most widespread in major economies such as the United States and Europe, even countries with less commonly spoken languages are increasingly exposed.

“Most criminals may not know Turkish, for example, but what is great about these AI systems is that they can speak any language,” Madnick warns. “If I were a criminal, I would target companies in countries that have been targeted less in the past, because they may be less prepared.”
