Google CEO Sundar Pichai speaks in conversation with Emily Chang during the APEC CEO Summit at Moscone West on November 16, 2023 in San Francisco, California. The APEC Summit is being held in San Francisco and runs through November 17.

Justin Sullivan | Getty Images News | Getty Images

MUNICH, Germany – Rapid advances in artificial intelligence could help strengthen defenses against cybersecurity threats, according to Google CEO Sundar Pichai.

Amid growing concern about the potentially malicious uses of AI, Pichai said AI tools can help governments and companies speed up the detection of – and response to – threats from hostile actors.

“We are right to be concerned about the impact on cybersecurity. But AI, I think, is actually, counterintuitively, strengthening our defenses in terms of cybersecurity,” Pichai told delegates at the Munich Security Conference late last week.

Cybersecurity attacks are growing in volume and sophistication as malicious actors increasingly use them as a way to exercise power and extort money.

Cyberattacks cost the global economy an estimated $8 trillion in 2023 – a sum expected to grow to $10.5 trillion by 2025, according to cyber research firm Cybersecurity Ventures.

A report published in January by Britain’s National Cyber Security Centre – part of GCHQ, the country’s intelligence agency – said that AI will only increase these threats, lowering the barrier to entry for cyber hackers and enabling more malicious cyber activity, including ransomware attacks.

“AI disproportionately helps the people defending themselves because you get a tool that can affect it at scale.”

Sundar Pichai

CEO of Google

However, Pichai said AI also shortens the time it takes defenders to detect and respond to attacks. He said this would ease what is known as the defenders’ dilemma, whereby hackers need to succeed against a system only once, while a defender must be successful every time to protect it.

“AI disproportionately helps the people defending themselves because you get a tool that can affect it at scale versus the people trying to exploit,” he said.

“So in a way we’re winning the race,” he added.

Last week, Google announced a new initiative offering AI tools and infrastructure investments designed to improve online security. A free, open-source tool called Magika aims to help users detect malware – malicious software – the company said in a statement, while a white paper proposes measures and research to create guardrails around AI.

Pichai said the tools are already being used in the company’s products, such as Google Chrome and Gmail, as well as in its internal systems.

“AI is at a definitive crossroads – one where policymakers, security professionals and civil society have the chance to finally tip the balance of cybersecurity from attackers to defenders,” the company said.

The release coincided with the signing of a pact by major companies at the MSC to take “reasonable precautions” to prevent AI tools from being used to disrupt democratic elections in 2024, a bumper election year, and beyond.

Adobe, Amazon, Google, IBM, Meta, Microsoft, OpenAI, TikTok and X, formerly Twitter, were among the signatories to the new agreement, which includes a framework for how companies should respond to AI-generated “deep fakes” designed to mislead voters.

It comes as the internet becomes an increasingly important sphere of influence for both individuals and state-sponsored malicious actors.

Former US Secretary of State Hillary Clinton on Saturday described cyberspace as a “new battlefield”.

“The technological arms race has just gone up another notch with generative AI,” she said in Munich.

“If you can run a little faster than your opponent, you’ll do better. That’s what AI really gives us defensively.”

Mark Hughes

President of Security at DXC

A report published last week by Microsoft found that state-backed hackers from Russia, China and Iran have been using OpenAI’s large language models (LLMs) to improve their efforts to deceive targets.

Russian military intelligence, Iran’s Revolutionary Guard, and the governments of China and North Korea are said to have relied on the tools.

Mark Hughes, president of security at IT services and consulting firm DXC, told CNBC that bad actors are increasingly relying on a ChatGPT-inspired hacking tool called WormGPT to perform tasks such as reverse engineering code.

However, he said he also sees “significant benefits” from similar tools, which help engineers detect and reverse-engineer attacks quickly.

“It gives us an opportunity to accelerate,” Hughes said last week. “Most of the time in cyber, the attackers have a time advantage over you. That’s often the case in any conflict situation.

“If you can run a little faster than your opponent, you’ll do better. That’s what AI really gives us defensively right now,” he added.


https://www.cnbc.com/2024/02/23/ai-can-help-defend-against-cybersecurity-threats-google-ceo-sundar-pichai.html