
Sundar Pichai: AI can strengthen cyber defences, not just break them down

Private and public institutions must work together to harness the technology’s potential
The writer is chief executive of Google and Alphabet 
Last year saw rapid and significant technological change powered by progress in artificial intelligence. Millions of people are now using AI tools to learn new things, and to be more productive and creative. As progress continues, society will need to decide how best to harness AI’s enormous potential while addressing its risks. 
At Google, our approach is to be bold in our ambition for AI to benefit people, drive economic progress, advance science and address the most pressing societal challenges. And we’re committed to developing and deploying AI responsibly: the Gemini models we launched in December, which are our most capable yet, went through the most robust safety evaluations we’ve ever done. 
On Thursday, I visited the Institut Curie in Paris to discuss how our AI tools could help with their pioneering work on some of the most serious forms of cancer. On Friday, at the Munich Security Conference, I’ll join discussions about another important priority: AI’s impact on global and regional security. 
Leaders in Europe and elsewhere have expressed worries about the potential of AI to worsen cyber attacks. Those concerns are justified, but with the right foundations, AI has the potential over time to strengthen rather than weaken the world’s cyber defences.
Harnessing AI could reverse the so-called defender’s dilemma in cyber security, according to which defenders need to get it right 100 per cent of the time, while attackers need to succeed only once. With cyber attacks now a tool of choice for actors seeking to destabilise economies and democracies, the stakes are higher than ever. Fundamentally, we need to guard against a future where attackers can innovate using AI and defenders can’t.
To empower defenders, we began embedding researchers and AI approaches in Google cyber security teams more than a decade ago. More recently, we’ve developed a specialised large language model fine-tuned for security and threat intelligence. 
We’re seeing the ways AI can bolster cyber defences. Some of our tools are already up to 70 per cent better at detecting malicious scripts and up to 300 per cent more effective at identifying files that exploit vulnerabilities. And AI learns quickly, helping defenders adapt to financial crime, espionage or phishing attacks like the ones that recently hit the US, France and other places. 
That speed is helping our own detection and response teams, which have seen time savings of 51 per cent and have achieved higher-quality results using generative AI. Our Chrome browser examines billions of URLs against millions of known malicious web resources, and sends more than 3mn warnings per day, protecting billions of users. 
Empowering defenders also means making sure AI systems are secure by default, with privacy protections built in. This technical progress will continue. But capturing the full opportunity of AI-powered security goes beyond the technology itself. I see three key areas where private and public institutions can work together.
First, regulation and policy. I said last year that AI is too important not to regulate well. Europe’s AI Act is an important development in balancing innovation and risk. As others debate this question, it’s critical that the governance decisions we make today don’t tip the balance in the wrong direction. 
Policy initiatives can bolster our collective security — for example, by encouraging the pooling of data sets to improve models, or exploring ways to bring AI defences into critical infrastructure sectors. Diversifying public sector technology procurement could help institutions avoid the risks of relying on a single legacy supplier.
Second, AI and skills training, to ensure people have the digital literacy needed to defend against cyber threats. To help, we’ve launched an AI Opportunity Initiative for Europe to provide a range of foundational and advanced AI training. We’re also supporting innovative start-ups, like the Ukrainian-led company LetsData, which provides a real-time “AI radar” against disinformation in more than 50 countries. 
Third, we need deeper partnership among businesses, governments, and academic and security experts. Our Málaga safety engineering centre is focused on cross-collaboration that raises security standards for everyone. At the same time, global forums and systems — like the Frontier Model Forum and our Secure AI Framework — will play an important role in sharing new approaches that work. 
Protecting people on an open, global web is an urgent example of why we need a bold and responsible approach to AI. It’s not the only one. Helping researchers identify new medicines for diseases, improving alerts in times of natural disasters, or opening up new opportunities for economic growth are all just as urgent, and will benefit from AI being developed responsibly. Progress in all of these areas will benefit Europe, and the world.