
Comment

Humans must keep control of artificial intelligence

China Daily | Updated: 2026-03-02 00:00

Editor's note: There are growing concerns about the risks of artificial intelligence agents. Wu Hequan, an academician at the Chinese Academy of Engineering, spoke to China Economic Times about how to mitigate them. Below are excerpts of the interview. The views don't necessarily represent those of China Daily.

Large AI models are now widely adopted, demonstrating strong capabilities in answering questions, generating documents and more. But they are essentially language models that rely primarily on statistical computation to generate the most probable output. They lack genuine perception and understanding of the physical world, and are prone to "hallucinations".

More importantly, they struggle when deployed in real industrial scenarios tasked with completing practical assignments.

AI agents have emerged to address these limitations. They possess two key capabilities — closed-loop iteration and toolchain integration. AI agents can form experiential memory through continuous practice, summarize feasible paths to complete tasks, and avoid repeating mistakes. Also, they can access external databases and other models, which makes them more accurate and reliable in handling single tasks.
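The two capabilities described above can be sketched in a few lines of hypothetical Python. The class and method names (`Agent`, `run_task`, the tool dictionary) are illustrative, not any real agent framework:

```python
# Minimal sketch of closed-loop iteration plus toolchain access:
# the agent records each attempt in experiential memory and avoids
# retrying approaches that have already failed.

class Agent:
    def __init__(self, tools):
        self.tools = tools    # external tools/databases the agent may call
        self.memory = []      # experiential memory built from past attempts

    def run_task(self, task, max_attempts=3):
        """Try a task, recording each outcome so that failed
        approaches are not repeated (closed-loop iteration)."""
        for _ in range(max_attempts):
            # tools that already failed on this task, per memory
            tried = {m["tool"] for m in self.memory
                     if m["task"] == task and not m["ok"]}
            for name, tool in self.tools.items():
                if name in tried:
                    continue  # avoid repeating a known mistake
                result = tool(task)
                ok = result is not None
                self.memory.append({"task": task, "tool": name, "ok": ok})
                if ok:
                    return result
        return None
```

On a second call with the same task, the agent consults its memory and skips the tool that failed the first time, which is the "avoid repeating mistakes" behavior the interview describes.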

Their autonomy and ability to collaborate make them more than just tools — they have become new collaborative partners for humans. And while humans are good at high-level decision-making, emotional judgment and innovative thinking, AI agents are good at repetitive work, multi-resource coordination and accurate execution. Furthermore, AI agents, being lightweight, can be deployed on mobile phones, wearables and other devices, making it easy for people to benefit from their collaboration in everyday life and at work.

At present, however, the development of AI agents faces multiple challenges.

First, AI agents lack effective mechanisms to verify what they have learned, easily absorbing false information and forming flawed reasoning logic. Also, collaboration among different agents may lead to conflicts in decision-making and competition for resources, even causing errors and undermining task execution.

Second, the autonomy of AI agents creates risks of overstepping authorized boundaries and acting without human approval. Once problems occur, it will be extremely difficult to determine liability, as the causes may involve multiple aspects, including large model training, the use of toolchains, the agents' memory biases and even ambiguous human instructions.

Third, AI agents, which can utilize backend resources such as apps, databases and toolchains, pose a systemic security risk if hacked. Meanwhile, the agents' in-depth access to user behavior and privacy means that improper data management can easily result in privacy breaches.

The core approach to address these challenges is to improve governance, and use regulations and technological measures to oversee the use of AI agents.

First, a permission system should be established for AI agents, with clearly defined scopes of authority. For example, full permission may be allowed for simple information queries, while strict access thresholds should be set for operations involving finance, privacy and other high-risk areas.
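As a rough illustration (the action names and risk tiers below are assumptions, not from the interview), such a tiered permission check might look like:

```python
# Hypothetical tiered permission check: read-only queries pass freely,
# while financial or privacy-sensitive actions require explicit human
# approval. Unknown actions default to the highest-risk tier.

RISK_TIERS = {
    "query": "low",            # simple information lookup
    "send_email": "medium",
    "transfer_funds": "high",  # finance: strict threshold
    "read_contacts": "high",   # privacy: strict threshold
}

def authorize(action, approved_by_human=False):
    tier = RISK_TIERS.get(action, "high")  # fail closed for unknown actions
    if tier == "low":
        return True
    return approved_by_human  # medium/high tiers need explicit approval
```

Defaulting unknown actions to "high" follows the fail-closed principle: an agent can never gain a permission merely because no one thought to classify the action.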

Second, the audit mechanism should be improved to record all the behaviors of an AI agent, including the decision-making process, the use of toolchains and memory updates, enabling rapid tracing to identify those who should be held accountable should problems occur.
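A minimal sketch of such an append-only audit trail, with illustrative field names (the schema is an assumption, not a standard):

```python
# Append-only audit log: every agent action is recorded with a
# timestamp so behavior can be traced after an incident.

import time

class AuditLog:
    def __init__(self):
        self._records = []

    def record(self, agent_id, event, detail):
        """Append one entry; nothing is ever modified or deleted."""
        entry = {
            "ts": time.time(),
            "agent": agent_id,
            "event": event,   # e.g. "decision", "tool_call", "memory_update"
            "detail": detail,
        }
        self._records.append(entry)
        return entry

    def trace(self, agent_id):
        """Return the full recorded history for one agent, in order."""
        return [r for r in self._records if r["agent"] == agent_id]
```

Because entries are only ever appended, the `trace` of a single agent reconstructs its decision-making, tool use and memory updates in chronological order, which is exactly what accountability requires.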

Third, authorization should be granted in a gradual manner. This means starting with low-risk, simple tasks and gradually building trust, before expanding the scope of authorization.
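One way to sketch graduated authorization, with made-up trust levels and promotion thresholds (both are assumptions for illustration):

```python
# Graduated authorization: an agent starts at the lowest trust level
# and is promoted only after enough consecutive successes; any failure
# demotes it and resets accumulated trust.

LEVELS = ["low_risk_only", "medium_risk", "high_risk"]

class TrustTracker:
    def __init__(self, promote_after=5):
        self.level = 0          # index into LEVELS
        self.streak = 0         # consecutive successful tasks
        self.promote_after = promote_after

    def report(self, success):
        if success:
            self.streak += 1
            if self.streak >= self.promote_after and self.level < len(LEVELS) - 1:
                self.level += 1  # trust earned: widen the scope
                self.streak = 0
        else:
            self.level = max(0, self.level - 1)  # failure narrows the scope
            self.streak = 0

    def scope(self):
        return LEVELS[self.level]
```

The asymmetry is deliberate: trust is accumulated slowly over many successes but lost immediately on a single failure, mirroring the "start with low-risk, simple tasks" approach.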

Fourth, there should be communication protocols and interface standards for inter-agent collaboration to reduce decision-making conflicts. Regulations should also be introduced to determine the boundaries of permission and improve the consent mechanism for accessing user data.
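A shared message format is one thing such a protocol could standardize. In the sketch below (a schema assumed for illustration, not an existing standard), each message declares its sender, intent and the resources it would lock, so a coordinator can detect competing claims before they become runtime conflicts:

```python
# Minimal inter-agent message schema plus a conflict check: two
# messages conflict when they claim any of the same resources.

from dataclasses import dataclass

@dataclass
class AgentMessage:
    sender: str
    intent: str                          # e.g. "propose", "accept", "reject"
    resources: frozenset = frozenset()   # resources this action would lock

def find_conflicts(messages):
    """Return sender pairs whose messages claim a shared resource."""
    conflicts = []
    for i, a in enumerate(messages):
        for b in messages[i + 1:]:
            if a.resources & b.resources:
                conflicts.append((a.sender, b.sender))
    return conflicts
```

Surfacing resource contention at the message level lets a coordinator resolve it by scheduling or arbitration, rather than letting two agents discover the conflict mid-execution.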

There are also ways for users to retain control over AI agents. For example, they can require the agents to propose plans instead of executing the plans directly, with the final decisions resting with humans to reduce risks. Ultimately, humans should always retain control over their collaboration with AI.
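The propose-then-approve pattern described above can be sketched as follows (function names are illustrative):

```python
# Human-in-the-loop control: the agent only drafts a plan; nothing
# executes unless a human approval callback says yes.

def agent_propose(task):
    """Agent drafts a plan but performs no action itself."""
    return [f"step 1: gather data for {task}",
            f"step 2: draft output for {task}"]

def run_with_human_in_loop(task, approve):
    plan = agent_propose(task)
    if not approve(plan):       # the final decision rests with a human
        return None             # rejected: no side effects occur
    return [f"executed: {step}" for step in plan]
```

The key design choice is that execution is structurally impossible without the `approve` callback returning true, so control stays with the human by construction rather than by convention.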
