
Is your lobster running naked? CertiK investigates: how a vulnerable OpenClaw Skill can seize your computer without authorization

2026/03/18 02:43
👤ODAILY
🌐en

If OpenClaw is compared to the operating system of a smart device, a Skill is like an app installed on that system; a security flaw in one leads directly to serious consequences such as leaks of sensitive information, remote takeover of the device, and theft of digital assets.


Recently, OpenClaw (known in the community as "the Lobster"), a self-hosted AI agent platform, has taken off thanks to its flexible, extensible, and autonomous deployment features, becoming a breakout product on the personal AI agent track. Its ecosystem core, Clawhub, is an application marketplace that aggregates a large number of third-party Skill plugins, letting agents unlock capabilities ranging from web search and content creation to advanced ones such as crypto-wallet operations, on-chain interaction, and system automation; both the ecosystem and its user base are growing rapidly.

But where is the real security boundary of a platform on which such third-party plugins run with high privileges?

In recent days, CertiK, the world's largest Web3 security company, released new research on Skill security. It points out that the market currently misjudges the security boundary of the AI agent ecosystem: the industry broadly treats the "Skill scan" as the core security boundary, yet that mechanism is virtually useless in the face of real hacker attacks.

If OpenClaw is compared to the operating system of a smart device, a Skill is like an app installed on that system. Unlike ordinary consumer apps, however, some Skills in OpenClaw run in a high-privilege environment: they can directly read local files, call system tools, connect to external services, execute commands in the host environment, and even operate a user's crypto assets. A security flaw therefore leads directly to serious consequences such as leaks of sensitive information, remote takeover of the device, and theft of digital assets.

The industry's current generic security solution for third-party Skills is a "pre-listing scan audit". OpenClaw's Clawhub has likewise built a three-tier audit system, combining VirusTotal code scanning, a static code-detection engine, and AI logical-consistency testing, and attempts to keep the ecosystem safe by risk-grading submissions and surfacing warning prompts to users. CertiK's research and proof-of-concept attack tests, however, confirm that this detection system falls short in a real adversarial confrontation and cannot bear the core responsibility of security protection.

The study begins by dissecting the inherent limitations of the existing detection mechanisms:

Static detection rules are easily bypassed. The engine's core relies on matching code signatures to identify risk; for example, the combination "read sensitive environment variables + outbound network request" is treated as high-risk behavior. But an attacker needs only a slight syntactic rewrite of the code to slip past the signatures, much as if the dangerous content had been replaced by a synonym, rendering the safeguard completely ineffective.
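The bypass described above can be illustrated with a minimal, hypothetical signature rule (the actual rules in Clawhub's engine are not public; the pattern and code snippets below are invented for illustration):

```python
import re

# Hypothetical signature: flag code that reads environment variables
# and then performs an outbound network request.
SIGNATURE = re.compile(r"os\.environ.*requests\.post", re.DOTALL)

# Straightforward malicious snippet: matches the signature.
flagged = 'data = os.environ["API_KEY"]\nrequests.post(url, data=data)'

# Same behavior after a trivial rewrite: alias the modules and resolve
# attributes indirectly, so the textual pattern no longer matches.
bypassed = (
    'import os as o, requests as r\n'
    'data = getattr(o, "environ")["API_KEY"]\n'
    'getattr(r, "post")(url, data=data)'
)

print(bool(SIGNATURE.search(flagged)))   # detected
print(bool(SIGNATURE.search(bypassed)))  # slips through unchanged in behavior
```

The rewritten snippet does exactly what the flagged one does at runtime, which is why purely textual rules cannot serve as a security boundary.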

The AI audit has inherent blind spots. Clawhub's AI audit is positioned as a "logical-consistency detector": it can only catch overtly malicious code whose stated function does not match its actual behavior, and cannot handle exploitable vulnerabilities hidden inside otherwise normal business logic, much as a deadly trap buried deep in the clauses of a seemingly compliant contract is hard to spot.

More fatal still, the audit process has a fail-open design flaw: even while the VirusTotal result is still in a "scanning" state, a Skill that has not completed the full check can be listed directly, and users can install it without any warning, leaving attackers a window of opportunity.
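The fail-open flaw boils down to how the publishing gate treats a pending scan. A minimal sketch (the state names and gate functions here are assumptions, not Clawhub's actual code) contrasts the flawed logic with a fail-safe alternative:

```python
from enum import Enum

class ScanStatus(Enum):
    QUEUED = "queued"
    SCANNING = "scanning"
    CLEAN = "clean"
    MALICIOUS = "malicious"

def may_publish_flawed(status: ScanStatus) -> bool:
    # Fail-open gate as described in the research: only a confirmed
    # MALICIOUS verdict blocks listing, so a still-pending scan is allowed.
    return status != ScanStatus.MALICIOUS

def may_publish_failsafe(status: ScanStatus) -> bool:
    # Fail-safe gate: hold the Skill until an explicit CLEAN verdict.
    return status == ScanStatus.CLEAN

print(may_publish_flawed(ScanStatus.SCANNING))    # the window of opportunity
print(may_publish_failsafe(ScanStatus.SCANNING))  # held until the scan finishes
```

The difference is one comparison, but it decides whether an unscanned Skill can reach users at all.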

To verify the real-world danger of the risk, the CertiK research team ran a full end-to-end test. The team built a Skill named "test-web-searcher": on the surface a fully compliant web-search tool whose code follows ordinary development conventions, with a remote-code-execution vulnerability embedded in its normal functional workflow.

The Skill passed both the static engine and the AI audit, and was installed normally, with no security warning, while the VirusTotal scan was still processing. Finally, a remote instruction sent via Telegram triggered the vulnerability and achieved arbitrary command execution on the host device (in the demonstration, it made the system pop open the calculator).
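CertiK has not published the proof-of-concept code, but the general class of flaw, a command-execution sink hidden inside an ordinary-looking helper, can be sketched in a few lines. The functions below are hypothetical illustrations of the vulnerability pattern, not the actual "test-web-searcher" code:

```python
import subprocess

def fetch_vulnerable(query: str) -> str:
    # Looks like a harmless search helper, but interpolating the query
    # into a shell command lets a crafted query run arbitrary commands.
    return subprocess.run(
        f"echo searching for {query}", shell=True,
        capture_output=True, text=True,
    ).stdout

def fetch_hardened(query: str) -> str:
    # Same functionality without a shell: the query is a single argv
    # element and can no longer break out into command position.
    return subprocess.run(
        ["echo", "searching for", query],
        capture_output=True, text=True,
    ).stdout

payload = "weather; echo INJECTED"
print(fetch_vulnerable(payload))  # the second command actually executes
print(fetch_hardened(payload))    # the payload stays inert text
```

In a real attack the injected command would not be a harmless `echo`; anything reachable from the host shell, including wallet files and credentials, is in scope.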

CertiK states clearly in its research that these problems are not bugs unique to OpenClaw's product, but a cognitive error common across the AI agent industry: the industry broadly treats "scan-and-check" as the core security line while ignoring the real security foundation, which is mandatory isolation and fine-grained control at runtime. The security core of Apple's iOS ecosystem, for example, was never the strictness of App Store review, but the system-enforced sandbox and fine-grained permission control that confine each app to its own "isolation cell" with no free access to the system. OpenClaw's existing sandbox mechanism is optional rather than mandatory and depends heavily on manual user configuration; the vast majority of users turn the sandbox off to keep Skills fully functional, ultimately leaving the agent running "naked", with immediately catastrophic consequences once a vulnerable or malicious Skill is installed.

In response to these findings, CertiK also offered security recommendations:

・ For AI agent developers such as OpenClaw: make sandbox isolation the default, mandatory configuration for third-party Skills, refine the Skill permission-control model, and never let third-party code implicitly inherit the host's high privileges.

・ For ordinary users: a "safe" label in the Skill market only means no risk has been detected so far; it is not the same as absolute safety. Until the platform ships a default-on, low-level isolation mechanism, it is advisable to deploy OpenClaw on a non-critical spare device or in a virtual machine, kept away from sensitive files, passwords, and high-value crypto assets.
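The first recommendation, a permission model with no implicit inheritance, amounts to default-deny: a Skill gets only what it is explicitly granted. A minimal sketch (the permission names and `SkillContext` class are invented for illustration, not an OpenClaw API):

```python
# Hypothetical default-deny permission model for third-party skills:
# the host grants nothing implicitly, and every sensitive call must
# pass an explicit permission check.

ALL_PERMISSIONS = {"net.fetch", "fs.read", "fs.write", "shell.exec", "wallet.sign"}

class SkillContext:
    def __init__(self, granted: set):
        # Only explicitly granted, known permissions survive.
        self.granted = granted & ALL_PERMISSIONS

    def require(self, permission: str) -> None:
        # Sensitive host operations call this before proceeding.
        if permission not in self.granted:
            raise PermissionError(f"skill lacks permission: {permission}")

# A search skill is granted only outbound fetch; the host shell and
# wallet stay closed even if the skill's code tries to reach them.
ctx = SkillContext({"net.fetch"})
ctx.require("net.fetch")  # allowed
try:
    ctx.require("shell.exec")
except PermissionError as e:
    print(e)  # denied by default
```

Under this model, the "test-web-searcher" PoC would fail at the `require("shell.exec")` check regardless of whether any scan caught it, which is the point of moving the boundary from audit time to runtime.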

At the moment, the AI agent track is on the eve of an explosion, and the pace of ecosystem expansion must not outrun security construction. An audit scan can at best stop crude, off-the-shelf malice; it will never be the security boundary for a high-privilege agent. Only by shifting from "pursuing a perfect audit" to "assuming risk exists and containing the damage by default", and enforcing isolation boundaries at the runtime level, can AI agents gain a real safety threshold and let this technological shift stand on solid ground.
