
Hyderabad: Hyderabad police commissioner VC Sajjanar has warned the public against prompt injection in Artificial Intelligence (AI) chatbots which could leak information.
Sajjanar explained that malicious commands can be fed to AI chabots, which could leak corporate data and customer details. He further stated that companies are increasingly relying on them due to their benefits such as providing instant answers to customer queries, increasing work speed, and reducing costs. However, there is a new threat lurking behind this technology. That is ‘prompt injection’.
What is prompt injection
Generally, the commands given to AI to work are called ‘prompts’. Cybercriminals are turning these prompts into weapons.
“Cybercriminals are giving ‘malicious prompts’ (harmful commands) to mislead and trick the AI model. In short.. “deceiving AI with words”,” the commissioner explained.
Sajjanar further said that a ‘prompt injection attack’ is a way to confuse AI and extract internal documents, customer records, and system details that should not normally be disclosed.
A major challenge to data security
The commissioner reiterated that injection attacks on AI are a threat to data security. “Currently, many organisations are connecting their AI models and chatbots to key data systems within the organization (CRM data, helpdesk tickets, employee information, financial records),” he said.
He further explained that such information should not be visible to the end user even by mistake. However, a single ‘tricky prompt’ by hackers risks exposing all this confidential information.
Protection with ‘guardrails’
Sharing methods to prevent such injection attacks, he urged organisations to set up ‘prompt guardrails’ (protective shields). “Just one layer of security is not enough, a multi-layer defense approach should be followed,” he added.
Stressing on security, Sajjanar urged organisations to provide safety training to AI and impose hard guardrails to prevent it from providing unnecessary information at the model-level security. Establishment of a system to detect malicious prompts at prompt-level security.
Speaking of system level security, the Hyderabad police commissioner said, “There should be strict controls on the data and APIs that AI is given access to.” He urged organisations to conduct ecurity audits from time to time and restrict data access.
In an appeal to the public, Sajjanar said that If proper security measures are not taken, there is a risk that the operations of organisations will be paralyzed and valuable data will fall into the hands of criminals and be irreparably damaged.
