New Delhi (The Uttam Hindu): India’s cybersecurity authority, CERT-In, has issued a significant warning regarding the growing dependence on artificial intelligence (AI) applications, especially for crucial areas like medical and legal decisions. While AI is increasingly embedded in everyday life, CERT-In emphasizes that not all AI apps are secure, with many harboring flaws that cybercriminals can exploit.

The agency alerts users to the risk of fake AI apps created by cyber attackers, designed to deceive users into downloading malware that can steal personal information. To reduce these risks, CERT-In advises using secondary or anonymous accounts to protect privacy when engaging with AI platforms.

One of the primary concerns outlined by CERT-In is "data poisoning," a practice in which cybercriminals inject incorrect or biased data into an AI model's training set, causing it to produce flawed or harmful outputs. AI systems are also vulnerable to adversarial attacks and model inversion, techniques that can allow hackers to manipulate results or extract sensitive information.

CERT-In also addressed the issue of AI "hallucinations," in which AI systems produce fabricated or inaccurate information, especially when working from incomplete or flawed data. This becomes a serious concern when AI is relied upon for important decisions in high-stakes fields like law or healthcare. The advisory stresses that AI should be used for general content generation, not for critical decision-making that demands high accuracy and trustworthiness.

In closing, CERT-In urges users to approach AI apps with caution, avoid sharing sensitive information, and refrain from relying on these systems for important decisions.

The Uttam Hindu