AI-generated caricatures gain popularity across social platforms
Viral AI caricature trends raise cybersecurity concerns as experts warn that sharing personal context can enable targeted fraud and social engineering.
AI-powered caricature creation has emerged as a fast-growing trend across major social media platforms, with users prompting generative tools to produce illustrated versions of themselves based on personal photos and contextual information. These images often depict individuals in professional or domestic settings, drawing on details such as occupation, lifestyle, or interests to create highly personalised visual outputs. The format has gained particular traction on Instagram, TikTok, and LinkedIn, where such posts are positioned as light-hearted expressions of identity and creativity.
Unlike traditional photo filters or basic image edits, these AI-driven prompts frequently rely on extensive contextual input. Users are encouraged, either explicitly or implicitly, to provide background information so the system can generate more “accurate” or “authentic” results. In many cases, this extends beyond a single image to include workplace references, daily routines, locations, and personal relationships. The appeal lies in seeing a stylised portrait that reflects both appearance and life narrative, rather than a purely aesthetic transformation.
However, cybersecurity specialists warn that the underlying mechanics of this trend introduce risks that may not be immediately visible to users. The same contextual richness that enhances image quality also creates a detailed digital profile, one that can be repurposed beyond its original creative intent. As generative AI tools increasingly combine image, text, and behavioural data, the line between playful self-expression and unintentional data disclosure becomes harder to discern.
Personal context becomes fuel for sophisticated social engineering
According to experts at Kaspersky, the trend exposes users to heightened risks of identity impersonation and targeted fraud. Each piece of personal context shared with an AI system contributes to a broader profile that can reveal habits, professional responsibilities, and social connections. When aggregated, this information can be exploited by cybercriminals to design scams that feel credible and personalised rather than generic.
The risk lies in how convincingly such data can be reused. A phishing attempt that references a specific employer, job title, or family member is more likely to gain trust than a mass-distributed message. As attackers increasingly rely on social engineering rather than technical exploits, contextual accuracy becomes a powerful tool. AI-generated caricatures, built on detailed prompts, effectively surface the same information attackers seek to extract through reconnaissance.
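To make that aggregation concrete, the short Python sketch below illustrates the point. Every name, employer, and prompt in it is invented for illustration; the sketch simply shows how details scattered across separate, individually harmless caricature prompts combine into the kind of profile an attacker would otherwise have to assemble through reconnaissance.

```python
# Illustration only: all values are fictional. Each caricature prompt
# leaks a detail or two; merged, they read like reconnaissance notes.

profile = {}

# "Draw me as a cartoon accountant at Acme Corp"
profile.update({"occupation": "accountant", "employer": "Acme Corp"})

# "Me walking my dog near the marina every morning before work"
profile.update({"routine": "morning dog walk near the marina"})

# "Caricature of me and my daughter baking on weekends"
profile.update({"family": "has a young daughter"})

# A phishing message referencing any two of these fields is already
# far more credible than a generic mass mailing.
for field, detail in profile.items():
    print(f"{field}: {detail}")
```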
This concern is particularly pronounced in the Asia Pacific region, where AI adoption is accelerating rapidly: 78% of professionals in the region use AI tools on a weekly basis, compared with a global average of 72%. Technical literacy, however, does not always advance at the same pace. This imbalance creates conditions in which sophisticated scams can succeed, especially when users underestimate how their data may be repurposed. The ease of participation and the social validation that comes with sharing such images further reduce friction, making consistent caution less likely.
Data persistence and security implications for users
Beyond the immediate output of a caricature, interactions with generative AI platforms often involve broader data collection. Depending on the service, this can include original images, prompt text, usage history, and certain technical identifiers such as device information or interaction patterns. While some data is retained to operate and improve the service, users may not always be aware of how long their information is stored or whether it contributes to future model training.
Kaspersky cautions that this persistence increases exposure over time. Content shared in a moment of experimentation may remain accessible well beyond its perceived lifespan, expanding the window in which it could be misused. As AI platforms evolve, previously submitted data may also gain new relevance or value, particularly when combined with information from other sources.
“This viral trend of caricature creation of our lives may seem like harmless fun, but it is effectively a voluntary briefing for cybercriminals. Every time users in APAC prompt an AI with details about themselves just to see a clever illustration, they are handing over the blueprints for a perfect social engineering attack,” said Adrian Hia, Managing Director for Asia Pacific at Kaspersky.
“In a region where AI adoption is leading the world but technical literacy is still catching up, these digital portraits are becoming dangerous maps. We are essentially giving scammers the ‘context’ they need to turn a generic phishing email into a highly convincing, personalised scam that can bypass even a cautious user’s defences,” Hia added.
Security experts advise a more restrained approach when engaging with such tools. Limiting identifiable details in prompts, avoiding images that reveal organisational affiliations or physical locations, and excluding information about family members can reduce exposure. Reviewing platform privacy policies and understanding how data may be retained or reused is also critical. Complementary digital protection tools can help mitigate risks from malicious links, phishing attempts, and other attack vectors associated with personalised scams, but they do not replace the need for informed user behaviour.
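As a practical illustration of the first recommendation, the Python sketch below shows one way a user or tool could flag obviously identifying details in a prompt before submitting it. The regex patterns and categories are assumptions chosen for this example, not any vendor's actual safeguard.

```python
import re

# Hypothetical pre-submission check: flag identifying details in a
# prompt before it is sent to a generative tool. Patterns are
# deliberately simple and illustrative.
PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone number": re.compile(r"\+?\d[\d\s-]{7,}\d"),
    "employer reference": re.compile(
        r"\b(?:work(?:s|ing)? at|employed by)\b", re.IGNORECASE
    ),
    "family reference": re.compile(
        r"\bmy (?:son|daughter|wife|husband|mother|father)\b", re.IGNORECASE
    ),
}

def flag_identifying_details(prompt: str) -> list[str]:
    """Return the categories of identifying detail found in a prompt."""
    return [label for label, rx in PATTERNS.items() if rx.search(prompt)]

if __name__ == "__main__":
    prompt = "Draw me as a cartoon: I work at Acme Corp and my daughter loves dinosaurs"
    for warning in flag_identifying_details(prompt):
        print(f"Consider removing: {warning}")
```

A pattern check like this catches only the most obvious identifiers; the safer habit, as the experts note, remains omitting such details from prompts altogether.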