Studio Ghibli AI trend: 4 key risks you need to be aware of

Searches for “ChatGPT Studio Ghibli” have skyrocketed by 1,200% over the past couple of weeks, as social media is flooded with AI-generated portraits mimicking the beloved art style of Hayao Miyazaki.
ChatGPT has seen huge success, with a record surge of 1 million new users in just one hour after launching its new image-generation feature.
But behind the buzz, do you really know how safe it is to upload your face to these AI tools? And could you unknowingly be stepping into a legal grey area when it comes to copyright and personal data?
Christoph C. Cemper, founder of AI prompt management company AIPRM, steps in to break down the hidden risks behind the trend and reveal what you need to know before jumping on the bandwagon – from potential copyright infringement to serious privacy concerns.
Below are four key risks you should be aware of before uploading your photo to AI tools.
1. Your face becomes data – and that data can travel
Essentially, when you upload a photo to an AI art generator, you’re giving away your biometric data (your face). Some AI tools store that data, use it to train future models, or even sell it to third parties – none of which you may be fully aware of unless you read the fine print.
So does ChatGPT store your data? Yes, it does. OpenAI’s privacy policy clearly outlines two types of data it collects: information you provide (personal details such as your name, email address, and any photos or images you upload), and automatically collected information (device data, usage data, and log data).
The reality is that the ‘innocent’ upload you make to turn portraits of your family, friends or partner into Ghibli-style art for fun could feed personal information into models used to fine-tune future results. Unless you actively opt out of ChatGPT’s training data collection or request deletion of your data via its settings, your images and details could be retained and used without your explicit consent.
2. Your image could contribute to the deepfake epidemic
Once your facial data is uploaded, it becomes vulnerable to misuse. Images shared on AI platforms could be scraped, leaked, or used to create deepfakes, identity theft scams, or impersonations in fake content. You could unknowingly be handing over a digital version of yourself that can be manipulated in ways you never expected.
In one disturbing instance, a user found her private medical photos from 2013 in the LAION-5B image set – a dataset used by AI tools like Stable Diffusion and Google Imagen – via the site Have I Been Trained.
The growing risk here is real and alarming, and AI-generated deepfakes could hand fraudsters yet another tool to exploit. Since the launch of ChatGPT’s 4o image generator, people have even started using it to create fake restaurant receipts. As one X user put it, “There are too many real-world verification flows that rely on ‘real images’ as proof. That era is over.”
3. Ghibli-style art could land you in a copyright minefield
Creating AI-generated art in the style of iconic brands like Studio Ghibli, Disney, Pixar or The Simpsons might seem like harmless fun, but it could inadvertently breach copyright laws. The characters, imagery and other creative elements behind these distinctive styles are protected intellectual property, and replicating them too closely could be considered creating derivative works. What seems like a tribute could easily become a lawsuit waiting to happen. In fact, some creators have already taken legal action.
In early 2023, three artists filed a class-action lawsuit against several AI companies, alleging their image generators were trained on the artists’ original works without permission. As technology continues to evolve faster than the law, efforts are needed to strike a balance between encouraging innovation and safeguarding artists’ creative rights.
4. You may be signing away more rights than you realise
Many AI platforms bury broad licensing terms in the fine print or use vague language, granting them sweeping permissions to reproduce, alter, and even commercially distribute the content you submit. This means your image – or AI-generated versions of it – could end up in marketing, datasets, or as part of future AI model training.
Watch for key red-flag terms like “transferable rights”, “non-exclusive”, “royalty-free”, “sublicensable rights” and “irrevocable license” – these phrases can grant platforms broad rights to use your image however they see fit, potentially even after you’ve deleted the app. A quick way to skim a policy for these phrases is sketched below.
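For readers who want to check a policy quickly, here is a minimal Python sketch (not legal advice) that searches a saved copy of a platform’s terms of service for the red-flag phrases listed above. The file name terms_of_service.txt is a placeholder for your own saved copy, and the naive sentence splitting is only meant to surface passages worth reading in full.

```python
# A minimal sketch: scan a saved terms-of-service text file for the
# red-flag licensing phrases mentioned above. "terms_of_service.txt"
# is a placeholder file name, not a real document shipped with any tool.
import re
from pathlib import Path

RED_FLAG_TERMS = [
    "transferable",
    "non-exclusive",
    "royalty-free",
    "sublicensable",
    "irrevocable",
]


def find_red_flags(text: str) -> dict[str, list[str]]:
    """Map each red-flag term to the sentences that contain it."""
    # Naive sentence split; good enough for a quick skim of a ToS document.
    sentences = re.split(r"(?<=[.!?])\s+", text)
    hits: dict[str, list[str]] = {}
    for term in RED_FLAG_TERMS:
        matches = [s.strip() for s in sentences if term.lower() in s.lower()]
        if matches:
            hits[term] = matches
    return hits


if __name__ == "__main__":
    tos_text = Path("terms_of_service.txt").read_text(encoding="utf-8")
    for term, found in find_red_flags(tos_text).items():
        print(f"\n'{term}' appears in {len(found)} sentence(s):")
        for sentence in found:
            print(f"  - {sentence}")
```

A script like this only flags where the language appears; whether those clauses actually let a platform keep or reuse your images is a question for the full text, and ideally for a lawyer.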
Christoph C. Cemper, founder of AIPRM, commented on the Studio Ghibli AI trend:
“The rollout of ChatGPT’s 4o image generator shows just how powerful AI has become as it replicates iconic artistic styles with just a few clicks. But this unprecedented capability comes with a growing risk – the lines between creativity and copyright infringement are increasingly blurred, and the risk of unintentionally violating intellectual property laws continues to grow. While these trends may seem harmless, creators must be aware that what may appear as a fun experiment could easily cross into legal territory.
“The rapid pace of AI development also raises significant concerns about privacy and data security. With more users engaging with AI tools, there’s a pressing need for clearer, more transparent privacy policies. Users should be empowered to make informed decisions about uploading their photos or personal data – especially when they may not realise how their information is being stored, shared, or used.”