The whimsical world of Studio Ghibli has captivated audiences for decades with its distinctive hand-drawn animation style, creating unforgettable characters from Totoro to Spirited Away's No-Face. ChatGPT's new image generation feature attracted one million users within its first hour, demonstrating our collective fascination with seeing ourselves reimagined through AI's artistic lens. Yet beneath this harmless-seeming trend lurks a web of potential risks that few users stop to consider before uploading their faces.
Searches for “ChatGPT Studio Ghibli” have skyrocketed by 1,200% over the past week, as social media floods with AI-generated portraits mimicking the beloved art style of Hayao Miyazaki.
But behind the buzz, do you really know how safe it is to upload your face to these AI tools? And could you unknowingly be stepping into a legal grey area when it comes to copyright and personal data?
Christoph C. Cemper, founder of AI prompt management company AIPRM, steps in to break down the hidden risks behind the trend and reveal what you need to know before jumping on the bandwagon - from potential copyright infringement to serious privacy concerns.
Essentially, when you upload a photo to an AI art generator, you’re giving away your biometric data (your face). Some AI tools store that data, use it to train future models, or even sell it to third parties - none of which you may be fully aware of unless you read the fine print.
So does ChatGPT store your data? Yes, it does. OpenAI’s privacy policy clearly outlines that it collects two types of data: information you provide (personal details such as your name, email address, and the photos or images you upload) and automatically collected information (device data, usage data, and log data).
The reality is that the ‘innocent’ upload you make to turn family, friend or couple portraits into Ghibli-style art for fun could feed personal information into models used to fine-tune future results. Unless you actively opt out of ChatGPT's training data collection or request deletion of your data via the settings, your images could be retained and used without your explicit consent.
Once your facial data is uploaded, it becomes vulnerable to misuse. Images shared on AI platforms could be scraped, leaked, or used to create deepfakes, identity theft scams, or impersonations in fake content. You could unknowingly be handing over a digital version of yourself that can be manipulated in ways you never expected.
In one disturbing instance, a user found her private medical photos from 2013 in the LAION-5B image set - a dataset used by AI tools like Stable Diffusion and Google Imagen - via the site Have I Been Trained.
The risk here is real and growing, and it hands fraudsters yet another tool alongside AI-generated deepfakes. Since the launch of ChatGPT’s new 4o image generator, people have even started using it to create fake restaurant receipts. As one X user put it, “There are too many real-world verification flows that rely on ‘real images’ as proof. That era is over.”
Creating AI-generated art in the style of iconic brands like Studio Ghibli, Disney, Pixar or The Simpsons might seem like harmless fun, but it could inadvertently breach copyright laws. The characters and artwork behind these distinctive styles are protected intellectual property, and replicating them too closely could be considered creating derivative works. What seems like a tribute could easily become a lawsuit waiting to happen. In fact, some creators have already taken legal action.
In early 2023, three artists filed a class-action lawsuit against several AI companies, alleging that their image generators had been trained on the artists’ original works without permission. As technology continues to evolve faster than the law, efforts are needed to strike a balance between encouraging innovation and safeguarding artists’ creative rights.
Many AI platforms bury broad licensing terms in the fine print or use vague language, granting them sweeping permissions to reproduce, alter, and even commercially distribute the content you submit. This means your image - or AI-generated versions of it - could end up in marketing, datasets, or as part of future AI model training.
Watch for key red-flag terms like “transferable rights”, “non-exclusive”, “royalty-free”, “sublicensable rights” and “irrevocable license” - these phrases can grant platforms sweeping rights to use your image however they see fit, potentially even after you’ve deleted the app.
Christoph C. Cemper, founder of AIPRM, commented on the Studio Ghibli AI trend:
“The rollout of ChatGPT's 4o image generator shows just how powerful AI has become as it replicates iconic artistic styles with just a few clicks. But this unprecedented capability comes with a growing risk - the lines between creativity and copyright infringement are increasingly blurred, and the risk of unintentionally violating intellectual property laws continues to grow. While these trends may seem harmless, creators must be aware that what may appear as a fun experiment could easily cross into legal territory.
“The rapid pace of AI development also raises significant concerns about privacy and data security. With more users engaging with AI tools, there's a pressing need for clearer, more transparent privacy policies. Users should be empowered to make informed decisions about uploading their photos or personal data - especially when they may not realise how their information is being stored, shared, or used.”
While the allure of seeing yourself transformed into a Ghibli-esque character is undeniably tempting, the momentary delight of a stylised portrait comes with lasting implications for your digital footprint. Before uploading your next selfie, consider whether that fleeting social media moment is worth potentially contributing to deepfake technology, infringing on artistic copyrights, surrendering control of your biometric data, or agreeing to terms that extend far beyond what you intended.
The viral Studio Ghibli AI portrait trend represents a crucial moment for digital literacy – one that asks us to pause and reflect on the true cost of these seemingly innocent online diversions. As AI tools become increasingly embedded in our daily lives, developing this critical awareness isn't just prudent – it's essential for protecting both our personal privacy and the artistic integrity of the creators whose work we admire.
Credit: https://www.aiprm.com/, edited for publication.