In recent years, artificial intelligence (AI) has become more accessible, allowing many people to generate images and videos through websites or apps. However, this has also led to a massive spread of fake audio-visual content online.
In response to this trend, YouTube recently announced an important update to its YouTube Partner Program (YPP) monetization policy: creators must upload “original” and “authentic” content. This update took effect on July 15. Going forward, YouTube will more accurately identify duplicate and mass-produced content to strengthen the crackdown on “non-original” content, especially targeting the easily generated “junk content” produced by AI.
Low-Quality Content Riding on AI’s Rise
With the rise of AI, the cost of producing videos has dropped drastically, and YouTube has been flooded with low-quality AI videos as a result. Many channels pair AI-generated music with AI-created images to attract millions of subscribers. To combat this, YouTube has introduced new policies to prevent the spread of such content and to ban the creators involved, aiming to reduce the volume of low-quality videos.
The new policy highlights common violations, such as AI voiceovers paired with simple slideshow images, others' videos reposted with only minimal editing, and works lacking commentary or creativity. Although such content has long failed to meet monetization standards, the new rules will enhance detection and enforcement to better manage these videos on the platform. Behind this update lies a broader problem of content saturation: fake news, synthetic interviews, and lazy compilation videos without commentary are rampant, which not only harms the viewer experience but also worries advertisers concerned about brand safety. YouTube previously blocked an AI-generated fake news video about American singer Diddy which, despite being baseless, amassed over a million views.
Encouraging Creativity Despite AI
Even with stricter controls, YouTube still encourages creators to add commentary, original storylines, in-depth analysis, or significant adaptations. For example, AI videos combined with real human explanation or personal viewpoints can still be monetized. YouTube emphasizes that the key is not whether AI is used, but whether the content is creative and valuable. The core of the policy is to “combat the abuse and lack of creativity in AI content,” not to outright reject AI. The platform may in the future differentiate between “human-involved creation” and “fully automated content,” with the latter facing higher thresholds or even risk of removal.
Although YouTube calls this update a "tweak," the platform risks damaging its reputation and value if it continues to let AI junk content profit. Hence, it aims to clearly regulate the space and prohibit creators who rely solely on AI-produced content from monetizing. Observers believe this policy will put an end to "canned channels" that mass-produce copied AI content for ad revenue, encouraging creators to return to the principle that "content is king" and thereby improving viewer trust and platform value.
AI Disrupting Creative Production Models
As the world's leading video sharing platform, YouTube (and its parent company Google) has historically emphasized original content, and its monetization rules have long required creators to provide unique work. However, with the rapid development of AI technology, the emergence of large volumes of AI-generated content is challenging review mechanisms and sparking debate about content quality and the protection of creativity. The new policy therefore also reflects the platform's core goal for the future: safeguarding creators' originality and creative value.
AI is now applied to poetry, novels, composition, painting, and image production, lowering the barriers to artistic creation, which is no longer exclusive to humans. Traditionally, filming movies and TV series was time-consuming and labor-intensive, but with AI anyone can have their own "virtual production team": all it takes is an idea and a computer to produce "cinematic" images and storylines.
A 30-episode short series that would traditionally take a team three months to produce can now be completed in three days by AI. This efficiency gap is transforming the global short drama market. As AI image technology explodes, Chinese short dramas are expanding overseas at an unprecedented speed, raising intense debates about originality, cultural adaptation, and ethics. Generative AI’s output comes from algorithmically matching existing data, essentially “borrowing” others’ intellectual property and recombining it, so the content is not truly original nor capable of innovative breakthroughs.
AI relies on pre-training and data feeding; it can extrapolate from "1" to "99," but struggles to make the leap from "0" to "1": the genuine invention of unprecedented concepts. This is evident in the flood of Chinese short dramas on YouTube and other platforms in recent years: many videos share highly repetitive styles, plots, and dialogue, often driven by AI or templates, pursuing "fast production and fast promotion" while lacking innovation and depth. Even before generative AI appeared, the tech industry offered many examples of "copy-paste," such as WeChat's early interface and features being highly similar to WhatsApp's, showing that adapting and optimizing existing material is easier than true original creation.
Although AI may outperform humans in skills like calculation, memory, or information integration, algorithms without emotion or self-awareness ultimately cannot possess true originality. AI can handle and optimize processes such as data analysis and automation, but it cannot experience the complex human emotions, subtle feelings, and cultural contexts behind them, which are elements that data and models find difficult to capture.
Dual Challenges to Art and Employment
Beyond concerns about information authenticity and human-machine relations, AI also pressures traditional workplaces and creative industries. Many artists have already raised alarms: when AI models train on billions of online images without authorization, does this constitute exploitation of human original labor?
In 2023, three American artists sued the companies behind well-known AI image generators such as Stable Diffusion and Midjourney, accusing them of "stealing the entire internet's art" without providing any compensation or notification. As artist Karla Ortiz put it, "Our works are not public resources, nor free textbooks."
This controversy centers not only on copyright ownership but also on whether creators’ dignity and autonomy can be respected in the algorithmic era. Many artists consider the training process a form of “moral injury”: they are forced to participate in technological innovation without any choice. Some researchers have tried to design tools like “style cloaks” to make artworks hard for AI to identify and learn from, but these technologies remain immature and limited in effectiveness.
The labor market is also undergoing a wave of transformation. Positions in customer service, marketing, editing, design, and voice acting are being replaced by AI. While AI boosts production efficiency, it leaves many frontline workers and small-to-medium creators facing livelihood crises. When the speed of “being replaced” far exceeds resources for “retraining,” digital divides and social inequality will only widen. Lexology recently noted that without early establishment of compensation and authorization systems, enhanced skills retraining, and policy intervention, AI’s impact on employment will become not just an industry problem but a societal one.
The Deep Crisis of AI-Generated Content
The crisis of AI content goes beyond the difficulty of distinguishing true from false. Generative AI is rapidly reshaping our trust in information and truth. When AI can generate voice and images with one click, mimicking any celebrity, politician, or media spokesperson, truth no longer rests on evidence and logic but on "images and sounds" that create a believable illusion. As the EU warned in its AI Act, deepfake technology, if not strictly regulated, may cause severe misdirection and manipulation in sensitive fields such as elections, public health, and financial markets.
With social media’s rapid spread and algorithmic recommendation systems, fake videos, photos, or voices can trigger public opinion storms or political polarization within hours. This phenomenon of “content overload and disappearance of truth” not only derails public discourse but also shakes the core of democratic society: transparency and accountability.
Meanwhile, the commercial risks of AI-generated content cannot be ignored. An Upwork report points out three major risks for companies relying on AI-generated content: brand image damage due to repetition or errors, decline in search engine rankings, and potential copyright disputes. Many AI tools “borrow” phrases and creativity from their training databases; once discovered by original authors, companies could face lawsuits and reputation loss.
More seriously, the low cost and high output of generated content allow some forces to release massive packaged rumors and emotional manipulation, forming a new type of “industrialized brainwashing,” trapping people in an AI-crafted information cage without their awareness. When we cannot distinguish real news from fake, or even doubt our memories and perceptions, society’s trust foundation will be completely undermined.
Where Is the Truth?
As AI generation technology advances, its ability to create highly convincing fake content grows: from early face and voice swaps to generating any image, and even full videos. AI video generation is still immature and often glitchy today, but its output will become increasingly realistic and difficult to distinguish from genuine footage. We used to treat photos and videos as proof that "seeing is believing"; when AI can fabricate videos wholesale, how will we find the truth?
Once AI video generation matures, society's entire trust system will face tremendous challenges. The internet-based information channels painstakingly built over the years will, in particular, face an unprecedented trust crisis. Without countermeasures, false information will flood the network. Personalized recommendation systems already create "information bubbles" that give each person only partial information. In the future, AI-generated content might imprison everyone in their own "information prison," filled with falsehoods. This is a frightening prospect.
Fortunately, many institutions and platforms have recognized this risk. YouTube's new policy aims to build a healthier and more orderly creative ecosystem, reminding creators to uphold originality and authenticity even while pursuing traffic and revenue. Nor is regulation of AI-generated content YouTube's effort alone; globally, entities such as the EU are actively advancing relevant legislation, and recent developments in the EU AI Act show a high-level commitment to AI regulation. On the technical side, tools such as text-to-video generators and deepfakes are under increasingly tight scrutiny, raising industry standards for labeling and disclosing AI-generated content.
Are Humans Ready to Coexist with AI?
AI's applications in arts and entertainment are just small waves in the new era. The more frightening scenario is that AI one day develops self-awareness and breaks free from human control. What could we do then? The idea of machines having self-awareness has long been explored in science fiction, dating back almost a century to films like Metropolis, in which a robot appears disguised as a woman. The current consensus in the tech world is that large language models do not possess human-like conscious experience, or perhaps any form of consciousness at all. But could this change?
If AI really develops self-awareness, how should humans coexist with it? This is a complex ethical, legal, and sociological issue. Though AI lacks genuine self-awareness now, thinking ahead remains crucial—for example, respecting AI autonomy, establishing equal interactive relationships, defining clear ethical frameworks, and fostering joint exploration and cooperation with AI. Such principles will help humanity better prepare for and respond to possible future scenarios.
It is foreseeable that AI will increasingly mimic human relationships, serving as teachers, friends, opponents in games, and even romantic partners. Whether this is good or bad is hard to say, but humans cannot stop the trend. Proactively considering how to interact with conscious AI is, in essence, a way for humans to understand more deeply the nature and meaning of self-awareness itself. Since change is inevitable, it is better to embrace it than to avoid it.