(Web Desk) - Pause for a moment and take a long look into Tilly Norwood’s deep green eyes. They may look human, but they don’t truly see you.
When the video resumes, her movements — graceful yet slightly mechanical — reveal their source: an algorithm predicting motion from vast databases of human gestures.
Tilly, the much-hyped “AI actress” Hollywood is suddenly eager to represent, is no more than a set of computations animated into life.
Her existence is another reminder that despite decades of technological evolution, the uncanny valley remains. Tilly is not an actress but a simulation — a programmed imitation of humanity.
She reflects a growing phenomenon: the rise of digital influencers and performers who transition from social media feeds to screens. The technology to create such characters now sits only a text prompt away.
Yet, as with many human influencers who achieve fame before proving any real skill, Tilly’s performance — stitched from thousands of human samples — lacks authenticity. She looks real for a second, then falters. A bad actor, digital or human, remains unconvincing, at least not yet. And that word, “yet,” lingers, because the conversation around AI always hints at eventual improvement.
The 2023 SAG-AFTRA strike highlighted this anxiety. Lasting 118 days, it was Hollywood’s first serious standoff over AI, as studios began scanning actors to digitally recreate them. New agreements were reached on consent and compensation, but an unsettling truth remained: technology had made identity something that could be owned, stored, and replicated.
Meanwhile, a new generation of filmmakers, frustrated by creative restrictions, sees AI as liberation. Why hire actors or negotiate contracts when digital substitutes cost only $20 to $200 a month? But this creative freedom hides real costs. Generating 1,000 AI images consumes around 2.9 kWh of electricity — roughly what a laptop uses running for 24 hours. Scale that to the millions of images generated every day, and the environmental toll becomes staggering. The illusion of effortless creativity rests on energy-hungry data centers.
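The scale of that toll can be made concrete with simple arithmetic. The sketch below takes only the 2.9 kWh-per-1,000-images figure quoted above and extrapolates it; the one-million-images-per-day volume is an illustrative assumption, not a measured statistic:

```python
# Back-of-the-envelope energy estimate for AI image generation,
# based on the ~2.9 kWh per 1,000 images figure cited in the article.

KWH_PER_1000_IMAGES = 2.9  # quoted figure

def energy_kwh(num_images: int) -> float:
    """Estimated electricity use, in kWh, for generating num_images images."""
    return num_images / 1000 * KWH_PER_1000_IMAGES

# The quoted baseline: 1,000 images, about one laptop-day of power.
print(energy_kwh(1_000))        # 2.9 kWh

# An illustrative volume of one million images per day:
daily_kwh = energy_kwh(1_000_000)
print(daily_kwh)                # 2900.0 kWh per day

# Over a year, that single assumed workload reaches the megawatt-hour scale.
print(daily_kwh * 365 / 1000)   # 1058.5 MWh per year
```

Even under these rough assumptions, a single high-volume workload draws household-grid quantities of electricity every day, which is the point the paragraph above is making.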
Major corporations — Google, Meta, OpenAI, Alibaba, ByteDance, Kuaishou, and Runway — are locked in a race to dominate users’ screens with new AI tools. With free trials and quick results, they make creating something like Tilly almost trivial. Anyone with moderate hardware can now generate realistic human figures, landscapes, or fantasy scenes with open-source software like Stable Diffusion.
Even so, digital creations remain fragile. Their eyes lose focus, lighting shifts between frames, and continuity crumbles. Emotion and instinct — the hallmarks of human performance — cannot be coded. Though new tools such as Runway ML and its Aleph system promise photorealistic control, the results remain inconsistent: one moment breathtaking, the next unusable.
Some Pakistani filmmakers, too, have jumped on the AI trend, hoping to cut costs and simplify production. Brands like Ufone, Dawlance, Zong, and Golden Pearl have used AI-generated characters in their commercials. But these attempts expose the same flaw — impressive visuals with no genuine feeling or coherence. As the writer wryly notes, pixels don’t need collagen.
Pakistan’s problem isn’t the lack of tools but the lack of strategy. The industry’s tendency to follow trends without understanding them often leads to underwhelming outcomes. While AI could theoretically help reduce production costs and revive unfinished projects, in practice it exposes deeper weaknesses — inconsistency, lack of nuance, and overreliance on automation.
Even AI-assisted writing tools like ChatGPT and Gemini can only produce generic ideas unless the human writer reshapes them with emotion and individuality. True storytelling still relies on flaws, contradictions, and personal voice — qualities machines cannot replicate.
For now, AI may assist in small ways — cleaning up effects, saving time, refining visuals — but it remains a collaborator, not a replacement. Until machines learn to dream, doubt, and feel, they will only imitate creation, never embody it.