How creative mediums find their voice
As LinkedIn’s AI hacks flooded my feed, one pattern was impossible to miss: “Midjourney just gave me 50 logos.” “ChatGPT wrote all my button copy.” We’re in tech’s oldest loop—new tool, same tasks, just faster. It echoed a lesson from photography class: early photographers spent decades making photos look like paintings.
When photography was invented in 1839, practitioners pointed their cameras at the same subjects painters had captured for centuries. They used soft focus and hand-tinting, desperately trying to gain artistic legitimacy by mimicking the established medium. It took nearly 50 years before photographers like Alfred Stieglitz proclaimed that it was “high time that the stupidity and sham in pictorial photography be struck a solarplexus blow.”
Photography had to find its own voice—exactly the opportunity we face with AI.
This pattern repeats throughout history. Cinema spent its first decades as “filmed theater.” Early websites looked like printed newspapers. And today? We have revolutionary AI tools, but we’re mostly using them as faster interns. We’re repeating history, and we don’t even see it.
The four phases of any medium
Every creative medium follows the same predictable arc:
Phase 1: Imitation - New tech mimics what came before
Phase 2: Experimentation - Pioneers start breaking rules
Phase 3: Unique Voice - The medium finds what only it can do
Phase 4: Maturation - It transforms everything else
(These are rough historical averages, not laws of physics.)
Photography took 50 years to freeze time. Cinema needed 30 to discover editing. The web, 20 to go responsive.
VR still hunts for its freeze-time moment; LLMs have already had theirs (and they’re only getting louder).
Most of us are stuck in Phase 1
The evolution is happening faster than we think. In 2024, Figma found that 72% of teams said AI played only a minor role[1]. By mid-2025, 64% of those same teams had shipped at least one AI-powered feature, up from 28% the year prior[2]. The 2025 State of AI in Design Report by Foundation Capital shows 89% of designers say AI has improved their workflow[3]. That’s not incremental progress. That’s a phase shift.
But here’s the pattern: 84% use AI for exploration, while only 39% use it for final delivery[3]. Great at brainstorms and prototypes; timid with ship-ready work.
Those numbers suggest a mass migration from Phase 1 to early Phase 2, while a fringe is sprinting toward Phase 3 and beyond.
I deliberately switched from Swift to Rust, letting Claude mentor me 24/7. A weekend’s grind collapsed into a single evening sprint. Not because AI was doing my work, but because it was translating between my design thinking and code syntax. Between what I could imagine and what I could build.
This wasn’t automation. It was amplification.
The speed of evolution is unprecedented. According to the same report, 96% of designers are self-taught in AI[3], learning from peers and social posts rather than formal training. What took photography clubs months through quarterly journals happens in minutes on Reddit.
The adoption curve is splitting
Here’s what the data doesn’t capture: while most designers are using AI for “exploration,” a small group has already jumped to Phase 3 or 4.
I co-craft prompts in Discord channels where yesterday’s hacks ship as today’s SaaS. Take Krea.ai, where artists paint with AI as a living brush (think Photoshop, but the model generates each brushstroke in real time). Or developers who build entire SaaS products by describing functions rather than coding them—like Perplexity’s real-time custom agents.
The real evolution might be happening in the margins. Maybe prompt engineering is the new creative medium. Maybe the conversation itself—the back-and-forth between human intent and machine possibility—is the breakthrough. Six months ago I specced a chat-loop workflow that felt like sci-fi; this week, GPT-4o’s streaming API let me ship it in a day.
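That chat-loop workflow can be sketched in a few lines. This is a minimal illustration, not the author’s actual implementation: the streaming model call is a stub (a real version would yield tokens from an LLM API such as GPT-4o’s streaming endpoint), and the function names are hypothetical.

```python
from typing import Callable, Iterator

def stream_reply(prompt: str) -> Iterator[str]:
    """Stub standing in for a streaming LLM call.
    A real version would yield tokens from an API as they arrive."""
    for token in f"Echo: {prompt}".split():
        yield token + " "

def chat_loop(turns: list[str],
              model: Callable[[str], Iterator[str]]) -> list[str]:
    """The back-and-forth: feed each human turn to the model,
    consume the streamed tokens, and collect the full replies."""
    transcript = []
    for turn in turns:
        reply = "".join(model(turn)).strip()
        transcript.append(reply)
    return transcript

replies = chat_loop(["design a login form"], stream_reply)
```

Swapping the stub for a live streaming client is the whole integration; the loop itself stays this small, which is why the workflow went from spec to shipped in a day.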
The implication hit harder than the code itself.
For the first time, I could imagine being a solo founder. Hitting “deploy” on a functioning SaaS by lunch made something click: it felt like having a team; the usual barriers just weren’t there.
(Yes, most weekend SaaS die—but the cost of trying has collapsed.)
That power is double-edged: models can hollow out junior roles. We won’t slow the shift, but we can shape governance and talent pipelines.
We’re so busy debating whether AI has found its voice that we’re missing the obvious: it already has. We just don’t have the vocabulary to describe it yet.
The teenagers know. The solo founders building million-dollar companies from their bedrooms know. The artists making AI hallucinate between realities know. They’re not waiting for AI to evolve. They’re already living in Phase 4.
Most teams are still polishing AI demo reels while Phase-4 builders ship features that were sci-fi in January. Ship a coffee-break prototype once and you can’t un-learn it.
So, where are you on the curve?
1. Figma's 2024 survey of 1,800 users. Full report.
2. Figma's 2025 AI Report. Full report.
3. The 2025 State of AI in Design Report by Foundation Capital and Designer Fund. Full methodology.