Introducing Monogram, a new user interface for AI
The history of personal computing isn't just about faster chips or smarter code that can beat us at chess. It's about the moments when machines stopped feeling like alien overlords and actually made sense to humans.
Every big leap in personal computing has the same story: someone figured out how to make the weird obvious, the complicated simple, and the impossible look inevitable. Let’s rewind to 1968, when Engelbart quietly blew everyone’s minds and set the stage for everything that came after.
1968: Engelbart's Demo

The Macintosh
On December 9, 1968, more than half a century ago, Douglas Engelbart, a researcher at the Stanford Research Institute tucked away in Menlo Park, California, walked on stage and casually changed the course of computing. The event is now called the “Mother of All Demos”, which is basically the Woodstock of user interfaces.
What did Engelbart show off? Essentially the secret recipe for the modern graphical interface: the mouse, which has since evolved into the trackpad, the touchscreen, and eventually us yelling at Siri; windows, not the blue-screen kind, the squares-on-a-screen kind; hypertext, the ancestor of those blue links you click every day; and graphics that made computers look less like military calculators and more like something you'd actually want in your house.
This mind-blowing demo inspired Xerox to create the Alto, the first real graphical computer. And then, in true Silicon Valley fashion, Steve Jobs saw it, loved it, and thought: “Thanks, I’ll take it from here.” That inspiration became the Apple Lisa, which—though kind of a flop—set the stage for the Macintosh in 1984, and with it, the birth of mainstream personal computing.
1993: The Browser

Early web browser interface
Then came the browser. In 1993, a young Marc Andreessen co-authored Mosaic, the program that cracked the web wide open and gave ordinary people their first real way in. He then went on to co-found Netscape, which turned the internet into a movement.
He took a system built for academics and engineers and turned it into something anyone could click, see, and explore. Words lived alongside images; links felt like doors. What made the browser revolutionary was that it humanized the internet for the rest of us.
2007: The iPhone

iPhone multi-touch interface
And finally, in 2007, Apple dropped the iPhone, and suddenly the way we interacted with computers changed forever, again. It wasn't just smaller or prettier; it was touchable. Multi-touch let us pinch, swipe, scroll, and zoom, turning fingers into the ultimate input device.
For the first time, you didn't need to learn how to use a computer—the interface literally bent to your instincts. Kids, grandparents, everyone got it instantly. That single unlock made the smartphone not just a gadget, but the most personal computer ever built.
What's next?
Today, there's a new kid in town, and yes, it's also the elephant in the room: AI. It burst into the mainstream with OpenAI's ChatGPT, but let's be honest: its interface feels like a command-line system from the 1980s.
It's useful, but it's also… homework. The same way personal computing felt before Steve Jobs unveiled the Macintosh, or the internet before Marc Andreessen gave us the browser.
It doesn't have to stay that way. We can invent a new paradigm—a new version crafted with human ingenuity that makes AI more human, approachable, and collaborative. We're working on a new secret recipe that needs three new ingredients.
1. An AI-Native UI
UI-first, because seeing beats reading.
Our brains are wired for vision. A huge share of our neural horsepower is dedicated to processing what we see. Evolution didn't train us to parse bullet points; it trained us to spot tigers in the bushes. That's why we take in a picture far faster than we can read the paragraph describing it.
A visual UI means comparison becomes easy, patterns pop out, things are easier to remember, and cognitive load is lower. People remember visuals far better than words—you'll forget the description of the hotel carpet but remember the hideous photo forever.
Reading is work. Seeing is instinct. That's why pilots have dashboards with a visual altitude indicator, not a 500-line paragraph saying "altitude is stable." Interfaces should lighten our load, guide us and even inspire us, not turn every task into homework.
2. Privacy First
AI only gets better if it knows you. But trust is earned. You wouldn't spill your secrets to someone unless you trusted them like your doctor, your lawyer, or your mom.
That's why a new AI must be three things at once: a best friend that remembers what actually matters, a CIA agent that redacts and encrypts like your life depends on it, and a judge that keeps everything accountable.
And it must respect your right to be forgotten. You're in control—not the other way around.
3. Designed for Collaboration
Most things you do, you do with other people. Yet current AI often feels like it wants to be your therapist, your boyfriend, or your imaginary friend. That's… awkward.
We see a different future, one where AI helps you plan the dinner party everyone actually shows up to. An AI that smooths out group projects at work so they feel less like herding cats. An AI that frees you from boring tasks, so you can spend more time with people, not less.
This isn't about efficiency for efficiency's sake. It's about using AI to foster connection, collaboration, and maybe even fewer group texts that start with "so where are we eating?"
The Crossroads

The future of AI interfaces
AI now stands at that same crossroads. To unlock its real potential, it needs its own "Mother of All Demos": a crafted, human-first interface that makes it approachable, trustworthy, and collaborative.
Engelbart gave us the mouse. Andreessen gave us the browser. Jobs gave us multi-touch. The question now is: who will give AI its face?
We're a team of engineers and designers tucked away in San Francisco, obsessively crafting and building the new face of AI.