(STL.News) For years, the guiding mantra in digital design was “make it simple.” From the rise of minimalism to the obsession with frictionless onboarding, user experience revolved around removing barriers. But as artificial intelligence begins to shape what users see, say, and do, simplicity is no longer enough.
The next challenge for UX designers is not clarity or usability. It is trust.
“AI has changed what design means,” says Osman Gunes Cizmeci, a New York-based UX and UI designer who studies the relationship between human behavior and emerging technology. “We’re no longer just designing interfaces. We’re designing relationships between people and systems that think for themselves.”
A New Kind of User
The rise of AI-assisted products such as writing tools, creative platforms, and adaptive dashboards has blurred the definition of the “user.” Software no longer waits for input. It learns, predicts, and sometimes acts on behalf of the person behind the screen.
“When I talk about design now, I talk about shared agency,” Cizmeci says. “It’s not just about what the user does, but what the system decides to do in return. That’s a very different interaction model.”
Recent examples such as Microsoft Copilot, Notion AI, and Google’s Duet AI show how quickly this shift is unfolding. Users rely on systems that summarize, recommend, and automate tasks that once required their full attention. For Cizmeci, this raises a crucial design question: When should a product act, and when should it step aside?
“The real UX question isn’t how to make something easier,” he says. “It’s when to give control back.”
When Help Becomes Uncertainty
While adaptive systems promise personalization, they also introduce a subtle risk: unpredictability. When an interface learns silently, rearranges itself, or generates output without clear rationale, the user’s confidence begins to erode.
“People don’t like surprises in software,” Cizmeci explains. “They like responsiveness, but they also like to understand what’s happening. The line between helpful and unsettling is thinner than most teams realize.”
He recalls working on projects where personalization features tested well in isolation but failed after launch because users felt disoriented. “We’d hear feedback like, ‘It’s doing too much for me,’ or, ‘It’s learning too fast.’ That told us we didn’t design the communication layer properly,” he says.
In adaptive design, Cizmeci argues, the interface’s explanations are as important as its decisions. “If the system makes a change, tell the user why. It’s that simple,” he says. “Transparency is the new UX currency.”
The Feedback Loop Problem
The push toward AI-driven products has also changed how designers approach iteration. With traditional software, testing happens before launch. With adaptive systems, the feedback loop is continuous.
“The product is learning from users while users are learning from the product,” Cizmeci says. “That creates a moving target. The experience you test on Monday might behave differently by Friday.”
To manage this, Cizmeci now maps “decision points” early in the design process. He defines where the system can act autonomously, where it should request permission, and where users must retain control. “You have to visualize that logic before you build,” he says. “Otherwise you end up with invisible behaviors that confuse people.”
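A decision-point map of this kind can be expressed as a simple lookup table declared before anything is built. The sketch below is illustrative only; the action names, autonomy categories, and defaults are assumptions for the example, not Cizmeci’s actual framework.

```python
# Hypothetical sketch of a "decision point" map for an adaptive feature.
# Each action is assigned an autonomy level up front, so no system
# behavior is left invisible or undefined.
from enum import Enum

class Autonomy(Enum):
    ACT = "act autonomously, then explain"   # system may proceed on its own
    ASK = "request permission first"         # system must prompt the user
    USER_ONLY = "user retains full control"  # system never acts unprompted

# Declared at design time, before build (action names are invented examples).
DECISION_POINTS = {
    "summarize_document": Autonomy.ACT,
    "rewrite_user_text": Autonomy.ASK,
    "delete_or_send_content": Autonomy.USER_ONLY,
}

def resolve(action: str) -> Autonomy:
    """Look up an action's autonomy level; unknown actions default to user control."""
    return DECISION_POINTS.get(action, Autonomy.USER_ONLY)
```

Defaulting unmapped actions to full user control reflects the principle in the quote above: when the logic has not been made explicit, the safest behavior is to give control back.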
This approach mirrors a growing sentiment in UX circles that design now needs to include system ethics, not just aesthetics. Designers are increasingly part of discussions once limited to engineers, such as data collection, algorithmic bias, and model transparency.
“Designers can’t sit out of those conversations anymore,” Cizmeci says. “If you don’t understand how your system learns, you can’t design the experience responsibly.”
The Rise of Ethical UX
Globally, UX teams are beginning to integrate “ethical checkpoints” into their workflows. These are structured reviews that test not only usability but also fairness, inclusion, and explainability. Major tech firms are publishing design guidelines for trustworthy AI, while smaller studios are creating their own principles for adaptive products.
Cizmeci sees this as both overdue and necessary. “For a long time, design was treated as decoration for technology,” he says. “Now it’s becoming the voice of accountability.”
He often uses the term “ethical affordance” to describe what design should provide in AI systems. “An ethical affordance gives users a sense of choice and awareness,” he explains. “It lets them see the system thinking, and decide whether to accept or challenge it.”
A Human-Centered Reset
The irony, Cizmeci notes, is that as systems grow more intelligent, users crave experiences that feel more human. “People want to feel seen and respected,” he says. “They don’t just want efficiency; they want empathy.”
He points to recent design trends that favor warmth and transparency, such as natural language explanations, visible learning indicators, and adaptive settings that can be turned off easily. “It’s about bringing emotion and honesty back into the interface,” he says. “Designing for trust means designing for understanding.”
The Path Ahead
Cizmeci believes this shift will redefine what it means to be a designer in the coming decade. “Our job isn’t just to make things usable anymore,” he says. “It’s to make systems comprehensible.”
That role, he argues, will only grow more important as AI becomes invisible. “The more powerful the system, the more it hides behind the surface,” he says. “Design is how we bring that back into view.”
For now, he remains optimistic. “The tools are changing, but the principles stay the same,” he says. “People need to feel safe, respected, and in control. If we can design for that, everything else will follow.”
And that, he says, might be the defining challenge and opportunity for UX in the age of intelligent systems.