The optimistic view of AI sees two challenges to meet: building systems that approach aspects of human intelligence, and ensuring those systems expand rather than erode our capabilities.
I focus on the “missing middle”: systems that help us think better without taking decisions away. The goal is oversight and augmentation, not replacement. That means building tools that teach as they assist, leaving people more capable than when they started.
Augmentation only works if the systems we connect to are trustworthy. My background in Zero Trust security informs everything here—permissions are explicit, access is scoped, and actions are auditable. Clean, well-structured data is just as critical; without it, even the best AI workflows produce noise instead of insight.
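As a concrete sketch of what "explicit, scoped, auditable" can mean for an AI agent's tool access, consider a default-deny permission gate. This is an illustrative example, not a reference implementation; all names (`PermissionGate`, `crm.read`, the scope strings) are hypothetical.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class PermissionGate:
    """Illustrative Zero Trust gate for agent tool calls (names are hypothetical)."""
    # Explicit grants: agent id -> set of allowed (tool, scope) pairs.
    grants: dict = field(default_factory=dict)
    audit_log: list = field(default_factory=list)

    def grant(self, agent: str, tool: str, scope: str) -> None:
        # Permissions exist only when explicitly granted, and only at this scope.
        self.grants.setdefault(agent, set()).add((tool, scope))

    def check(self, agent: str, tool: str, scope: str) -> bool:
        # Default deny: anything not explicitly granted is refused.
        allowed = (tool, scope) in self.grants.get(agent, set())
        # Every decision—allow or deny—is recorded for audit.
        self.audit_log.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "agent": agent,
            "tool": tool,
            "scope": scope,
            "allowed": allowed,
        })
        return allowed

gate = PermissionGate()
gate.grant("report-agent", "crm.read", "accounts:emea")

gate.check("report-agent", "crm.read", "accounts:emea")   # allowed: explicitly granted
gate.check("report-agent", "crm.write", "accounts:emea")  # denied: never granted
```

The point of the sketch is the shape, not the code: the agent holds no ambient authority, its reach is bounded by scope, and the audit log is written on every decision rather than only on success.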
Leadership itself will change as organizations manage not only people but also teams of AI agents. Drawing on the Nadler–Tushman Congruence Model, I see future leaders needing both engineering fluency to direct autonomous systems and the judgment to lead the people who remain. The ability to align strategy, work, culture, and technology will be a defining skill for executives in an AI-driven organization.
AI development and automation should follow the principles of humanistic design: dignity first, transparency always, and productivity that strengthens rather than strips away the human element. AI-enabled workflow automation should target the dull, dirty, dangerous, and tedious work—freeing humans to tackle bigger problems and more creative pursuits. This has been the promise since the first industrial revolution, which raised living standards and expanded the economy. The future must ensure new work is meaningful, sustainable, and accessible; automation that strips people of purpose or dignity is a design failure.