Why We Need an Honest Conversation About AI

Meru Gokhale
5th Aug 2025

I’ve spent a long time trying to start this newsletter—partly because every time I sit down to write, I realize how charged the conversation around artificial intelligence has become. We’re caught between utopian dreams and dystopian fears, between embracing AI unconditionally and resisting it outright. Yet, as someone who has spent decades in publishing and editorial work, I’ve come to believe deeply that the truth about AI at work lies somewhere far messier—and more interesting—than those extremes.

AI isn’t just coming; it has already entrenched itself in the workplace. According to a recent Microsoft study, three-quarters of knowledge workers now use AI at work. What’s remarkable is not just how many people are using it, but how quickly it has happened—half of these users began within just the last year. And equally striking: more than half conceal this use from their managers, worried about appearing replaceable.

This reality creates a hidden undercurrent we can think of as “Shadow AI,” emerging not from top-down strategy but from bottom-up necessity. Employees aren’t waiting for formal policies or sanctioned tools; they’re adopting consumer-grade AI applications quietly, often covertly, driven by growing workloads and the fear of obsolescence.

What results from this secrecy is a profound tension. On one side, employees feel compelled to enhance their productivity with AI but grapple with the fear that they’re somehow “cheating” or undermining their professional worth. On the other, executives maintain sky-high expectations for AI’s productivity gains—often without offering clear guidance or adequate resources for employees to succeed. This misalignment not only leads to employee burnout but also exacerbates risk, as sensitive data slips through unregulated tools and processes.

Simply banning AI isn’t the solution either. History offers a cautionary tale: financial institutions that banned messaging apps without providing secure alternatives were later hit with multi-billion-dollar fines when employees kept using those apps off the record. Prohibition only drives activity underground, amplifying risks rather than eliminating them. In 2025, the greater danger isn’t allowing AI use—it’s failing to govern it openly.

So, how do we start this honest conversation?

Firstly, we have to reframe AI usage in the workplace from an ethical question of purity to one of intention and outcome. AI is another productivity tool—akin to calculators, grammar checkers, or spreadsheets. We don’t debate the morality of using Excel to sum columns faster than human arithmetic. Similarly, using AI to draft emails, organize data, or structure initial arguments shouldn’t be stigmatized; it should be understood clearly, used transparently, and managed responsibly.

Secondly, we need to rethink how we measure productivity. AI disrupts the old metrics—hours logged, pages edited, code written—by completing tasks in fractions of the time. Organizations need new frameworks that measure genuine output rather than mere activity.

Thirdly, and crucially, we must directly address the psychological impact of AI. Knowledge workers deeply associate their identity and self-worth with intellectual effort. The arrival of AI, capable of mimicking human work at speed, triggers a genuine crisis of identity. But embracing AI does not diminish human value; rather, it highlights our most distinctly human abilities: creativity, judgment, nuance, and empathy. AI frees us from mundane tasks, allowing greater focus on what genuinely requires human insight.

Finally, honesty means openly acknowledging ambivalence. AI can indeed be incredibly useful, but it also has significant limitations—fabricated references, bland prose, and “AI slop,” the meaningless, generic output filling our inboxes and feeds. Recognizing both AI’s strengths and its shortcomings is not indecision; it’s responsible realism.

Through this newsletter, I want to explore practical, nuanced insights into the complexities of AI adoption—not to push a specific agenda, but to help navigate this transition thoughtfully. In future issues, I’ll discuss how organizations can craft sensible policies without stifling innovation, how educators can ethically integrate AI into teaching, and how individuals can use AI without losing their intellectual identity.

I promise not certainty but candour: an ongoing, balanced exploration of AI’s implications for our professional lives and identities. I’ll share honest observations from the field—successes, mistakes, and genuine uncertainties. And I invite you to join this conversation openly. Together, we can move beyond hype or fear and shape an intelligent, humane relationship with AI.

The elephant in the server room isn’t going away. Let’s talk honestly about how we live with it.