
GenAI at Work: A Tool Unlike Its Predecessors

Article Type: Thought Leadership | Status: drafting

As more companies roll GenAI out, some are treating it like just another technology. They assume that because they successfully went from pen and paper to typewriters, to word processors, to desktop and laptop computers, and finally to the cloud, they can treat AI the same way—just tell employees to adopt it and move on.

---

Why This Isn’t Just “Another New Tech”

In past rollouts—new software, new devices, new platforms—yes, there were bugs, frustration, and learning curves. But those systems were predictable, neutral, and consistent across settings.

Predictability

With spreadsheets, databases, or email systems, the same input always produced the same output. That determinism built trust; people learned by repetition and mastery. GenAI breaks that assumption. The same prompt can yield wildly different answers. Even the same model can shift as context or parameters change. To most users, that violates their mental model of what a “tool” is supposed to do. It’s not resistance to change—it’s cognitive dissonance.
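The contrast can be sketched in a few lines of Python. A deterministic function returns the same output for the same input every time, while a temperature-based sampler (a simplified, illustrative stand-in for how generative models pick the next token; the scores and prompt here are hypothetical) can return different outputs for identical inputs:

```python
import math
import random

def deterministic_tool(x: float) -> float:
    """Classic software: same input, same output, every time."""
    return round(x * 1.08, 2)  # e.g., a fixed spreadsheet formula

def sample_with_temperature(scores: dict[str, float], temperature: float) -> str:
    """Simplified stand-in for generative token sampling.
    Higher temperature flattens the distribution, so identical
    inputs can legitimately produce different outputs."""
    weights = [math.exp(s / temperature) for s in scores.values()]
    return random.choices(list(scores.keys()), weights=weights, k=1)[0]

# Hypothetical next-word scores for the prompt "The quarterly report is"
scores = {"ready": 2.0, "late": 1.5, "confidential": 1.0}

print(deterministic_tool(100.0))             # always 108.0
print(sample_with_temperature(scores, 1.0))  # may differ run to run
print(sample_with_temperature(scores, 1.0))
```

The point of the sketch is not the math but the mental model: the first function rewards repetition and mastery; the second makes repetition an unreliable teacher.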

Neutrality

Earlier technologies didn’t talk back, mimic tone, or reshape our words. GenAI does. It collaborates, critiques, even empathizes. Neutrality is gone, and that means governance has to expand beyond data privacy into psychological boundaries. The relationship between human and tool has become conversational—and that changes everything.

Fluency and Currency

Older tools showed their age. In a web search, you could see a date and judge how fresh or stale a source was. In GenAI summaries, that temporal context is often lost. The system sounds current and confident even when it’s recalling something outdated. This “false fluency” can create misplaced trust, where tone overrides truth.

Consistency Between Home and Work

Most previous tools behaved the same everywhere. The typewriter you used for homework worked the same as the one in your parents’ office. The same went for calculators, box cutters, and browsers. But GenAI differs: workplace systems are constrained by governance, while personal systems are tuned for creativity and speed. Using both can feel like handling two box cutters that look identical but cut in opposite ways. It’s not just confusing—it’s unsafe if you don’t know which one you’re holding.

We won’t fully unpack this distinction here (we will in another article), but the key idea is that users now live in two AI realities—one personal and one professional—and must navigate the gap carefully.

So when leaders say “just push adoption,” they miss the point. Predictability and neutrality are gone. We’re asking people to trust a partner, not just learn a tool.

---

Admitting the Concern Is the First Step

I’m not anti-GenAI. I believe it offers tremendous value—when governed transparently and integrated with human judgment. But until that governance is explained (and ideally proven), how can employees feel safe? They weren’t hired to be prompt engineers; they were hired to do their jobs. Suddenly, that path runs through an unpredictable system.

Fortunately, many organizations offer Emotional Well-being resources, such as counseling services, mindfulness programs, or empathy training cohorts. This is a good time to remind everyone that these resources are available and worth using when navigating AI-driven change. However, it’s essential to emphasize that at no time should anyone suggest or rely on AI systems for therapy or mental health support—those conversations belong with trained professionals and confidential human resources.

---

Leadership and Adoption Gaps

Executives and line employees often have very different use cases for GenAI. The C‑suite and their support staff rely on it for analysis, strategy, and communication efficiency, while line workers interact with it for task completion, documentation, or support. These differences can explain why leadership sometimes seems to sprint ahead—it’s not always arrogance, but distinct use cases and incentives.

Still, this creates a gap. When leadership moves faster than the workforce, they risk confusing motion with progress. Without shared understanding and pacing, adoption becomes fragmented.

It’s also common for employees to receive new AI tools without a clear explanation of their purpose or value. If the connection to day‑to‑day work isn’t apparent, resistance naturally follows.

To close this gap, organizations must bridge these layers—aligning incentives, training styles, and use‑case clarity—so everyone moves together.

---

Management Readiness: The Missing Training Layer

Many leaders have not been trained for this kind of change. They know how to manage a system rollout—not sociotechnical volatility.

Traditional change-management playbooks assume resistance is temporary and rational. But with GenAI, resistance can be ethical, existential, or identity-based. Managers need new skills:

  • How to tell when fear stems from misinformation and when it signals a genuine boundary violation.
  • How to discuss hallucination risk without undermining confidence.
  • How to balance experimentation with psychological safety.

Before we push employees to adapt to GenAI, we may need to train managers to lead through ambiguity. They’re being asked to navigate emotional terrain with old maps.

---

The Human Learning Dimension

In the rush to roll out GenAI, some companies have forgotten that people learn differently. There's no one-size-fits-all training strategy. Some employees need step-by-step visual demonstrations; others learn through experimentation or discussion. When training assumes everyone learns the same way, it leaves people behind and slows adoption.

That variation also ties to equity, trust, and bias—factors often overlooked in technical deployments. Differences in background, language, and learning style shape how comfortable someone feels engaging with AI tools. Designing multiple paths to fluency isn’t an indulgence—it’s an adoption accelerator.

---

The Sociotechnical & Ethical Dimension

Not every objection is technical. Some employees are opting out for ethical reasons: environmental impact, bias, misinformation, or data exploitation. That’s not anti-tech sentiment—that’s values-based dissent. Ignoring it only deepens mistrust.

Companies should acknowledge these objections the same way they would sustainability or DEI concerns—with transparency, offsets, and opportunities to participate in policy design. Ethical participation builds trust.

---

Measuring How It Felt to Users

Adoption metrics tell us if people are using a tool. But they don’t tell us how it felt to use it. The human side can be measured too:

Dimension       | Possible Measurement                                                    | Why It Matters
Confidence      | "After using this AI tool, how confident are you in the result?" (1–5) | Tracks trust and perceived reliability
Cognitive Load  | "Did this tool make your task easier or harder to think about?"        | Reveals design and clarity issues
Identity Impact | "Did you feel replaced, assisted, or amplified?"                       | Captures perceived agency
Emotional Tone  | Quick check-ins (emoji scale) post-interaction                         | Flags emotional fatigue early
Ethical Comfort | "Did using this tool align with your values?"                          | Detects friction with ethics or sustainability

These don’t need to be full HR surveys—they can be light-touch feedback loops built into pilots or retrospectives. The point is to pair adoption data with emotional data.
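As a sketch of what such a light-touch loop could look like in practice (the field names, scale, and threshold below are all hypothetical, not a prescribed instrument), a pilot team might collect short pulse responses after AI-assisted tasks and flag any dimension that trends negative:

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class PulseResponse:
    """One light-touch check-in after an AI-assisted task.
    All scores use a 1-5 scale; field names are illustrative."""
    confidence: int       # trust in the result
    cognitive_load: int   # 5 = the tool made the task much easier
    ethical_comfort: int  # alignment with personal values

def flag_dimensions(responses: list[PulseResponse], threshold: float = 3.0) -> list[str]:
    """Return dimensions whose average score falls below the threshold,
    so the team can follow up with real conversations, not dashboards."""
    flagged = []
    for dim in ("confidence", "cognitive_load", "ethical_comfort"):
        if mean(getattr(r, dim) for r in responses) < threshold:
            flagged.append(dim)
    return flagged

pilot = [
    PulseResponse(confidence=4, cognitive_load=4, ethical_comfort=2),
    PulseResponse(confidence=3, cognitive_load=4, ethical_comfort=2),
]
print(flag_dimensions(pilot))  # → ['ethical_comfort']
```

The design choice that matters here is the output: not a score to report upward, but a prompt for a human conversation about the flagged dimension.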

---

Honest Conversations as Risk Mitigation

The “push everyone to use GenAI” mindset may have good intentions, but it risks ignoring people’s humanity. Our brains, our emotions, and our sense of authorship are all in play.

If we begin by acknowledging the concerns—yes, people are scared; yes, there have been horror stories; yes, they deserve more than “it’s just another tool”—then we can build rollouts that are mindful, transparent, and safe.

Rollouts with guardrails. Rollouts that respect identity. Rollouts where well-being is treated as equal to productivity.

We don’t just need to make GenAI work. We need to make ourselves work with GenAI, not despite it.

---

Voices We Are Listening To

We’re not speaking for anyone here. Instead, we’re listening—closely—to a growing set of thoughtful voices who are helping shape this conversation:

  • Researchers and Practitioners exploring safe AI adoption and human–AI collaboration.
  • Ethicists and Psychologists studying trust, burnout, and emotional safety.
  • Workplace Leaders experimenting with governance and transparent rollout frameworks.
  • Employees across industries who are sharing lived experiences and cautionary tales.

Each offers a piece of the larger picture: what it means to use technology that learns with us. The goal isn’t to replace human wisdom—it’s to make sure we keep hearing it.

---

Stay curious. Stay kind. Stay human.