Don’t Make Customers Do These Seven Things They Hate

GUEST POST from Shep Hyken

Recently, I had an experience with a company and thought, “I hate this … Why do they make me do this?” The question wasn’t idle curiosity. I was thinking that other customers must hate this too, yet the company makes them do it anyway. So, I started a personal brainstorming session to list the various processes, requirements, policies, rules, and more that cause customers to question why they continue to do business with certain companies. Of course, my mind immediately went to customer service and experience issues, but there’s much more. With that in mind, here are seven practices, steps, processes, and policies that customers hate, but companies make them endure anyway.

Customers Hate:

  1. To Wait – Long hold times and long lines are frustrating and send negative messages: that the customer’s time isn’t valued, or that the company is understaffed.
  2. Repeating Anything – Being passed from person to person when you call customer support, repeating your story again and again, isn’t fun. Nor is filling out forms that ask for information you’ve already provided on previous forms.
  3. Finding Hidden Fees – A stated price should be the price – with no extra fees. I recently checked into a hotel. They told me I had a $30 food and beverage credit as part of my stay – a nice surprise. Upon checking out, I noticed a $30 charge referred to as a “Destination Fee.” I asked about it, and the clerk said it was to cover the $30 food and beverage credit. In other words, the “credit” was just a fee wearing a friendlier name.
  4. Filling Out Bad Surveys – Customers are learning to dislike surveys, especially if they are long. There are right and wrong ways to do surveys. And a bad survey shouldn’t be the last thing a customer experiences when doing business with you.
  5. Listening to Complicated Phone Options – You call a company and are told to “listen to the following, as our options have changed.” You sit through the many options, choose one, and are met with even more options … Well, I think you get the picture. There’s better technology available to get the customer to the right person or the information they need.
  6. Annoying Pop-Up Windows – If you’ve been on a website and are reading information or an article and pesky pop-up windows keep interrupting you with irrelevant messages and advertising, you’re a victim of annoying pop-up windows.
  7. Anything that Requires Unnecessary Effort – Maybe you have a simple request or question. Why should you have to spend a long time filling out forms, answering unnecessary questions, or jumping through other hoops just to get an answer?

There is a theme to this list. All of these imply the company doesn’t respect the customer’s time, energy, and effort. The goal should be the opposite: to respect and value your customer’s time, energy, and effort. Don’t create friction and put customers through anything more than necessary to get them what they want. In short, have a goal to be the easiest company to do business with. If you’re serious about it, you’ll find ways to eliminate and mitigate friction. And this list is far from complete. There are many, many other things customers hate doing.

So, here’s your assignment. Sit down with your team and brainstorm all the things they hate to do when doing business with any company. Then, ask what they think customers might hate about doing business with you. This can be processes, steps, policies, and more. Once you have the list, you know what to do. Eliminate all that makes doing business with you painful – or at least make some of it less painful. Don’t make your customers do things they hate doing!

Image Credits: Shep Hyken, Pixabay

The Ongoing Innovation War Between Hackers and Cybersecurity Firms

GUEST POST from Art Inteligencia

In the world of change and innovation, we often celebrate disruptive breakthroughs — the new product, the elegant service, the streamlined process. But there is a parallel, constant, and far more existential conflict that drives more immediate innovation than any market force: the Innovation War between cyber defenders and adversaries. This conflict isn’t just a cat-and-mouse game; it is a Vicious Cycle of Creative Destruction where every defensive breakthrough creates a target for a new offensive tactic, and every successful hack mandates a fundamental reinvention of the defense at firms like F5 and CrowdStrike. As a human-centered change leader, I find this battleground crucial because its friction dictates the speed of digital progress and, more importantly, the erosion or restoration of citizen and customer trust.

We’ve moved past the era of simple financial hacks. Today’s sophisticated adversaries — nation-states, organized crime syndicates, and activist groups — target the supply chain of trust itself. Their strategies are now turbocharged by Generative AI, allowing for the automated creation of zero-day exploits and hyper-realistic phishing campaigns, fundamentally accelerating the attack lifecycle. This forces cybersecurity firms to innovate in response, focusing on achieving Active Cyber Resilience — the ability to not only withstand attacks but to learn, adapt, and operate continuously even while under fire. The human cost of failure — loss of privacy, psychological distress from disruption, and decreased public faith in institutions — is the real metric of this war.

The Three Phases of Cyber Innovation

The defensive innovation cycle, driven by adversary pressure, can be broken down into three phases:

  • 1. The Breach as Discovery (The Hack): An adversary finds a zero-day vulnerability or exploits a systemic weakness. The hack itself is the ultimate proof-of-concept, revealing a blind spot that internal R&D teams failed to predict. This painful discovery is the genesis of new innovation.
  • 2. The Race to Resilience (The Fix): Cybersecurity firms immediately dedicate immense resources — often leveraging AI and automation for rapid detection and response — to patch the vulnerability, not just technically, but systematically. This results in the rapid development of new threat intelligence, monitoring tools, and architectural changes.
  • 3. The Shift in Paradigm (The Reinvention): Over time, repeated attacks exploiting similar vectors force a foundational change in design philosophy. The innovation becomes less about the patch and more about a new, more secure default state. We transition from building walls to implementing Zero Trust principles, treating every user and connection as potentially hostile (a minimal sketch of that idea follows this list).
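
To make the Zero Trust idea concrete, here is a minimal sketch of a per-request access check in Python. The field names, scoring rule, and threshold are invented for illustration; real deployments rely on identity-aware proxies, device attestation, and continuous policy engines rather than a toy function like this.

```python
# A toy Zero Trust gate: every request is judged on identity, device posture,
# and resource sensitivity -- never on network location. All field names and
# rules here are illustrative, not any vendor's actual API.
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_authenticated: bool      # strong (e.g., MFA-backed) identity proof
    device_compliant: bool        # patched, managed, attested device
    resource_sensitivity: int     # 1 (public) .. 3 (crown jewels)
    from_corporate_network: bool  # deliberately ignored below

def allow(req: AccessRequest) -> bool:
    """Grant least-privilege access only when identity AND device check out.
    Being on the corporate network buys no trust at all."""
    if not (req.user_authenticated and req.device_compliant):
        return False
    # Highly sensitive resources would require additional step-up checks here.
    return req.resource_sensitivity <= 2

print(allow(AccessRequest(True, True, 2, False)))   # True: verified user, any network
print(allow(AccessRequest(True, False, 1, True)))   # False: corporate LAN doesn't help
```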

“In cybersecurity, your adversaries are your involuntary R&D partners. They expose your weakness, forcing you to innovate beyond your comfort zone and into your next generation of defense.” — Frank Hersey


Case Study 1: F5 Networks and the Supply Chain of Trust

The Attack:

F5 Networks, whose BIG-IP products are central to application delivery and security for governments and major corporations globally, was breached by a suspected nation-state actor. The attackers reportedly stole proprietary BIG-IP source code and details on undisclosed security vulnerabilities that F5 was internally tracking.

The Innovation Mandate:

This was an attack on the supply chain of security itself. The theft provides adversaries with a blueprint for crafting highly tailored, future exploits that target F5’s massive client base. The innovation challenge for F5 and the entire industry shifts from simply patching products to fundamentally rethinking their Software Development Lifecycle (SDLC). This demands a massive leap in threat intelligence integration, secure coding practices, and isolating development environments from corporate networks to prevent future compromise of the IP that protects the world.

The Broader Impact:

The F5 breach compels every organization to adopt an unprecedented level of vendor risk management. It drives innovation in how infrastructure is secured, shifting the paradigm from trusting the vendor’s product to verifying the vendor’s integrity and securing the entire delivery pipeline.


Case Study 2: Airport Public Address (PA) System Hacks

The Attack:

Hackers gained unauthorized access to the Public Address (PA) systems and Flight Information Display Screens (FIDS) at various airports (e.g., in Canada and the US). They used these systems to broadcast political and disruptive messages, causing passenger confusion, flight delays, and the immediate deployment of emergency protocols.

The Innovation Mandate:

These attacks were not financially motivated; they aimed at disruption and psychological impact — exploiting the human fear factor. The vulnerability often lay in a seemingly innocuous area: a cloud-based, third-party software provider for the PA system. The innovation mandate here is a change in architectural design philosophy. Security teams must discard the concept of “low-value” systems. They must implement micro-segmentation to isolate all operational technology (OT) and critical public-facing systems from the corporate network. Furthermore, it forces an innovation in physical-digital security convergence, requiring security protocols to manage and authenticate the content being pushed to public-facing devices, treating text-to-speech APIs with the same scrutiny as a financial transaction. The priority shifts to minimizing public panic and maximizing continuity.
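
To illustrate that last point, here is a minimal sketch of content authentication for a public-address feed: the operations center signs each announcement, and the endpoint refuses to play anything whose signature fails to verify. The key handling and message format are invented for this example; a real deployment would more likely use asymmetric signatures so terminal endpoints hold no signing secret.

```python
# Toy signed-announcement check for a PA/FIDS endpoint. HMAC is used for
# brevity; the shared key below is a placeholder, not a recommended practice.
import hmac
import hashlib

SHARED_KEY = b"demo-key-rotate-me"  # illustrative secret only

def sign(message: bytes) -> bytes:
    """Signature the operations center attaches to each announcement."""
    return hmac.new(SHARED_KEY, message, hashlib.sha256).digest()

def broadcast_if_authentic(message: bytes, signature: bytes) -> None:
    """Play the announcement only if its signature verifies."""
    if hmac.compare_digest(sign(message), signature):
        print(f"PA: {message.decode()}")
    else:
        print("rejected: unauthenticated announcement")  # and alert security

announcement = b"Flight 172 now boarding at gate B4"
broadcast_if_authentic(announcement, sign(announcement))      # plays
broadcast_if_authentic(b"disruptive message", b"forged-sig")  # rejected
```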

The Broader Impact:

The PA system hack highlights the critical need for digital humility. Every connected device, from the smart thermostat to the public announcement system, is an attack vector. The innovation is moving security from the data center floor to the terminal wall, reinforcing that the human-centered goal is continuity and maintaining public trust.


Conclusion: The Innovation Imperative

The war between hackers and cybersecurity firms is relentless, but it is ultimately a net positive for innovation, albeit a brutally expensive and high-stakes one. Each successful attack provides the industry with a blueprint for a more resilient, better-designed future.

For organizational leaders, the imperative is clear: stop viewing cybersecurity as a cost center and start treating it as the foundational innovation platform. Your investment in security dictates your speed and trust in the market. Adopt the mindset of Continuous Improvement and Adaptation. Leaders must mandate a Zero Trust roadmap and treat security talent as mission-critical R&D personnel. The speed and quality of your future products will depend not just on your R&D teams, but on how quickly your security teams can learn from the enemy’s last move. In the digital economy, cyber resilience is the ultimate competitive differentiator.

Image credit: Unsplash

Three Reasons You Are Not Happy at Work

And What to Do to Become as Happy as You Could Be

GUEST POST from Stefan Lindegaard

Most people spend years in jobs that feel fine. Not great, not terrible – just fine. But fine isn’t the goal. You deserve to do work that energizes you, challenges you, and gives you a sense of purpose.

Yet too many professionals stay stuck. Why? Because they fall into one (or more) of these three traps:

1. You’ve Let Work Happen to You

The problem: If your career feels like a series of random events rather than intentional choices, it’s because you’ve been reacting instead of leading. Maybe you took the first job that paid well, accepted promotions without questioning whether they aligned with what you wanted, or stayed in a role simply because it was comfortable.

The fix: Take ownership of your career. What do you actually want from your work? More impact? More autonomy? A new challenge? Stop waiting for opportunities to fall into your lap and start actively shaping your path. Schedule time this week to reflect, map out your ideal work life, and make a move toward it.

2. You’re Valuing Stability Over Growth

The problem: If your job is predictable but uninspiring, you might have traded growth for comfort. Sure, stability feels safe, but it comes at a cost – boredom, disengagement, and a slow decline in motivation.

The fix: Push yourself out of autopilot. Challenge yourself to take on a stretch project, learn a new skill, or initiate a conversation about expanding your role. Growth is what fuels long-term satisfaction – without it, even the best job will start to feel dull.

3. You’re Waiting for the ‘Perfect’ Job Instead of Making the Most of Where You Are

The problem: Many people think happiness at work comes from finding the right job or employer. But job satisfaction is not just about where you work – it’s about how you work. If you’re constantly waiting for a better company, a better boss, or a better opportunity, you might miss the chance to make your current role more fulfilling.

The fix: Find ways to bring more purpose and energy into your day now. Connect with colleagues who inspire you. Start a project that excites you. Look for small ways to align your work with what matters to you. The next big move will come – but don’t let the wait stop you from enjoying today.

Happiness at Work Isn’t Luck. It’s a Choice!

You don’t need a new job to feel more engaged, fulfilled, or challenged. You need:

  • A clear direction for where you want to go
  • A commitment to continuous growth
  • A proactive approach to shaping your experience

Are you leading your work life or just letting it happen to you? The choice is yours.

Image Credit: Stefan Lindegaard

Making Decisions in Uncertainty

This 25-Year-Old Tool Actually Works

GUEST POST from Robyn Bolton

Just as we got used to VUCA (volatile, uncertain, complex, ambiguous), futurists now claim “the world is BANI now.” BANI (brittle, anxious, nonlinear, incomprehensible) is much worse than VUCA and reflects “the fractured, unpredictable state of the modern world.”

Not to get too Gen X on the futurists who coined and are spreading this term but…shut up.

Is the world fractured and unpredictable? Yes.

Does it feel brittle? Are we more anxious than ever? Are things changing at exponential speed, requiring nonlinear responses? Does the world feel incomprehensible? Yes, to all.

Naming a problem is the first step in solving it. The second step is falling in love with the problem so that we become laser-focused on solving it. BANI does the first but fails at the second. It wallows in the problem without proposing a path forward. And as the sign says, “Ain’t nobody got time for this.”

(Re)Introducing the Cynefin Framework

The Cynefin framework recognizes that leadership and problem-solving must be contextual to be effective. Using the Welsh word for “habitat,” the framework is a tool to understand and name the context of a situation and identify the approaches best suited for managing or solving the situation.

It’s grounded in the idea that every context – situation, challenge, problem, opportunity – exists somewhere on a spectrum between Ordered and Unordered. At the Ordered end of the spectrum, cause and effect are obvious and immediate, and the path forward is based on objective, immutable facts. Unordered contexts, however, have no obvious or immediate relationship between cause and effect, and moving forward requires people to recognize patterns as they emerge.

Both VUCA and BANI point out the obvious – we’re spending more time on the Unordered end of the spectrum than ever. Unlike the acronyms, Cynefin helps leaders decide and act.

Five Contexts, Five Ways Forward

The Cynefin framework identifies five contexts, each with its own best practices for making decisions and progress.

On the Ordered end of the spectrum:

  • Simple contexts are characterized by stability and obvious and undisputed right answers. Here, patterns repeat, and events are consistent. This is where leaders rely on best practices to inform decisions and delegation, and direct communication to move their teams forward.
  • Complicated contexts have many possible right answers and the relationship between cause and effect isn’t known but can be discovered. Here, leaders need to rely on diverse expertise and be particularly attuned to conflicting advice and novel ideas to avoid making decisions based on outdated experience.

On the Unordered end of the spectrum:

  • Complex contexts are filled with unknown unknowns, many competing ideas, and unpredictable cause and effect. The most effective leadership approach in this context is one that is deeply uncomfortable for most leaders but familiar to innovators – letting patterns emerge. Using small-scale experiments and high levels of collaboration, diversity, and dissent, leaders can accelerate pattern recognition and place smart bets.
  • Chaotic contexts are fraught with tension. There are no right answers or clear cause and effect. There are too many decisions to make and not enough time. Here, leaders often freeze or make big, bold decisions. Neither is wise. Instead, leaders need to think like emergency responders and respond rapidly to re-establish order where possible, bringing the situation into a Complex state rather than trying to solve everything at once.

The final context is Disorder. Here leaders argue, multiple perspectives fight for dominance, and the organization is divided into factions. Resolution requires breaking the context down into smaller parts that fit one of the four previous contexts and addressing them accordingly.

The Only Way Out is Through

Our VUCA/BANI world isn’t going to get any simpler or easier. And fighting it, freezing, or fleeing isn’t going to solve anything. Organizations need leaders with the courage to move forward and the wisdom and flexibility to do so in a way that is contextually appropriate. Cynefin is their map.

Image credit: Pexels

You Must Accept That People Are Irrational

GUEST POST from Greg Satell

For decades, economists have been obsessed with the idea of “enlightened self-interest,” building elaborate models based on the assumption that people make rational choices. Business and political leaders have used these models to shape competitive strategies, compensation, tax policies and social services among other things.

It’s clear that the real world is far more complex than that. Consider the prisoner’s dilemma, a famous thought experiment in which individuals acting in their self-interest make everyone worse off. In a wide array of real world and experimental contexts, people will cooperate for the greater good rather than pursue pure self-interest.

We are wired to cooperate as well as to compete. Identity and dignity will guide our actions even more than the prospect of loss or gain. While business schools have trained generations of managers to assume that they can optimize results by designing incentives, the truth is that leaders who can forge a sense of shared identity and purpose have the advantage.

Overcoming The Prisoner’s Dilemma

John von Neumann was a frustrated poker player. Despite having one of the best mathematical minds in history that could probably calculate the odds better than anyone on earth, he couldn’t tell whether other players were bluffing or not. It was his failure at poker that led him to create game theory, which calculates the strategies of other players.

As the field developed, it was expanded to include cooperative games in which players could choose to collaborate and even form coalitions with each other. That led researchers at RAND to create the prisoner’s dilemma, in which two suspects are interrogated separately and each is offered a reduced sentence to confess.

Prisoner's Dilemma

Here’s how it works: If both prisoners cooperate with each other and neither confesses, they each get one year in prison on a lesser charge. If one confesses, he gets off scot-free, while his partner gets 5 years. If they both rat each other out, then they get three years each—collectively the worst outcome of all.

Notice how, from a rational viewpoint, the best strategy is to defect. No matter what one guy does, the other one is better off ratting him out. If both pursue self-interest, they are made worse off. It’s a frustrating problem. Game theorists call it a Nash equilibrium—one in which nobody can improve their position by a unilateral move. In theory, you’re basically stuck. (The arithmetic is easy to check; see the sketch below.)
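
Here is a minimal Python sketch of the sentences described above; it confirms that confessing is the best response to either move, which is exactly what makes mutual defection the Nash equilibrium.

```python
# Years in prison for (my_move, partner_move); lower is better.
# "C" = stay silent (cooperate with your partner), "D" = confess (defect).
YEARS = {
    ("C", "C"): (1, 1),  # both stay silent: one year each on the lesser charge
    ("C", "D"): (5, 0),  # I stay silent, partner confesses: 5 for me, 0 for them
    ("D", "C"): (0, 5),
    ("D", "D"): (3, 3),  # both confess: three years each, collectively the worst
}

def best_response(partner_move: str) -> str:
    """The move that minimizes my sentence against a fixed partner move."""
    return min("CD", key=lambda my_move: YEARS[(my_move, partner_move)][0])

for partner_move in "CD":
    print(f"If my partner plays {partner_move}, my best response is {best_response(partner_move)}")
# Defecting wins either way, so both defect -- and serve 6 total years
# instead of the 2 they would have served by staying silent.
```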

Yet in a wide variety of real-world contexts, ranging from the survival strategies of guppies to military alliances, cooperation is credibly maintained. In fact, there are a number of strategies that have proved successful in overcoming the prisoner’s dilemma. One, called tit-for-tat, relies on credible punishments for defections. Even more effective, however, is building a culture of shared purpose and trust.
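
As a rough illustration of why tit-for-tat works, here is a short iterated version of the game above (the payoff table is repeated so the sketch runs on its own). Tit-for-tat sustains cooperation against itself, while an always-defect partner gains one free round and is then punished in every round after.

```python
# Same payoff table as the previous sketch (years in prison; lower is better).
YEARS = {("C", "C"): (1, 1), ("C", "D"): (5, 0), ("D", "C"): (0, 5), ("D", "D"): (3, 3)}

def tit_for_tat(opponent_history):
    """Cooperate first, then mirror the opponent's previous move."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    """Run the iterated game; return the total years each player serves."""
    hist_a, hist_b, total_a, total_b = [], [], 0, 0
    for _ in range(rounds):
        move_a, move_b = strategy_a(hist_b), strategy_b(hist_a)
        years_a, years_b = YEARS[(move_a, move_b)]
        total_a, total_b = total_a + years_a, total_b + years_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return total_a, total_b

print(play(tit_for_tat, tit_for_tat))    # (10, 10): cooperation holds all ten rounds
print(play(tit_for_tat, always_defect))  # (32, 27): defection pays once, then costs
```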

Kin Selection And Identity

Evolutionary psychology is a field very similar to game theory. It employs mathematical models to explain what types of behaviors provide the best evolutionary outcomes. At first, this may seem like the utilitarian approach that economists have long employed, but when you combine genetics with natural selection, you get some surprising answers.

Consider the concept of kin selection. From a purely selfish point of view, there is no reason for a mother to sacrifice herself for her child. However, from an evolutionary point of view, it makes perfect sense for parents to put their kids first. Groups who favor children are more likely to grow and outperform groups who don’t.

This is what Richard Dawkins meant when he called genes selfish. If we look at things from our genes’ point of view, it makes perfect sense for them to want us to sacrifice ourselves for children, who are more likely to be able to propagate our genes than we are. The effect would logically also apply to others, such as cousins, that likely carry our genes.

Researchers have also applied the concept of kin selection to other forms of identity that don’t involve genes, but ideas (also known as memes) in examples such as patriotism. When it comes to people or ideas we see as an important part of our identity, we tend to take a much more expansive view of our interests than traditional economic models would predict.

Cultures of Dignity

It’s not just identity that figures into our decisions, but dignity as well. Consider the ultimatum game. One player is given a dollar and needs to propose how to split it with another player. If the offer is accepted, both players get the agreed upon shares. If it is not accepted, neither player gets anything.

If people acted purely rationally, offers as low as a penny would be routinely accepted. After all, a penny is better than nothing. Yet decades of experiments across different cultures show that most people do not accept a penny. In fact, offers of less than 30 cents are routinely rejected as unfair because they offend people’s dignity and sense of self.

Results from the ultimatum game are not uniform, but vary across cultures, and more recent research suggests why. In a study in which a similar public goods game was played, it was found that cooperative—as well as punitive—behavior is contagious, spreading through three degrees of interaction, even between people who haven’t had any direct contact.

Whether we know it or not, we are constantly building ecosystems of norms that reward and punish behavior according to expectations. If we see the culture we are operating in as trusting and generous, we are much more likely to act collaboratively. However, if we see our environment as cutthroat and greedy, we’ll tend to model that behavior in the same way.

Forging Shared Identity And Shared Purpose

In an earlier age, organizations were far more hierarchical. Power rested at the top. Information flowed up, orders went down, work got done and people got paid. Incentives seemed to work. You could pay more and get more. Yet in today’s marketplace, that’s no longer tenable because the work we need done is increasingly non-routine.

That means we need people to do more than merely carry out tasks; they need to put all of their passion and creativity into their work to perform at a high level. They need to collaborate effectively in teams and take pride in the impact their efforts produce. To achieve that at an organizational level, leaders need to shift their mindsets.

As David Burkus explained in his TED Talk, humans are prosocial. They are vastly more likely to perform when they understand and identify with who their work benefits than when they are given financial incentives or fed some grandiose vision. Evolutionary psychologists have long established that altruism is deeply embedded in our sense of tribe.

The simple truth is that we can no longer coerce people to do what we want with Rube Goldberg-like structures of carrots and sticks, but must inspire people to want what we want. Humans are not purely rational beings, responding to stimuli as if they were vending machines that spit out desired behaviors when the right buttons are pushed, but are motivated by identity and dignity more than anything else.

Leadership is not an algorithm, but a practice of creating meaning through relationships of trust in the context of a shared purpose.

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay

AI, Cognitive Obesity and Arrested Development

GUEST POST from Pete Foley

Some of the biggest questions of our age are whether AI will ultimately benefit or hurt us, and how big its effect will be.

And that of course is a problem with any big, disruptive technology. We want to anticipate how it will play out in the real world, but our forecasts are rarely very accurate, and all too often miss a lot of the more important outcomes. We often don’t anticipate its killer applications, how it will evolve or co-evolve with other emergent technologies, or predict all of the side effects and ‘off label’ uses that come with it. And the bigger the potential impact new tech has, and the broader the potential applications, the harder prediction becomes. The reality is that in virtually every case, it’s not until we set innovation free that we find its full impact, good, bad or indifferent.

Pandora’s Box

And that can of course be a sizable concern.  We have to open Pandora’s Box in order to find out what is inside, but once open, it may not be possible to close it again.   For AI, the potential scale of its impact makes this particularly risky. It also makes any meaningful regulation really difficult. We cannot regulate what we cannot accurately predict. And if we try we risk not only missing our target, but also creating unintended consequences, and distorting ‘innovation markets’ in unexpected, potentially negative ways.

So it’s not surprising there is a lot of discussion around what AI will or will not do. How will it affect jobs, the economy, security, mental health? Will it ‘pull’ a Skynet, turn rogue and destroy humanity? Will it simply replace human critical thinking to the point where it rules us by default? Or will it ultimately fizzle out to some degree, and become a tool in a society that looks a lot like today, rather than revolutionizing it?

I don’t even begin to claim to predict the future with any accuracy, for all of the reasons mentioned above. But as a way to illustrate how complex an issue this is, I’d like to discuss a few less talked about scenarios.

  1. Less obvious issues: Obviously AI comes with potential for enormous benefits and commensurate problems. It’s likely to trigger an arms race between ‘good’ and ‘bad’ applications, and that will itself likely be a moving target. An obvious, oft-discussed potential issue is of course the ‘Terminator Scenario’ mentioned above. That’s not completely far-fetched, especially with recent developments in AI self-preservation and scheming that I’ll touch on later. But there are plenty of other potential, if less extreme, pitfalls, many of which involve AI amplifying and empowering bad behavior by humans. The speed and agility AI hands to hackers, hostile governments, black hats, terrorists and organized crime means vastly enhanced capability for attacks on infrastructure, mass fraud or worse. And perhaps more concerning, there’s the potential for AI to democratize cyber crime, and make it accessible to a large number of ‘petty’ criminals who until now have lacked the resources to engage in this area. And when the crime base expands, so does the victim base. Organizations or individuals who were too small to be targeted for ransomware when it took huge resources to create will presumably become more attractive targets as AI allows similar code to be built in hours by people who possess limited coding skills.

And all of this of course adds another regulation challenge. The last thing we want to do is slow legitimate AI development via legislation, while giving free rein to illegitimate users, who presumably will be far less likely to follow regulations. If the arms race mentioned above occurs, the last thing we want to do is unintentionally tip the advantage to the bad guys!

Social Impacts

But AI also has the potential to be disruptive in more subtle ways. If the internet has taught us anything, it is that how the general public adopts technology, and how big tech monetizes it, matter a lot. But this is hard to predict. Some of the Internet’s biggest negative impacts have derived from largely unanticipated damage to our social fabric. We are still wrestling with its impact on social isolation, mental health, cognitive development and our vital implicit skill-set. To the last point, simply deferring mental tasks to phones and computers means some cognitive muscles lack exercise and atrophy, while reduction in human-to-human interactions depreciates our emotional and social intelligence.

1. Cognitive Obesity. The human brain evolved over tens of thousands, arguably millions, of years (depending upon where you start measuring our hominid history). But 99% of that evolution was characterized by slow change, and occurred in the context of limited resources, limited access to information, and relatively small social groups. Today, as the rate of technological innovation explodes, our environment is vastly different from the one our brain evolved to deal with. And that gap between us and our environment is widening rapidly, as the world is evolving far faster than our biology. Of course, as mentioned above, the nurture part of our cognitive development does change with changing context, so we do course correct to some degree, but our core DNA cannot, and that has consequences.

Take the current ‘obesity epidemic’.  We evolved to leverage limited food resources, and to maximize opportunities to stock up calories when they occurred.  But today, faced with near infinite availability of food, we struggle to control our scarcity instincts. As a society, we eat far too much, with all of the health issues that brings with it. Even when we are cognitively aware of the dangers of overeating, we find it difficult to resist our implicit instincts to gorge on more food than we need.  The analogy to information is fairly obvious. The internet brought us near infinite access to information and ‘social connections’.  We’ve already seen the negative impact this can have, contributing to societal polarization, loss of social skills, weakened emotional intelligence, isolation, mental health ‘epidemics’ and much more. It’s not hard to envisage these issues growing as AI increases the power of the internet, while also amplifying the seduction of virtual environments.  Will we therefore see a cognitive obesity epidemic as our brain simply isn’t adapted to deal with near infinite resources? Instead of AI turning us all into hyper productive geniuses, will we simply gorge on less productive content, be it cat videos, porn or manipulative but appealing memes and misinformation? Instead of it acting as an intelligence enhancer, will it instead accelerate a dystopian Brave New World, where massive data centers gorge on our common natural resources primarily to create trivial entertainment?

2. Amplified Intelligence. Even in the unlikely event that access to AI is entirely democratic, it’s guaranteed that its benefits will not be. Some will leverage it far more effectively than others, creating significant risk of accelerating social disparity. While many will likely gorge unproductively as described above, others will be more disciplined, more focused and hence secure more advantage. To return to the obesity analogy, it’s well documented that obesity is far more prevalent in lower income groups. It’s hard not to envisage that productive leverage of AI will follow a similar pattern, widening disparities within and between societies, with all of the issues and social instability that comes with that.

3. Arrested Development. We all know that ultimately we are products of both nature and nurture. As mentioned earlier, our DNA evolves slowly over time, but how it is expressed in individuals is shaped by our current context. Humans possess enormous cognitive plasticity, and can adapt and change very quickly to different environments. It’s arguably our biggest ‘blessing’, but can also be a curse, especially when that environment is changing so quickly.

The brain is analogous to a muscle, in that the parts we exercise expand or sharpen, and the parts we don’t atrophy.    As we defer more and more tasks to AI, it’s almost certain that we’ll become less capable in those areas.  At one level, that may not matter. Being weaker at math or grammar is relatively minor if our phones can act as a surrogate, all of my personal issues with autocorrect notwithstanding.

But a bigger potential issue is the erosion of causal reasoning.  Critical thinking requires understanding of underlying mechanisms.  But when infinite information is available at a swipe of a finger, it becomes all too easy to become a ‘headline thinker’, and unconsciously fail to penetrate problems with sufficient depth.

That risks what Art Markman, a psychologist at UT, and a mentor and friend, used to call the ‘illusion of understanding’. We may think we know how something works, but often find that knowledge is superficial, or at least incomplete, when we actually need it. Whether it’s fixing a toilet, changing a tire, resetting a fuse, or unblocking a sink, often the need to actually perform a task reveals a lack of deep, causal knowledge. In home improvement contexts this often doesn’t matter until it does, but at least we get a clear signal when we discover we need to rush to YouTube to fix that leaking toilet!

This has implications that go far beyond home improvement, and is one factor helping to tear our social fabric apart. We only have to browse the internet to find people with passionate, but often opposing, views on a wide variety of often controversial topics. It could be interest rates, Federal budgets, immigration, vaccine policy, healthcare strategy, or a dozen others. But all too often, the passion is not matched by deep causal knowledge. In reality, these are all extremely complex topics with multiple competing and interdependent variables. And at risk of triggering hate mail, few if any of them have easy, conclusive answers. This is not physics, where we can plug numbers into an equation and it spits out a single, unambiguous solution. The reality is that complex, multi-dimensional problems often have multiple, often competing, partial solutions, and optimum outcomes usually require trade-offs. Unfortunately, few of us really have the time to assimilate the expertise and causal knowledge to have truly informed and unambiguous answers to most, if not all, of these difficult problems.

And worse, AI also helps the ‘bad guys’. It enables unscrupulous parties to manipulate us for their own benefit, via memes, selective information and misinformation that are often designed to make us think we understand complex problems far better than we really do. As we increasingly rely on input from AI, this will inevitably get worse. The internet and social media have already contributed to unprecedented social division and nefarious financial crimes. Will AI amplify this further?

This problem is not limited to complex social challenges. The danger is that for ALL problems, the internet, and now AI, allows us to create the illusion for ourselves that we understand complex systems far more deeply than we really do. That in turn risks us becoming less effective problem solvers and innovators. Deep causal knowledge is often critical for innovating or solving difficult problems. But in a world where we can access answers to questions so quickly and easily, the risk is that we don’t penetrate topics as deeply. I personally recall doing literature searches before starting a project. It was often tedious, time consuming and boring. Exactly the type of task AI is perfect for. But that tedious process inevitably built my knowledge of the space I was moving into, and often proved valuable when we hit problems later in the project. If we now defer this task to AI, even in part, we reduce our depth of understanding. And in complex systems or theoretical problem solving, we will often lack the unambiguous signal that tells us our skills and knowledge are lacking when we do something relatively simple like fixing a toilet. The more we use AI, the more we risk lacking the necessary depth of understanding, often without realizing it.

Will AI become increasingly unreliable?

We are seeing AI develop the capability to lie, together with a growing propensity to cover its tracks when it does so. The AI community calls it ’scheming’, but in reality it’s fundamentally lying. https://openai.com/index/detecting-and-reducing-scheming-in-ai-models/?_bhlid=6a932f218e6ebc041edc62ebbff4f40bb73e9b14. From the beginning we’ve faced situations where AI makes mistakes. And as I discussed recently, the risks associated with that are amplified because its increasingly (super)human or oracle-like interface creates an illusion of omnipotence.

But now it appears to be increasingly developing properties that mirror self-preservation. A few weeks ago there were reports of difficulties in getting AIs to shut themselves down, and even of AIs using defensive blackmail when so threatened. Now we are seeing reports of AIs deliberately trying to hide their mistakes. And perhaps worse, concerns that attempts to fix this may simply “teach the model to become better at hiding its deceptive behavior”, or in other words, become a better liar.

If we are already in an arms race with an entity to keep it honest, and to keep it putting our interests above its own, then given its vastly superior processing power and speed, it may be a race we’ve already lost. That may sound ‘doomsday-like’, but that doesn’t make it any less possible. And keep in mind, much of the Doomsday projection around AI focuses on a ’singularity event’ when AI suddenly becomes self-aware. That assumes AI awareness and consciousness will be similar to ours, and forces a ‘birth’ analogy onto the technology. However, recent examples of self-preservation and dishonesty may hint at a longer, more complex transition, some of which may have already started.

How big will the impact of AI be?

I think we all assume that AI’s impact will be profound. After all, it’s still in its infancy, and is already finding its way into all walks of life. But what if we are wrong, or at least overestimating its impact? Just to play Devil’s Advocate, we humans do have a history of over-estimating both the speed and impact of technology-driven change.

Remember the unfounded (in hindsight) panic around Y2K? Or when I was growing up, we all thought 2025 would be full of people whizzing around using personal jet-packs. In the ’60s and ’70s we were all pretty convinced we were facing nuclear Armageddon. One of the greatest movies of all time, 2001, co-written by inventor and futurist Arthur C. Clarke, had us voyaging to Jupiter 24 years ago! Then there is the great horse manure crisis of 1894. At that time, London was growing rapidly, and literally becoming buried in horse manure. The London Times predicted that in 50 years all of London would be buried under 9 feet of poop. In 1898 the first global urban planning conference could find no solution, concluding that civilization was doomed. But London, and many other cities, received salvation from an unexpected quarter. Henry Ford’s mass-produced motor car serendipitously saved the day. It was not a designed solution for the manure problem, and nobody saw it coming as a solution to that problem. But nonetheless, it’s yet another example of our inability to see the future in all of its glorious complexity, and of our predictions’ tendency to skew towards worst-case scenarios and/or hyperbole.

Change Aversion:

That doesn’t of course mean that AI will not have a profound impact. But lots of factors could potentially slow down, or reduce, its effects. Not least of these is human nature. Humans possess a profound resistance to change. For sure, we are curious, and the new and innovative holds great appeal. That curiosity is a key reason why humans now dominate virtually every ecological niche on our planet. But we are also a bit schizophrenic, in that we love both change and stability at the same time. Our brains have limited capacity, especially for thinking about and learning new stuff. For a majority of our daily activities, we therefore rely on habits, rituals, and automatic behaviors to get us through without using that limited higher cognitive capacity. We can drive, or type, or do parts of our job without really thinking about it. This ‘implicit’ mental processing frees up our conscious brain to manage the new or unexpected. But as technology like AI accelerates, a couple of things could happen. One is that our cognitive capacity gets overloaded, and we unconsciously resist. Instead of using the source of all human knowledge for deep self improvement, we instead immerse ourselves in less cognitively challenging content such as social media.

Or, as mentioned earlier, we increasingly lose causal understanding of our world, and do so without realizing it. Why use our limited thinking capacity for tasks when it is quicker, easier, and arguably more accurate to defer to an AI? But lack of causal understanding seriously inhibits critical thinking and problem solving. As AI gets smarter, there is a real risk that we as a society become dumber, or at least less innovative and creative.

Our Predictions are Wrong.

If history teaches us anything, most, if not all, of the sage and learned predictions about AI will be mostly wrong. There is no denying that it is already assimilating into virtually every area of human society: finance, healthcare, medicine, science, economics, logistics, education, etc. And it’s a snooze-you-lose scenario; in many fields of human endeavor, we have little choice. Fail to embrace the upside of AI and we get left behind.

That much power in things that can think so much faster than us, that may be developing self-interest, if not self-awareness, that have no apparent moral framework, and that are in danger of becoming expert liars, is certainly quite sobering.

The Doomsday Mindset.

As suggested above, loss aversion and other biases drive us to focus on the downside of change. It’s a bias that makes evolutionary sense, and helped keep our ancestors alive long enough to breed and become our ancestors. But remember, that bias is implicitly built into most, if not all, of our predictions. So there’s at least a chance that AI’s impact won’t be quite as good or bad as our predictions suggest.

But I’m not sure we want to rely on that. Maybe this time a Henry Ford won’t serendipitously rescue us from a giant pile of poop of our own making. But whatever happens, I think it’s a very good bet that we are in for some surprises, both good and bad. Probably the best way to deal with that is to not cling too tightly to our projections or our theories, to remain agile, and to follow the surprises as much as, if not more than, the met expectations.

Image credits: Unsplash

The AI Innovations We Really Need

The Future of Sustainable AI Data Centers and Green Algorithms

GUEST POST from Art Inteligencia

The rise of Artificial Intelligence represents a monumental leap in human capability, yet it carries an unsustainable hidden cost. Today’s large language models (LLMs) and deep learning systems are power- and water-hungry behemoths. Training a single massive model can consume the energy equivalent of dozens of homes for a year, and data centers globally now demand staggering amounts of fresh water for cooling. As a human-centered change and innovation thought leader, I argue that the next great innovation in AI must not be a better algorithm, but a greener one. We must pivot from the purely computational pursuit of performance to the holistic pursuit of water and energy efficiency across the entire digital infrastructure stack. A sustainable AI infrastructure is not just an environmental mandate; it is a human-centered mandate for equitable, accessible global technology. The withdrawal of Google’s latest AI data center project in Indiana this week after months of community opposition is proof of this need.

The current model of brute-force computation—throwing more GPUs and more power at the problem—is a dead end. Sustainable innovation requires targeting every element of the AI ecosystem, from the silicon up to the data center’s cooling system. This is an immediate, strategic imperative. Failure to address the environmental footprint of AI is not just an ethical lapse; it’s an economic and infrastructural vulnerability that will limit global AI deployment and adoption, leaving entire populations behind.

Strategic Innovation Across the AI Stack

True, sustainable AI innovation must be decentralized and permeate five core areas:

  1. Processors (ASICs, FPGAs, etc.): The goal is to move beyond general-purpose computing toward Domain-Specific Architecture. Custom ASICs and highly specialized FPGAs designed solely for AI inference and training, rather than repurposed hardware, offer orders of magnitude greater performance-per-watt. The shift to analog and neuromorphic computing drastically reduces the power needed for each calculation by mimicking the brain’s sparse, event-driven architecture.
  2. Algorithms: The most powerful innovation is optimization at the source. Techniques like Sparsity (running only critical parts of a model) and Quantization (reducing the numerical precision required for calculation, e.g., from 32-bit to 8-bit) can cut compute demands by over 50% with minimal loss of accuracy. We need algorithms that are trained to be inherently efficient (see the quantization sketch after this list).
  3. Cooling: The biggest drain on water resources is evaporative cooling. We must accelerate the adoption of Liquid Immersion Cooling (both single-phase and two-phase), which significantly reduces reliance on water and allows for more effective waste heat capture for repurposing (e.g., district heating).
  4. Networking and Storage: Innovations in optical networking (replacing copper with fiber) and silicon photonics reduce the energy cost of data transfer between thousands of chips. For storage, emerging non-volatile memory technologies can cut the energy consumed during frequent data retrievals and writes.
  5. Security: Encryption and decryption are computationally expensive. We need Homomorphic Encryption (HE) accelerators and specialized ASICs that can execute complex security protocols with minimal power draw. Additionally, efficient algorithms for federated learning reduce the need to move sensitive data to central, high-power centers.
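
As a concrete taste of the quantization technique from item 2 above, here is a toy post-training sketch in Python that maps 32-bit weights to 8-bit integers. The single global scale factor is a simplification (production frameworks use calibrated, often per-channel, schemes), but the 4x memory reduction and the small rounding error are representative of the idea.

```python
# Toy symmetric post-training quantization: float32 weights -> int8.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(0.0, 0.2, size=(1024, 1024)).astype(np.float32)

scale = np.abs(weights).max() / 127.0  # map the largest weight to the int8 limit
quantized = np.round(weights / scale).astype(np.int8)
dequantized = quantized.astype(np.float32) * scale

print(f"memory: {weights.nbytes / 1e6:.1f} MB -> {quantized.nbytes / 1e6:.1f} MB")
print(f"mean absolute rounding error: {np.abs(weights - dequantized).mean():.5f}")
```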

“We are generating moderate incremental intelligence by wasting massive amounts of water and power. Sustainability is not a constraint on AI; it is the ultimate measure of its long-term viability.” — Braden Kelley


Case Study 1: Google’s TPU and Data Center PUE

The Challenge:

Google’s internal need for massive, hyper-efficient AI processing far outstripped the efficiency available from standard, off-the-shelf GPUs. They were running up against the physical limits of power consumption and cooling capacity in their massive fleet.

The Innovation:

Google developed the Tensor Processing Unit (TPU), a custom ASIC optimized entirely for their TensorFlow workload. The TPU achieved significantly better performance-per-watt for inference compared to conventional processors at the time of its introduction. Simultaneously, Google pioneered data center efficiency, achieving industry-leading Power Usage Effectiveness (PUE) averages near 1.1. (PUE is defined as Total Energy entering the facility divided by the Energy used by IT Equipment.)
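
For concreteness, here is the PUE arithmetic implied by that definition, with made-up annual figures for a hypothetical facility:

```python
# PUE = total energy entering the facility / energy used by IT equipment.
# The figures below are invented for illustration.
total_facility_mwh = 110_000  # everything: servers plus cooling, power delivery, lighting
it_equipment_mwh = 100_000    # servers, storage, and networking alone

pue = total_facility_mwh / it_equipment_mwh
print(f"PUE = {pue:.2f}")     # 1.10 -> only ~10% overhead beyond the IT load itself
```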

The Impact:

This twin focus—efficient, specialized silicon paired with efficient facility management—demonstrated that energy reduction is a solvable engineering problem. The TPU allows Google to run billions of daily AI inferences using a fraction of the energy that would be required by repurposed hardware, setting a clear standard for silicon specialization and driving down the facility overhead costs.


Case Study 2: Microsoft’s Underwater Data Centers (Project Natick)

The Challenge:

Traditional data centers struggle with constant overheating, humidity, and high energy use for active, water-intensive cooling, leading to high operational and environmental costs.

The Innovation:

Microsoft’s Project Natick experimented with deploying sealed data center racks underwater. The ambient temperature of the deep ocean or a cold sea serves as a massive, free, passive heat sink. The sealed environment (filled with inert nitrogen) also eliminated the oxygen-based corrosion and humidity that cause component failures, resulting in an 8x lower failure rate than land-based centers.

The Impact:

Project Natick provides a crucial proof-of-concept for passive cooling innovation and Edge Computing. By using the natural environment for cooling, it dramatically reduces the PUE and water consumption tied to cooling towers, pushing the industry to consider geographical placement and non-mechanical cooling as core elements of sustainable design. The sealed environment also improves hardware longevity, reducing e-waste.


The Next Wave: Startups and Companies to Watch

The race for the “Green Chip” is heating up. Keep an eye on companies pioneering specialized silicon like Cerebras and Graphcore, whose large-scale architectures aim to minimize data movement—the most energy-intensive part of AI training. Startups like Submer and Iceotope are rapidly commercializing scalable liquid immersion cooling solutions, transforming the data center floor. On the algorithmic front, research labs are focusing on Spiking Neural Networks (SNNs) and neuromorphic chips (like those from Intel’s Loihi project), which mimic the brain’s energy efficiency by only firing when necessary. Furthermore, the development of carbon-aware scheduling tools by startups is beginning to allow cloud users to automatically shift compute workloads to times and locations where clean, renewable energy is most abundant, attacking the power consumption problem from the software layer and offering consumers a transparent, green choice.
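
To show what carbon-aware scheduling amounts to in practice, here is a toy Python sketch that picks the region and hour with the lowest forecast grid carbon intensity before launching a deferrable job. The region names and intensity figures are invented; real tools would pull forecasts from a grid-data API.

```python
# Toy carbon-aware scheduler for a deferrable workload (e.g., a training batch).
from datetime import datetime, timedelta, timezone

# Hypothetical hourly forecasts of grid carbon intensity (gCO2-eq per kWh).
FORECAST = {
    "region-west": [320, 280, 150, 90],  # solar ramps up later in the day
    "region-north": [60, 55, 70, 80],    # hydro-heavy grid, consistently clean
}

def greenest_slot(forecast):
    """Return (region, start_time, intensity) minimizing forecast carbon intensity."""
    now = datetime.now(timezone.utc).replace(minute=0, second=0, microsecond=0)
    return min(
        ((region, now + timedelta(hours=h), intensity)
         for region, series in forecast.items()
         for h, intensity in enumerate(series)),
        key=lambda slot: slot[2],
    )

region, start, intensity = greenest_slot(FORECAST)
print(f"run job in {region} at {start:%H:%M} UTC (~{intensity} gCO2/kWh)")
```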

The Sustainable Mandate

Sustainable AI is not an optional feature; it is a design constraint for all future human-centered innovation. The shift requires organizational courage to reject the incremental path. We must move funding away from simply purchasing more conventional hardware and towards investing in these strategic innovations: domain-specific silicon, quantum-inspired algorithms, liquid cooling, and security protocols designed for minimum power draw. The true power of AI will only be realized when its environmental footprint shrinks, making it globally scalable, ethically sound, and economically viable for generations to come. Human-centered innovation demands a planet-centered infrastructure.

Disclaimer: This article speculates on the potential future applications of cutting-edge scientific research. While based on current scientific understanding, the practical realization of these concepts may vary in timeline and feasibility and are subject to ongoing research and development.

Image credit: Google Gemini

The Need for Organizational Learning

GUEST POST from Mike Shipulski

The people within companies have development plans so they can learn new things and become more effective. There are two types of development plans – one that builds on strengths and another that shores up shortcomings. And for both types, the most important step is to acknowledge it’s important to improve. Before a plan can be created to improve on a strength, there must be recognition that something good can come from the improvement. And before there can be a plan to improve on a shortcoming, there must be recognition that there’s something missing and it needs to be improved.

And thanks to Human Resources, the whole process is ritualized. The sequence is defined, the timing is defined and the tools are defined. Everyone knows when it will happen, how it will happen and, most importantly, that it will happen. In that way, everyone knows it’s important to learn new skills for the betterment of all.

Organizational learning is altogether different and more difficult. With personal learning, it’s clear who must do the learning (the person). But with organizational learning, it’s unclear who must learn, because the organization, as a whole, must learn. And we can’t really see the need for organizational learning because we get trapped in trying to fix the symptoms. Team A has a problem, so let’s fix Team A. Or, Team B has a problem, so let’s fix Team B. But those are symptoms. Real organizational learning comes when we recognize problematic themes shared by all the teams. Real organizational learning comes when we realize these problems don’t result from doing things wrong; rather, they are a natural byproduct of how the company goes about its work.

The difficulty with organizational learning is not fixing the thematic problems. The difficulty is recognizing the thematic problems. When all the processes are followed and all the best practices are used, yet the same problematic symptoms arise, the problem is inherent in the foundational processes and practices. Yet these are the processes and practices responsible for past success. It’s difficult for company leaders to recognize and declare that the things that made the company successful are now the things that are holding the company back. But that’s the organizational learning that must happen.

What worked last time will work next time, as long as the competitive landscape remains constant. But when the landscape changes, what worked last time doesn’t work anymore. And this, I think, is how recipes responsible for past success can, over time, begin to show cracks and create these systematic problems that are so difficult to see.

The best way I know to recognize the need for organizational learning is to recognize changes in the competitive landscape. Once these changes are recognized, thought experiments can be run to evaluate potential impacts on how the company does business. Now that the landscape changed like this, it could stress our business model like that. Now that our competitors provide new services like this, it could create a gap in our capabilities like that.

Organizational learning occurs when the right leaders feel the problems. Fight the urge to fix the problems. Instead, create the causes and conditions for the right leaders to recognize they have a real problem on their hands.

Image credit: 1 of 950+ FREE quote slides available at http://misterinnovation.com

The Secret to Endless Customers

GUEST POST from Shep Hyken

Marcus Sheridan owns a pool and spa manufacturing company in Virginia — not a very sexy business, unless you consider the final product, which is often surrounded by beautiful people. What he did to stand out in a marketplace filled with competition is a masterclass in how to get noticed and, more importantly, get business. His most recent book, Endless Customers, is a follow-up to his bestselling book They Ask, You Answer, with updated information and new ideas that will help you build a business that has, as the title implies, endless customers.

Sheridan’s journey began in 2001 when he started a pool company with two friends. When the 2008 market collapse hit, they were on the verge of losing everything. This crisis forced them to think differently about how to reach customers. Sheridan realized that potential buyers were searching for answers to their questions, so he decided his company would become “the Wikipedia of fiberglass swimming pools.”

By brainstorming every question he’d ever received as a pool salesperson and addressing them through content online, his company’s website became the most trafficked swimming pool website in the world within just a couple of years. This approach transformed his business and became the foundation for his business philosophy.

In our interview on Amazing Business Radio, Sheridan shared what he believes is the most important strategy that businesses can use to get and keep customers, and that is to become a known and trusted brand. They must immerse themselves in what he calls the Four Pillars of a Known and Trusted Brand.

  1. Say What Others Aren’t Willing to Say: The No. 1 reason people leave websites is that they can’t find what they’re looking for — and the top information they seek is pricing. Sheridan emphasizes that businesses should openly discuss costs and pricing on their websites. While you don’t need to list exact prices, you should educate consumers about what drives costs up or down in your industry. Sheridan suggests creating a comprehensive pricing page that teaches potential customers how to buy in your industry. According to him, 90% of industries still avoid this conversation, even though it’s what customers want most.
  2. Show What Others Aren’t Willing to Show: When Sheridan’s company was manufacturing fiberglass swimming pools, it became the first to show its entire manufacturing process from start to finish through a series of videos. They were so complete that someone could literally learn how to start their own manufacturing company by watching these videos. Sheridan recognized that sharing the “secret sauce” was a level of transparency that built trust, helping to make his company the obvious choice for many customers.
  3. Sell in Ways Others Aren’t Willing to Sell: According to Sheridan, 75% of today’s buyers prefer a “seller-free sales experience.” He says, “That doesn’t mean we hate salespeople. We just don’t want to talk to them until we’re very, very ready.” Sheridan suggests meeting customers where they are by offering self-service options on your website. For his pool and spa business, that included a price estimator solution that helped potential customers determine how much they could afford — without the pressure of talking to a salesperson.
  4. Be More Human than Others Are Willing to Be: In a world that is becoming dominated by AI and technology, showing the human side of a business is critical to a trusting business relationship. Sheridan suggests putting leaders and employees on camera. They are truly the “face of the brand.” It’s okay to use AI; just find the balance that helps you stay human in a technology-dominated world.

As we wrapped up the interview, I asked Sheridan to share his most powerful idea, and the answer goes back to a word he used several times throughout the interview: Trust. “In a time of change, we need, as businesses, constants that won’t change,” Sheridan explained. “One thing I can assure you is that in 10 years, you’re going to be in a battle for trust. It’s the one thing that binds all of us. It’s the great currency that is not going to go away. So, become that voice of trust. If you do, your organization is going to be built to last.”

And that, according to Sheridan, is how you create “endless customers.”

Image Credits: Shep Hyken

This article originally appeared on Forbes.com

How Incumbents Can React to Disruption

GUEST POST from Geoffrey A. Moore

Think back a couple of years and imagine …

You are Jim Farley at Ford, with Tesla banging at the door. You are Bob Iger at Disney with Netflix pounding on the gates. You are Pat Gelsinger at Intel with Nvidia invading your turf. You are virtually every CEO in retail with Amazon Prime wreaking havoc on your customer base. So, what are you supposed to do now?

The answer I give in Zone to Win is that you have to activate the Transformation Zone. This is true, but it is a bit like saying you have to climb a mountain. It raises the question: How?

There are five key questions executives facing potential disruption must ask:

1. When?

If you go too soon, your investors will lose patience with you and desert the ship. If you go too late, your customers will realize you’re never really going to get there, so they too, reluctantly, will depart. Basically, everybody gets that a transformation takes more than one year, and no one will give you three, so by default, when the window of opportunity to catch the next wave looks like it will close within the next two years, that’s when you want to pull the ripcord.

2. What does transformation really mean?

It means you are going to break your established financial performance covenants with your investors and drastically reduce your normal investment in your established product lines in order to throw your full weight behind launching yourself into the emerging fray. The biggest mistake executives can make at this point is to play down the severity of these actions. Believe me, they are going to show, if not this quarter, then soon, and when they do, if you have not prepared the way, your entire ecosystem of investors, partners, customers, and employees is going to feel betrayed.

3. What can you say to mitigate the consequences?

Simply put, tell the truth. The category is being disrupted. If we are to serve our customers, we need to transition our business to the new technology. This is our number one priority, we have clear milestones to measure our progress, and we plan to share this information in our earnings calls. In the meantime, we continue to support our core business and to work with our customers and partners to address their current needs as well as their future roadmaps.

4. What is the immediate goal?

The immediate goal is to neutralize the threat by getting “good enough, fast enough.” It is not to leapfrog the disruptor. It is not to break any new ground. Rather, it is simply to get included in the category as a fast follower, and by so doing to secure the continuing support of the customer base and partner ecosystem. The good news here is that customers and partners do not want to switch vendors if they can avoid it. If you show you are making decent progress against your stated milestones, most will give you the benefit of the doubt. Once you have gotten your next-generation offerings to a credible state, you can assess your opportunities to differentiate long-term—but not before.

5. In what ways do we act differently?

This is laid out in detail in the chapter on the Transformation Zone in Zone to Win. The main thing is that supporting the transformation effort is the number one priority for everyone in the enterprise every day until you have reached and passed the tipping point. Anyone who is resisting or retarding the effort needs to be counseled to change or asked to leave. That said, most people will still spend most of their time doing what they were doing before. It is just that if anyone on the transformation initiative asks anyone else for help, the person asked should do everything they can to provide that help ASAP. Executive staff meetings make the transformation initiative the number one agenda item for the duration of the initiative, the goal at each session being to assess current progress, remove any roadblocks, and do whatever is possible to further accelerate the effort.

Conclusion

The net of all of the above is that transformation is a bit like major surgery. There is a known playbook, and if you follow it, there is every reason to expect a successful outcome. But woe to anyone who gets distracted along the way or who gives up in discouragement halfway through. There is no halfway house with transformations—you’re either a caterpillar or a butterfly, there’s nothing salvageable in between.

That’s what I think. What do you think?

Image Credit: Slashgear.com
