A Fruit Gone Sour: The Demise of RIM and BlackBerry

Hey, remember BlackBerry?

In this day and age, it’s basically the smartphone equivalent of asking about digital watches or portable CD players. So it may be hard to remember that less than a decade ago, BlackBerry phones were at the technological forefront, a staple of the busy, the important, and the with-it. People joked about their BlackBerry addictions to the point where “CrackBerry” was Webster’s New World Dictionary’s 2006 Word of the Year. In 2009, Fortune magazine named RIM, the maker of the BlackBerry, the fastest-growing company in the world.

Today, you may still know a BlackBerry user, but it’s probably that eccentric friend who won’t throw away their video cassettes in case the VCR makes a comeback.

Have you ever wondered what happened?

Probably not. But hey, now that I brought it up, aren’t you curious?

RIM’s 1999 BlackBerry was revolutionary. In a time when cellphones weren’t good for much beyond making calls, here was a palm-sized PDA that could send and receive e-mails from anywhere. The network was secure, the battery lasted forever, and the little QWERTY keyboard meant you could tap out a message with nearly the efficiency of typing on a computer.

For a while, everything was going right for RIM. What happened? In a word, people.

Co-CEOs Mike Lazaridis and Jim Balsillie built a tech giant, but sadly they suffered from what is sometimes called “founder’s syndrome.” Having scaled their way to the peak of the mountain, they failed to remember that landscapes change—especially in the fast-changing world of handheld electronics, where people replace their phones every two years on average.

On one hand, with hindsight on our side, it’s easy to condemn business leaders for failing to divine the future. On the other hand, RIM’s success caused Lazaridis and Balsillie to double down and stick their heads so far in the sand that their comments now make for surreal reading.

When PDAs in Asia began to offer color screens, Lazaridis insisted it was an impractical fad. “Do I need to read my e-mail in color?” he’s reported to have said.

“Cameraphones will be rejected by corporate users,” he stated in 2003.

In 2007, when Apple introduced a little gadget they were working on called an iPhone, Balsillie dismissed it as, “kind of one more entrant into an already very busy space with lots of choice for consumers … But in terms of a sort of a sea-change for BlackBerry, I would think that’s overstating it.”

Maybe in another company, someone might have stepped forward and delivered a wakeup call. But Lazaridis was notorious for only hiring people who thought like him. Lazaridis and Balsillie continued to insist their practical, workmanlike product had an impossible-to-beat foothold among businesspeople. How could a phone that wasted battery life on shiny new features elbow in on their territory? Who would tolerate the less user-friendly touchscreen keyboard of an iPhone?

“The most exciting mobile trend is full Qwerty keyboards,” Lazaridis said in 2008, of a feature they’d been offering for literally nine years. “I’m sorry, it really is. I’m not making this up.”

The public disagreed. The public disagreed pretty hard, as it turned out. That oh-so exciting keyboard feature became a shackle, cutting possible screen space in half and severely limiting what else the BlackBerry could offer. As more and more average consumers were enticed into the world of smartphones by the bells and whistles of the new generation, it altered the very definition of what a phone was supposed to be.

By the time even Lazaridis and Balsillie could no longer deny that change was needed, it was too late: they’d lost their edge, their voice of authority. When they finally started to offer their own touchscreens, it came with a feeling of sweaty desperation—and amazingly, their attempt at an iPad competitor didn’t even offer e-mail.

At its peak, BlackBerry stock was worth $230 a share. These days, it hovers around $10. You would probably be better off using that money to buy actual blackberries, which are delicious, full of antioxidants, and much less vulnerable to corporate hubris.

What Your Grandma and Corporations Have in Common

Imagine your grandma just celebrated her 85th birthday. She’s beginning to forget things, but her doctor has reassured you that it’s probably not Alzheimer’s: given enough time, she can still eventually pull the information through.

You take some comfort in a recent study that suggests memory retrieval problems might not necessarily be the deterioration of synaptic connections, but more of a space issue.

Think of Grandma’s hippocampus as a library that, over time, has run out of shelving for its books. As they begin to pile up on the floor, the librarian can still find that edition of Twain’s Huckleberry Finn you’re after, but it takes a little more time to scour all the nooks and crannies of the library to locate it. The same might be true for the hippocampus, the memory library.

In any event, modern medicine has been good to Grandma. The old family general practitioner has been replaced by a whole bevy of doctors who specialize in any number of medical fields. She’s got her heart specialist, her eye, ear, nose and throat doctor, her podiatrist, her diabetes doctor, her osteopath, and so on.

As a result, she finds herself traveling a regular circuit of doctors, each dedicated to improving the quality of her life and each taking advantage of the latest discoveries in pharmaceutical science.

Pharmaceutical science, like all science, operates on the principle of reductionism—in essence, that the key to solving problems is to break them down into their smallest components and observe cause and effect. Molecular biology, and thus virtually every modern drug, is the result of this process. This systematic approach has literally built the technological world of modern humans.

There is one key problem with this approach. When you begin to examine complex systems like the human body, the reductionist technique begins to falter. Humans are composed of a myriad of structures that interact with and depend on each other. The tangle of where one system begins and another ends is difficult to understand, let alone observe.

For this reason, it makes more sense to understand a human being not as a series of mini structures or systems, but as one giant complex system. We need to think holistically.

When the heart doctor prescribes a heart medication, unless he knows what all the other specialty doctors have prescribed Grandma, and further understands the dynamic effects that might be created through the intermingling of medications, he might be setting her up for catastrophe and a trip to the ER—all despite his best intentions.

This is the peril of not recognizing that in a complex system, cause and effect relationships with other parts of the system can be significantly delayed and mask the dangers of your actions. Furthermore, the fact that internal organs are connected means the medication Grandma’s taken doesn’t necessarily move through her system in an isolated or linear fashion. Her heart medication might affect her heart, other medications, and/or other organs in unpredictable ways.

The effects of a medication can travel through the body like a metastasizing cancer, moving out in all directions simultaneously. The net result shows up as a cascading series of outcomes, leaving the simple, reductionist-driven ER doc in its wake.

Like Grandma’s body, today’s corporations have their own dizzying structure of interdependencies. Departments abound—sales, marketing, manufacturing, logistics, human resources, IT, and a slew of others—with more bound to come on the heels of new technological developments.

In many corporations, the depth of departmental interconnectivity and dependency is not completely recognized or understood, just like Grandma’s specialists don’t always understand the compounding effect of their actions in relationship to the body as a whole.

The nonlinear aspects of complex systems and delayed cause-and-effect loops can doom a company in the same way Grandma’s new heart medication may negatively impact her other medications. The end result can put Grandma in the ER, and a corporation on its back.


Stay tuned for Part 2. Next week we’ll learn the tricks to keeping your Grandma—and your favorite corporation—alive.

The Lowdown on Luck

“Good luck!”

It’s a common expression in our lexicon. Obviously, on a gut level we have some sense of the importance of luck—that is, until things go our way. Suddenly, we relegate luck to the cheap seats as we bask in the spotlight, prepared to take full credit for our superior decision-making.

Take the Quaker Oats Corporation, for example.

In 1983, the CEO of Quaker Oats, William Smithburg, sampled some of an up-and-coming sports drink and decided to acquire the company. That beverage was Gatorade, and it was the beginning of a goldmine for Quaker Oats.

So when Smithburg decided to buy Snapple in 1994 for $1.8 billion, he was unchallenged from within his organization. This was the Gatorade guy, after all; surely he knew what he was talking about. Media pundits disagreed, lambasting the decision before the deal was even struck.

Did Smithburg’s superior decision-making prove them wrong? Well, no.

Fast forward three years, and Quaker Oats was desperately unloading Snapple at a loss of $1.5 billion. To this day, it is widely seen as one of the worst decisions in business history—quite a legacy.

So how could Smithburg screw up so badly? How should we understand what happened at Quaker Oats, polar opposite results from the same CEO?

Nobel Laureate Daniel Kahneman has the answer. And for that answer, we must turn to the Israeli Air Force.

For a long time, the Israeli Air Force trained its pilots with the assumption that negative feedback trumped positive. After all, when a trainee pilot was punished for a botched maneuver, the next attempt tended to go better. When that same trainee executed a maneuver perfectly and received praise, their next attempt was generally not as good.

Kahneman was the first to realize that this wasn’t a case of the stick working better than the carrot. It was simple statistics at play.

We all love those magic moments where we outshine our normal capabilities. But there’s a reason your average is your average. Chances are that a better-than-usual outcome will be followed by something more lackluster. The opposite is true as well; if you find yourself performing much worse than usual, the odds favor an eventual upswing.

It’s an old concept in statistics. In the 19th century, Sir Francis Galton found that the children of unusually tall people tended to be a little shorter than their parents, and unusually short parents tended to have children taller than themselves. He referred to this phenomenon as “regression to the mean.”
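The effect is easy to reproduce yourself. Here’s a minimal Python sketch (all numbers invented for illustration) that models each trainee’s performance as fixed skill plus random luck. The trainees who score best on one attempt tend to score worse on the next—no stick or carrot required:

```python
import random

random.seed(42)

# Performance = fixed skill + random luck on each attempt.
N = 10_000
skills = [random.gauss(0, 1) for _ in range(N)]
trial1 = [s + random.gauss(0, 1) for s in skills]
trial2 = [s + random.gauss(0, 1) for s in skills]

# Pick the top 10% of performers on the first attempt...
cutoff = sorted(trial1, reverse=True)[N // 10]
top = [i for i in range(N) if trial1[i] > cutoff]

# ...and compare their average score on each attempt.
mean1 = sum(trial1[i] for i in top) / len(top)
mean2 = sum(trial2[i] for i in top) / len(top)

print(f"top group, attempt 1: {mean1:.2f}")
print(f"same group, attempt 2: {mean2:.2f}")  # noticeably lower
```

The standouts were partly lucky, and the luck doesn’t repeat—their second attempt regresses toward their true skill whether anyone praises or punishes them.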

Kahneman takes this concept beyond height and into the messier real world.

No matter how well you prepare, most enterprises involve a degree of chance. A brilliant business idea may still fail in a lousy economy. An untalented singer might still net a record deal by happening to charm the right person at the right time.

In his book Thinking, Fast and Slow, Kahneman repeatedly demonstrates that much of our lives are shaped by random events beyond our understanding or control.

Was William Smithburg’s gut decision to buy Gatorade a stroke of leadership genius? Or did he happen to taste the right stuff at the right time?

Kahneman was once asked about his definition of success. He famously replied, “Success = talent + luck, and great success = a little more talent + a lot of luck.”

In other words, luck is not some bit player. It’s an integral part of the human experience, whether we choose to acknowledge it or not. Kahneman showed that luck, good or bad, pretty much guarantees regression to the mean is always waiting to take center stage.

Transference Bias: A Tale of Bloody Wars, Baby Kings, and Bad Bosses

There was a time, and it was not so long ago, that conventional wisdom said if you were born into nobility, you possessed a set of superior traits that automatically qualified you for governance. Got royal parents? Congratulations, you’ve won the leadership lottery.

There was just one problem: the system often produced people uniquely unqualified to rule.

Consider Charles II of Spain. He came from a line of the Spanish Hapsburgs so intermarried that one ancestor appears on his family tree in 14 separate places. Charles took the throne in 1665 despite a host of physical and mental disabilities—he couldn’t chew, drooled frequently, was never really educated, and at one point it’s rumored he ordered his deceased family members dug up so he could look at them.

Consider King George IV of England, famous for his extravagant spending, love of leisure—and utterly selfish, irresponsible behavior.

Consider the many kings and queens who were handed the reins to their country before they were old enough to put their own pants on.

Now consider: many of these people held the fate of nations in their hands.

A history pockmarked with unnecessary wars, massive public debts, and plain incompetence proves it out: leadership is not an inherited trait. Wisdom and judgment are gained through experience, not via bloodline.

These days, most surviving monarchs are more figurehead than supreme ruler. After all, the industrial revolution has ushered in modern times and modern thinking. Or has it?

Anyone working in business might guess where this is headed. Transference bias at its core presupposes that knowledge is not a requirement for climbing the ranks of leadership. (“If Jones displayed a hardworking can-do attitude over in sales, by golly he can certainly run the finance department!”)

Today, “character, positivity, and fortitude” are the new blue blood in business.

Not that these traits aren’t good things for a leader to have. Most certainly they are. But when it comes time to make hard choices, the sunniest attitude in the world is no substitute for expertise. It’s the same way that Count Chocula’s noble birth doesn’t guarantee him wisdom in the deployment of his infantry.

Unfortunately, transference bias never died, merely dressed itself in new clothes. And like the old kingdoms at war, there is much collateral damage.

So the next time your new boss shows up wet behind the ears, fresh from some other unrelated department, remember it could be worse. Your cubicle could be a castle wall, facing a catapult attack of dead rotting cows. And if there’s one thing we can all agree on, it’s that nothing is worse than dead rotting cows. Except for maybe the new minty-fresh boss you’re about to train…

Einstein, Allie Brosh, and the Secret to Procrastinating With Style

When you contemplate your life, wondering what it means to be alive, it’s unlikely the first thing that comes to mind is ‘office work.’

And yet arguably the life you lead at your desk occupies a great deal of mental real estate. The sheer number of hours typically spent at work guarantees that the office and all it entails is fundamental to understanding and explaining the big picture of your life.

Work may or may not bring out the best in us, depending on our tasks and whether we are able to get into flow as defined by Mihaly Csikszentmihalyi. But observation suggests there is one constant in human behavior you can expect to see wherever you find a shantytown of office cubicles.

The term was coined by Allie Brosh, of Hyperbole and a Half fame. In a recent interview with Terry Gross, Brosh explained how she started her now-famous internet comic when she was supposed to be studying for finals: “I’m laterally productive. I will do productive things, but never the thing that I’m supposed to be doing.”

The elegance of lateral productivity is it allows you to put off tasks indefinitely without guilt. After all, you aren’t loafing around accomplishing nothing. You’re working hard! You don’t have time to file those reports.

It’s an old law of physics writ small: two objects can’t occupy the same space at the same time. In this case, two tasks can’t occupy the same brain space, since the human brain is notoriously poor at multitasking.

Lateral productivity allows you to play the hero to your self-imposed task villain. You find yourself in a kind of self-perpetuating state of activity, trapped in some Escheresque landscape where you’re diligently drawing the next set of stairs right before you ascend or descend said stairway. Technically, you’re moving, but there’s no forward progress to be had.

This art form is practiced by workers and managers alike. Lateral productivity can go by many other aliases, including “special side project.” The naming lends a measure of credibility. Throw in some metrics, build some color graphs, wrap it up in a PowerPoint presentation, and you’ve got the makings of an entirely new task, which at some point lateral productivity will force you to abandon for something else.

Perhaps the best spokesman for lateral productivity was the great New York Yankees catcher Yogi Berra. Driving around one day, unsure of his location or the route to his destination, he reported, “We may be lost, but we’re making great time.”

The Final Word on Word-of-Mouth

Let’s start at the location where every single sale begins. I’m talking, of course, about a customer’s brain.

Inside each customer’s skull are enough neural pathways to go around the moon and then circle the earth six times. It is this web of myriad connections that will decide whether or not to make the purchase.

And that decision, the one that has enormous implications for you and me, our families, the economy, and virtually everyone else on the planet, begins its journey in the future.

Whenever someone decides to purchase a product, they begin the journey with a kind of thought experiment, imagining how their life will be with their new acquisition. This can take the form of a vague notion (‘Wouldn’t it be nice to have a new pair of running shoes?’) or it might be a little more concrete (‘I want Chuck Taylor All-Stars in bright orange with white laces, size 10 1/2’).

It is the job of a salesperson to usher these movies in our heads into reality, which often means helping to define that imagined experience.

Not so long ago, the first step to a sale might have begun with the yellow pages. Today, the Internet is where over half of new customers will plot the beginning and, in some cases, the end of the journey. This is why companies large and small devote major dollars to making their websites as vibrant as possible.

So does all of this mean traditional brick-and-mortar is going the way of the dinosaur? The demise of long-time retailers like Montgomery Ward, and more recent ones like Borders and Blockbuster, suggest that the Internet is definitely changing the purchasing landscape. In 2012, sales conducted online racked up over a trillion dollars worldwide.

In his book Contagious, Jonah Berger lays out the truth about the power of the internet as influencer. He points out the following:

1. Word of mouth is the primary driver of 20-50% of all sales in the USA.

2. Including blogs, emails, and all social media, how much of that word of mouth happens online? If you’re like most people, you’d guess around 50-60%—after all, it doesn’t take a mathematician to know that $1 trillion is a lot of money. However, according to the Keller Fay Group’s research, the real figure is (drum roll, please): 7%.

3. Not 70%. 7%.

How is that possible?

Even if you, like the average American, spend two hours of each day online, most of your life is still conducted in “unplugged” mode. Even factoring in sleep, you spend eight times more time dealing with people face-to-face, sharing your thoughts with the 150 people closest to you. (According to anthropologist Robin Dunbar, 150 is the maximum number of real relationships one can actually juggle.)

Your social group is powerful. You might have read incredible things about Nike on Consumer Reports’ website, but if your friend told you she just bought a pair of Nikes and had a miserable experience, all of the carefully compiled statistics go out the window. Your friend’s word of mouth trumps the feedback of thousands of strangers.

And even though the average tweet or Facebook post has the potential to hit 100 people, less than 10% of them actually get read.

Yes, I can get up in the middle of the night half-asleep, turn on my computer, log onto Zappos, buy a pair of running shoes, and stumble back to bed. And yes, that kind of single-item sale is susceptible to that sort of midnight enterprise.

Still, when it comes to making purchasing decisions, your inner circle of family and confederates hold tremendous sway over the shopping center in your brain.

Who Are You? The Science (or Lack Thereof) of Myers-Briggs

If you’ve been hired for a job in the last thirty years, chances are you’ve heard of Myers-Briggs. It’s a personality diagnostic tool used by everyone from self-searching college kids to Fortune 500 companies.

It’s understandable why employers embrace the Myers-Briggs Inventory. If there is a way to figure out ahead of time whether or not you’re going to ‘fit in,’ it could save the company money in the long run, and you might avoid working for a company you don’t like.

If the test can prove you’ve got the right stuff, maybe you’ll even skip a couple of rungs on your way up the management ladder. Perhaps someday you’ll be the one ordering the personality testing of the young upstarts seeking to unseat you from your hard-fought throne.

Obviously, when it comes to business, this is all a pretty big deal.

If you’re like me, you probably believed the Myers-Briggs was supported by some serious clinical evidence. After all, this is a common, widespread, accepted tool. People embrace their Myers-Briggs designation, labeling themselves ENFJ or ISTP with the same certainty as height or blood type.

Surely it’s all been proven out through a double-blind study, or perhaps several. We’re talking the kind of careful, thorough science necessary when people’s egos, and personal livelihoods, are balanced on a handful of test answers.

The problem is that science has the same relationship with Myers-Briggs that it had with alchemy back in the Dark Ages. It’s true that Myers-Briggs has turned into gold, but a different kind of gold to be sure.

Isabel Myers, daughter of Katharine Briggs, conjured up the personality test at her kitchen table in the forties, during World War Two. She had no formal training in psychology or testing. She based her system on her reading of Carl Jung, who had in turn been a close associate of the famed Sigmund Freud. Jung suggested in one of his writings, Psychological Types, that human behavior seemed to break down into categories.

Myers, believing that Jung was on to something, went on to build her personality test, placing people on four different continuums: Introvert/Extrovert, Thinking/Feeling, Intuitive/Sensing, and Judging/Perceiving. These variables allowed for 16 different combinations.
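The arithmetic behind those 16 combinations is simple: four independent either/or axes give 2 × 2 × 2 × 2 = 16 four-letter types. A quick Python sketch makes the combinatorics concrete:

```python
from itertools import product

# The four Myers-Briggs dichotomies, one letter per pole.
dichotomies = [("E", "I"), ("S", "N"), ("T", "F"), ("J", "P")]

# Every choice of one pole per axis yields a four-letter type.
types = ["".join(combo) for combo in product(*dichotomies)]

print(len(types))  # 16
print(types[:4])
```

Every familiar label, from ENFJ to ISTP, is just one path through those four forks.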

When she sent her personality inventory, or ‘Indicator’, to Jung, he was lukewarm, suggesting to Myers that an individual’s personality was far too complex to capture with a set of clever questions. Undeterred, Myers made the rounds of academia, hoping to drum up support for her newly minted system. She was repeatedly turned away due to an utter lack of scientific bona fides.

But in true entrepreneurial spirit, Myers soldiered on. She eventually found a buyer in Henry Chauncey, who had just started up a company called Educational Testing Service. You might know it better as the maker of the Scholastic Aptitude Test, or SAT.

Back in the late fifties, Chauncey decided that a personality test would be a nice addition to his fledgling college entry exam. According to Annie Murphy Paul in her book, The Cult of Personality, Myers was then able to leverage her Chauncey connection into some measure of respectability.

Unfortunately for Myers, one group never fully bought in: psychologists. They have their reasons, including studies where people’s answers on the Myers-Briggs can differ significantly depending on the time of day it’s administered. One study showed that up to 50% of the people who are given the test a second time end up with a different personality profile.

It’s also been suggested that because of the way the test was constructed (little emphasis on negative traits), most people tend to accept their results without examining them too closely.

So what’s the bottom line? Is Myers-Briggs a bunch of baloney? At least on an anecdotal level, there certainly appears to be some connection between people and distinct communication styles. Even Hippocrates observed that people seem to fall into four different types.

However, it seems like an overreach to suggest Myers-Briggs, or any test for that matter, could ever capture one’s personality as neatly as trapping a firefly in a glass jar. We’ve witnessed this oversimplification before, with the idea that a single number can represent the breadth and depth of one’s IQ.

Myers-Briggs, like the IQ test, does tell us something about ourselves, and it probably makes sense to consider it as one interesting set of data points on an incredibly complicated spectrum of human behavior. But in the end, putting too much emphasis on a personality test appears to be less about science and more about alchemy.

And you always wondered what went on in HR behind closed doors…

Cause and Correlation, or the Pirate Problem

[Graph: global average temperature plotted against the worldwide number of pirates—as pirates decline, temperatures rise]

As you can see from the above graph, global warming is pirate-based.  It’s something I think we all suspected, but were hesitant to advance until the facts could be summarized in a handy graphic.

There is something about information delivered via graph that instantly lends an air of unassailable authority. The person trapped in the cube next to you, or even the guy down at the gas station couldn’t possibly carry the credibility of a simple graph.

It is an axiom of business that any presenter worth his or her salt is going to fill their PowerPoint with charts and graphs—the more the better, and the more oblique and difficult to read, the better still. Data delivered with a graph says, “Here is the evidence, plain and simple. Let the ascending and descending lines tell you the story.”

The problem with the story, as with the graph above, is that we aren’t just suckered into believing correlation implies causality. We start thinking correlation is causality. Governments, businesses and individuals make this mistake on a daily basis. It’s impossible to calculate the frequency or the magnitude of the resulting financial loss, but it’s enormous.

We all know the crowing of the rooster doesn’t cause the sun to rise. But when rates of breast or prostate cancer are associated with soy milk, or some new drug, the association can frequently drive us toward definitive action, even though the connection between data points might in actuality be more rooster/sun than cause/effect.

The correlation/causality problem goes back a long way. The early human brain, confronted by the rustling of a bush, might naturally assume it was a tiger and not the wind. Erring on the side of safety could make the difference between life and death; assuming correlation was causality was a small price to pay. This evolution-based brain bias is still part of our biology today.

Here are four questions worth considering the next time you’re faced with the seductive whisperings of an X-axis.

1. Where do the represented data points come from?  Groups and individuals might be selectively mining the facts based on their own private agenda.

2. What do the data points represent?  Tiny samples can lead to causal conclusions that would be dismissed in a more robust survey population.

3. Was this a controlled study? A control group gives you some yardstick by which to judge the rest of the information.

4. Could other factors be in play?  This is probably the most abused problem with the correlation/causality mix-up. Maybe there is some relation between the X axis and the Y axis, but they could just as easily be responding to some other, third influence.

In the case of the rooster/sun problem, you’d want to consider both planetary revolution and the circadian rhythms of diurnal animals. (Additionally, if you’ve ever been on a farm, you’ll know that while roosters do crow at daybreak, those feathery little jerks will also sound their alarm in the middle of the night.)
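That fourth question—the hidden third influence—is easy to demonstrate with a toy simulation. In this hypothetical Python sketch (all numbers invented), daily temperature drives both ice cream sales and swimming accidents, producing a strong correlation between two variables that have no causal link to each other:

```python
import random

random.seed(0)

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# A hidden third factor (daily temperature) drives both variables.
temp = [random.gauss(20, 8) for _ in range(1000)]
ice_cream = [t * 2.0 + random.gauss(0, 5) for t in temp]   # sales rise with heat
drownings = [t * 0.5 + random.gauss(0, 3) for t in temp]   # so do swimming accidents

r = pearson(ice_cream, drownings)
print(f"r = {r:.2f}")  # strongly positive, yet neither causes the other
```

Banning ice cream would not make swimming any safer; the X and Y axes are both responding to the weather, just as the rooster and the sunrise are both responding to the turn of the planet.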

So the next time some newscaster announces that eating peanut butter has “been linked to” autism, think back to our little graph. And remember: despite the insistence of Pastafarians everywhere (a group inspired by a modern-day Russell’s teapot analogy), most meteorologists agree that pirates have next to no effect on the climate.

Fans of buccaneers, privateers, and skallywags can let out a “Yarr!” of relief.

The Top Six Errors in Unbalanced Brain Strategy

Recently, I talked about whole brain strategy.

This week, a look at what happens inside a workplace when a company or organization tries to implement a new policy without understanding how the human brain works. If you’ve ever witnessed a giant disconnect between the systems a company claims to use, and the way their employees actually operate (call it Ghost Ship Syndrome, if you will), unbalanced brain strategy may very well be to blame.

So without further ado, I give you: the top six errors in unbalanced brain strategy.