Time is a Flat Circle. Jamie Dimon’s Comments on AI Just Proved It

“Time is a flat circle.  Everything we have done or will do we will do over and over and over and over again – forever.”

– Rusty Cohle, played by Matthew McConaughey, in True Detective

For the whole of human existence, we have created new things with no idea if, when, or how they will affect humanity, society, or business.  New things can be a distraction, sucking up time and money and offering nothing in return.  Or they can be a bridge to a better future.

As a leader, it’s your job to figure out which things are a bridge (i.e., innovation) and which things suck (i.e., shiny objects).

Innovation is a flat circle

The concept of eternal recurrence, that time repeats itself in an infinite loop, was first taught by Pythagoras (of Pythagorean theorem fame) in the 6th century BC. It re-emerged (thereby proving its own truth) in Friedrich Nietzsche’s writings in the 19th century, then again in 2014’s first season of True Detective, and then again on Monday in Jamie Dimon’s Annual Letter to Shareholders.

Mr. Dimon, the CEO and Chairman of JPMorgan Chase & Co., first mentioned AI in his 2017 Letter to Shareholders. So it wasn’t the mention of AI that was newsworthy. It was how it was mentioned. Before addressing geopolitical risks, regulatory issues, or the recent acquisition of First Republic, Mr. Dimon spends nine paragraphs talking about AI, its impact on banking, and how JPMorgan Chase is responding.

Here’s a screenshot of the first two paragraphs:

TITLE: Update on specific issues facing our company

BODY TEXT: "Each year, I try to update you on some of the most important issues facing our company. First and foremost may well be the impact of artificial intelligence (AI).

While we do not know the full effect or the precise rate at which AI will change our business — or how it will affect society at large — we are completely convinced the consequences will be extraordinary and possibly as transformational as some of the major technological inventions of the past several hundred years: Think the printing press, the steam engine, electricity, computing and the Internet, among others."

He’s right. We don’t know “the full effect or the precise rate at which AI will change our business—or how it will affect society at large.” We were similarly clueless in 1436 (when the printing press was invented), 1712 (when the first commercially successful steam engine was invented), 1882 (when electricity was first commercially distributed), and 1993 (when the World Wide Web was released to the public).

Innovation, it seems, is also a flat circle.

Our response doesn’t have to be.

Historically, people have responded to innovation in one of two ways: panic because it’s a sign of the apocalypse, or rejoice because it will be our salvation. And those reactions aren’t confined to “transformational” innovations. In 2015, a visiting professor at King’s College London declared that the humble eraser (invented in 1770) was “an instrument of the devil” because it creates “a culture of shame about error. It’s a way of lying to the world, which says, ‘I didn’t make a mistake. I got it right the first time.’”

Neither reaction is true. Fortunately, as time passes, more people recognize that the truth is somewhere between the apocalypse and salvation and that we can influence what that “between” place is through intentional experimentation and learning.

JPMorgan started experimenting with AI over a decade ago, well before most of its competitors.  As a result, they “now have over 400 use cases in production in areas such as marketing, fraud, and risk” that are producing quantifiable financial value for the company. 

It’s not just JPMorgan.  Organizations as varied as John Deere, BMW, Amazon, the US Department of Energy, Vanguard, and Johns Hopkins Hospital have been experimenting with AI for years, trying to understand if and how it could improve their operations and enable them to serve customers better.  Some experiments worked.  Some didn’t.  But every company brave enough to try learned something and, as a result, got smarter and more confident about “the full effect or the precise rate at which AI will change our business.”

You have free will.  Use it to learn.

Cynics believe that time is a flat circle.  Leaders believe it is an ever-ascending spiral, one in which we can learn, evolve, and influence what’s next.  They also have the courage to act on (and invest in) that belief.

What do you believe?  More importantly, what are you doing about it?

Why Your AI Strategy has Nothing to do with AI

You’ve heard the adage that “culture eats strategy for breakfast.”  Well, AI is the fruit bowl on the side of your Denny’s Grand Slam Strategy, and culture is eating that, too.

1 tool + 2 companies = 2 strategies

On an Innovation Leader call about AI, two people from two different companies shared stories about what happened when an AI notetaking tool unexpectedly joined a call and started taking notes.  In both stories, everyone on the calls was surprised, uncomfortable, and a little bit angry that even some of the conversation was recorded and transcribed (understandable because both calls were about highly sensitive topics). 

The storyteller from Company A shared that the senior executive on the call was so irate that, after the call, he contacted people in Legal, IT, and Risk Management.  By the end of the day, all AI tools were shut down, and an extensive “ask permission or face termination” policy was issued.

Company B’s story ended differently.  Everyone on the call, including senior executives and government officials, was surprised, but instead of demanding that the tool be turned off, they asked why it was necessary. After a quick discussion about whether the tool was necessary, when it would be used, and how to ensure the accuracy of the transcript, everyone agreed to keep the note-taker running.  After the call, the senior executive asked everyone using an AI note-taker on a call to ask attendees’ permission before turning it on.

Why such a difference between the approaches of two companies of relatively the same size, operating in the same industry, using the same type of tool in a similar situation?

1 tool + 2 CULTURES = 2 strategies

Neither storyteller dove into details or described their companies’ cultures, but from other comments and details, I’m comfortable saying that the culture at Company A is quite different from the one at Company B. It is this difference, more than anything else, that drove Company A’s draconian response compared to Company B’s more forgiving and guiding one.  

This is both good and bad news for you as an innovation leader.

It’s good news because it means that you don’t have to pour hours, days, or even weeks of your life into finding, testing, and evaluating an ever-growing universe of AI tools to feel confident that you found the right one. 

It’s bad news because even if you do develop the perfect AI strategy, it won’t matter if you’re in a culture that isn’t open to exploration, learning, and even a tiny amount of risk-taking.

Curious whether you’re facing more good news than bad news?  Start here.

8 cultures = 8+ strategies

In 2018, Boris Groysberg, a professor at Harvard Business School, and his colleagues published “The Leader’s Guide to Corporate Culture,” a meta-study of “more than 100 of the most commonly used social and behavior models” that “identified eight styles that distinguish a culture and can be measured.” I’m a big fan of the model, having used it with clients and taught it to hundreds of executives, and I see it actively defining and driving companies’ AI strategies*.

Results (89% of companies): Achievement and winning

  • AI strategy: Be first and be right. Experimentation is happening on an individual or team level in an effort to gain an advantage over competitors and peers.

Caring (63%): Relationships and mutual trust

  • AI strategy: A slow, cautious, and collaborative approach to exploring and testing AI so as to avoid ruffling feathers.

Order (15%): Respect, structure, and shared norms

  • AI strategy: Given the “ask permission, not forgiveness” nature of the culture, AI exploration and strategy are centralized in a single function, and everyone waits on the verdict.

Purpose (9%): Idealism and altruism

  • AI strategy: Torn between the undeniable productivity benefits AI offers and the myriad ethical and sustainability issues involved, strategies are more about monitoring than acting.

Safety (8%): Planning, caution, and preparedness

  • AI strategy: Like Order, this culture takes a centralized approach. Unlike Order, it hopes that if it closes its eyes, all of this will just go away.

Learning (7%): Exploration, expansiveness, creativity

  • AI strategy: Slightly more deliberate and guided than Purpose cultures, this culture encourages thoughtful and intentional experimentation to inform its overall strategy.

Authority (4%): Strength, decisiveness, and boldness

  • AI strategy: If the AI strategies from Results and Order had a baby, it would be Authority’s AI strategy: centralized control with a single-minded mission to win quickly.

Enjoyment (2%): Fun and excitement

  • AI strategy: It’s a glorious free-for-all with everyone doing what they want.  Strategies and guidelines will be set if and when needed.

What do you think?

Based on the story above, what culture best describes Company A?  Company B?

What culture best describes your team or company?  What about your AI strategy?

*Disclaimer: Culture is an “elusive lever” because it is based on assumptions, mindsets, social patterns, and unconscious actions. As a result, the eight cultures aren’t MECE (mutually exclusive, collectively exhaustive), and multiple cultures often exist within a single team, function, or company. Bottom line: the eight cultures are a tool, not a law (and I glossed over a lot of the report).