About
Andrew Chakhoyan
Andrew Chakhoyan is an alumnus of the World Economic Forum's global leadership program and is widely recognized as a thought leader on topics such as exponential technologies, societal trust in business, and public policy in tech.
Andrew holds a Master of Public Administration degree from Harvard and is a regular contributor to several publications: WEF Agenda, Futurism, and NewCo Shift.
He mentors a number of startups and coaches senior executives on topics ranging from corporate diplomacy to global growth strategies. Andrew has developed a proprietary methodology for conducting board-level, multi-day strategy sessions. He strongly believes that a company’s strategic narrative must drive decision-making, not the other way around.
A deft and passionate keynote speaker, Andrew is also a skillful moderator who has hosted panel conversations around the world, from Davos to Dalian and from Washington to Oslo. Before launching his speaking career, Andrew was sought after as a panel discussant at the world’s leading conferences, including those hosted by The Economist, the OECD, and TNW.
Andrew Chakhoyan is a trisector leader who has served in the U.S. Government working on economic diplomacy, pushed for stronger global cooperation in the nonprofit sector, and gained experience in the corporate world. His first jobs out of college were as a barista at Starbucks and a part-time math professor at a community college.
1. Where did all the managers go?
“The key to management is to get rid of the managers,” advised Ricardo Semler, whose TED Talk went viral, introducing terms such as “industrial democracy” and “corporate re-engineering”. It’s important to point out that Mr. Semler isn’t an academic or an expert in management theory; he is the CEO of a successful industrial company. His views are unlikely to represent mainstream thinking on organizational design. But perhaps it is time we redefine the term “manager” and question whether the idea of “management”, as inherited from the industrial era, has outlived its usefulness.
2. How would you spell future in emoji?
Anyone who’s ever looked into retirement planning must have pondered two basic questions: at what age does one want to retire, and how does one estimate one’s life expectancy? A few decades ago, there was nothing daunting about those questions. There’d be no reason to doubt the life expectancy statistics or to consider possible variations in the retirement age, which was all but assured at somewhere around 60 to 65.
Not so today. “Average life expectancy heading for 100,” read a sensationalist newspaper headline from 2015. But how sensational was it really, when Scientific American declared a few months ago that “aging is reversible,” citing successful lab experiments on mice?
Jack Ma, the founder of Alibaba Group, wondered if, a couple of centuries from now, we might need to legislate caps on lifespan – effectively predicting that future scientific discoveries and advances in technology will allow aging to be deferred indefinitely.
Most of us are probably skeptical about the prospect of human immortality, yet others may posit that, at the current pace of progress, 200 years is much too long an estimate for humanity to reach this milestone. They will argue that beyond indefinite lifespans, we’ll achieve breakthroughs we can’t even imagine today – be it the colonization of space or self-aware artificial intelligence.
On the other hand, climate scientists warn of the devastating effect global warming might have on human civilization if we are unable or unwilling to reduce greenhouse emissions in time. Potentially catastrophic risks abound and could, quite possibly, thwart the positive development trajectory.
3. The digital commons: the future of business and the business of the future
Have you ever wondered how much public trust in the tech sector is worth? Now we know. It is at least half a trillion dollars – the losses racked up in the two weeks following the first revelations of the Cambridge Analytica scandal. With so much value at stake, this is a moment of reckoning for the giants of Silicon Valley. It is also our chance to reimagine how the digital commons are organized and administered.
4. We’re losing trust in business. How can we get it back?
Business is “on the brink of distrust”, declared the Edelman Trust Barometer earlier this year. It is both telling and alarming that trust in all four institutions – government, media, business and NGOs – deteriorated from 2016 to 2017. This trend reversal is significant given that we had witnessed a major rebound of trust following the financial crisis of 2008 and seen the index climb to post-recession highs – until now.
It is clear that the expectations of business are changing as rapidly as the world around us. “Corporations must find a way to lead” was the consensus view that emerged from Edelman’s poll. Seventy-five percent of respondents agreed that “a company can take specific actions that both increase profit and improve the economic and social conditions in the community where it operates.”
A contemporary CEO cannot afford to ignore this sentiment. The epoch of corporate social responsibility (CSR) as a cost of doing business has passed; the era of “doing well by doing good” is upon us. Balancing the profit motive with the creation of societal value is about to become a precondition for the long-term success of any corporation, regardless of sector, scale or geographic reach.
5. Artificial Intelligence: The Good, The Bad, and The Unfathomable
If the future is no longer predictable, is it still imaginable?
No stranger to controversy, Elon Musk, a Tony Stark reincarnate, came out with an ominous prediction recently. “Forget North Korea, AI will start World War III,” read the CNN headline. Elon Musk is not alone in fearing the unintended consequences of the race to develop algorithms that we may or may not be able to control. Once a new technology is introduced, it can’t be uninvented, Sam Harris points out in his viral TED Talk. He argues that it’ll be impossible to halt the pace of progress, even if humankind could collectively make such a decision.
While Bill Gates, Stephen Hawking and countless others are broadly on the same page with Musk and Harris, some of the leading thinkers recognize that AI, like any other technology, is value-neutral. Gunpowder, after all, was first used in fireworks.
Ray Kurzweil argues that “AI will be the pivotal technology in achieving [human] progress. We have a moral imperative to realize this promise while controlling the peril.” And, in his view, humanity has ample time to develop ethical guidelines and regulatory standards.
6. What if everyone took a sabbatical?
The mythology of the startup world is full of epic stories: stories of struggle, serendipity, failure, perseverance, and journeys through dire straits to wealth and glory. Everyone’s favorite is, of course, the Gates-Zuckerberg fable: you get into a prestigious program just so you can drop out, launch a startup from a dorm room, and then… the tech giants of the century are born.
A less well-known story is that of Marc Benioff, who showed exceptional abilities from a young age, joining Oracle at 23 as a “Rookie of the Year” and landing a VP role just three years later. But then, following a spectacularly successful tenure of another decade at the database giant, he decided to take a sabbatical.
Today, Benioff is worth over $4.4 billion. But he didn’t earn it at Oracle. Returning from his break, he founded a cloud computing company, Salesforce, appeared on the cover of Forbes, and is now considered one of Silicon Valley’s top visionaries. So it is safe to say that his decision to take time to reflect on what he really wanted to do with his life proved momentous.