Management gems
Here are some gems from our monitoring of the best publications on leadership and management

How can we explain the decisions taken by artificial intelligence?
Decisions that rely on artificial intelligence (AI) have a distinctive feature: we cannot trace the logical sequence that led to the recommended solution. Added to this is the use of statistical learning methods: “deep learning” rests on correlations between millions, or even billions, of parameters, which cannot be translated into explicit causal links.
And yet, in order to grant our trust, we require explanations. Isabelle Bloch, a professor at Sorbonne University, stresses the essential role of human beings in this regard. The task is to identify, case by case, how to judiciously compensate for the algorithm’s opacity. The challenge is above all to choose the type of explanation to provide, depending on the needs and the people we are addressing. Is the issue primarily one of trust, of ethics, of responsibility? How well do our interlocutors understand AI? For example, we might choose to explain what data were used, how the AI system works, what precautions to take when using its results, and so on. Thus, the more AI develops, the more we will need to develop our ability to communicate about and discuss its results. A new skill set to be explored.
Source: Il faut justifier les décisions prises par un algorithme [Decisions taken by an algorithm need to be justified], interview with Isabelle Bloch by Sophy Caulier, Polytechnique Insights, December 2021.

“Things were better before”… Really?
We often hear this nostalgic chorus: “We can’t trust one another like we used to; people are more and more individualistic; incivility and violence are on the rise…” According to this ditty, our society is facing a form of moral decline.
Psychologists Adam Mastroianni and Daniel Gilbert reviewed hundreds of studies to analyze this worrying phenomenon. They discovered that the myth of “moral decline” has in fact been around since antiquity. Meanwhile, the study of actual behaviors shows stability at worst, and most often an increase in positive behaviors. We are less frequently at war, rules and laws provide a better framework for relationships and reinforce trust, we continue to help one another…
Why do our perceptions differ so significantly from reality? Two cognitive biases are involved: the negativity bias and the memory bias. Our brains give greater weight to negative information, which originally constituted a protective reflex. As for memory, our negative recollections fade more quickly than our positive ones, which allows us to distance ourselves from negative experiences, but can also lead us to idealize the past.
As a result, we cannot help thinking, often erroneously, that things were better before. But knowing why we have this biased perception can help us put it into perspective.
Source: Déclin moral : pourquoi pense-t-on toujours que « c’était mieux avant » ? [Moral decline: why do we always think “things were better before”?], Adam Mastroianni, Polytechnique Insights, November 2023.

When it comes to AI, how can we avoid putting the cart before the horse?
Currently, most companies are pondering how they can best take advantage of AI at their own scale. Applications and experiments are thus flourishing, with varying degrees of success. Very often, the frustrations are commensurate with the hopes. And with good reason: however “intelligent” it may be, AI can ultimately only work from the data we provide it with. To capitalize on it, we therefore need centralized data of sufficient quality and quantity, drawn from a wide range of sources. In many organizations, however, this data is scattered among various business functions, each of which has its own systems.
So, before considering sophisticated generative AI setups, it is useful to carry out a quick diagnosis of your organization. Is tacit knowledge sufficiently formalized? Is it centralized? Are data collection and processing methods sufficiently standardized? Given the nature and quantity of the data collected, is there a risk of triggering biased responses from your AI system? Would you benefit from access to additional sources? This upstream work is essential to ensuring the quality of the AI’s responses and maximizing its potential to support decision-making.
Source: Harnessing AI to accelerate digital transformation, The Choice by ESCP, July 2023.
Investing in humans to reap the benefits of artificial intelligence
Global economic growth will essentially be driven by new technologies in the coming years, in particular by innovations related to artificial intelligence. Yet seizing the opportunities offered by AI entails much more than betting on the best technology.
The companies with the greatest growth potential even consider technology to be only a minor part of the challenge. According to a study conducted by the Boston Consulting Group in 2022 among 700 business leaders from all sectors of activity in 47 countries, the companies best equipped to reap the benefits of the innovations brought by AI follow the “10-20-70” rule. Only 10% of their efforts are invested in the design of their AI models; 20% are dedicated to collecting quality data; and the remaining 70% focus on the organization and its people. These companies primarily aim to attract and retain the best talent, train their employees to obtain the right mix of skills, anchor a culture of innovation, and make their internal processes more agile and collaborative.
An analysis that invites you to put your own investments into perspective: does their actual distribution strengthen the company’s capacity to reap the benefits of the ongoing technological disruptions?
Source: The New Blueprint for Corporate Performance, Amanda Luther, Romain de Laubier, Saibal Chakraborty, Dylan Bolden, Sylvain Duranton, Tauseef Charanya, Patrick Forth, BCG, April 2023.

Distinguishing the truly significant weak signals from the ambient noise
Leaders and managers are often advised to scrutinize their market and customer data to spot possible “weak signals”, those micro-changes or emerging expectations that prefigure future megatrends. But how can we determine whether a given anomaly in the data is a weak signal, or simply a value that diverges from the average? Processing large volumes of data inevitably produces numerous anomalies, but that does not mean they are all significant.
To make a judgment, experts in strategy advise that we evaluate each anomaly according to three dimensions:
- Its dynamic: does the anomaly persist over time? Is it growing rapidly? Do pioneers in your sector seem to be taking a close interest in it?
- Its robustness: does the anomaly appear in several data sets? Is it consistent with other changes in your environment?
- Its impact: does the anomaly reveal a blind spot not covered by current offerings? What would the consequences be if it became widespread?
A simple analysis framework to experiment with in your next strategic thinking sessions.
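For teams that want to turn this triage into a repeatable routine, here is a minimal, hypothetical sketch in Python of how the three dimensions could be recorded as a simple scoring rubric. The example anomalies, the 0–5 scores and the cut-off threshold are illustrative assumptions, not part of the authors’ framework.

```python
from dataclasses import dataclass

@dataclass
class Anomaly:
    """A candidate weak signal, scored 0-5 on each of the three dimensions."""
    name: str
    dynamic: int     # Does it persist and grow? Are pioneers paying attention?
    robustness: int  # Does it appear in several data sets and fit other changes?
    impact: int      # Does it reveal a blind spot? What if it became widespread?

def weak_signal_score(a: Anomaly) -> float:
    """Average of the three dimensions; a genuine weak signal should do well on all."""
    return (a.dynamic + a.robustness + a.impact) / 3

candidates = [
    Anomaly("Customers asking for repairable products", dynamic=4, robustness=3, impact=4),
    Anomaly("One-off spike in weekend orders", dynamic=1, robustness=1, impact=2),
]

# Keep only anomalies that are solid across the board (no dimension below 3),
# then rank the shortlist by average score.
shortlist = sorted(
    (a for a in candidates if min(a.dynamic, a.robustness, a.impact) >= 3),
    key=weak_signal_score,
    reverse=True,
)
for a in shortlist:
    print(f"{a.name}: {weak_signal_score(a):.1f}")
```

The filter on the weakest dimension reflects the spirit of the framework: an anomaly that scores high on impact but fails on robustness or dynamic is more likely noise than signal.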
Source: The Power of Anomaly, Martin Reeves, Bob Goodson, Kevin Whitaker, Harvard Business Review, July-August 2021.
To learn more: