> Manageris Blog
How can we use AI as a partner in our thinking?

New AI tools like ChatGPT can make good allies for accelerating decisions and improving their quality. While decision-making itself should not be delegated to them, it can be beneficial to involve them at three stages:

Ascertaining the context: ChatGPT helps highlight the obstacles and key success factors that other companies have faced in similar contexts. Sample request: We are a technology-sector company based in the PACA region of France. We are having difficulty attracting new talent; what might be the reasons for this?

Defining the possible options: ChatGPT helps expand the range of options and generate counter-intuitive avenues. Sample request: How have some companies succeeded in limiting their dependency on a given raw material?

Evaluating the various solutions: for the time being, ChatGPT cannot compare the advantages of each option. But it can help you become aware of the biases that harm decision quality in certain contexts. Sample request: What are the main risks to keep in mind when recruiting on a tight timeline?

To obtain the best possible contribution from AI tools, interaction and questioning are key: we benefit from refining our questions and digging beyond the AI’s first answers.


Source: Using ChatGPT to Make Better Decisions, Thomas Ramge, Viktor Mayer-Schönberger, Harvard Business Review, August 2023.

 

How can we avoid passing on discriminatory biases to our algorithms?

In 2023, eight out of ten companies planned to invest in machine learning. This sub-field of artificial intelligence detects recurring patterns within data to guide decision-making.

Many decisions can thus be delegated to algorithms: screening candidates for a position, approving a loan… But how can we train our algorithms to avoid biases, in particular discriminatory ones? Experiments have shown that AI risks amplifying discrimination already in play, because it trains on selection histories that are often biased and leave certain populations under-represented.

In rather counter-intuitive fashion, a study of a credit-management algorithm suggests that sharing sensitive personal data with it during training, rather than masking it, meaningfully reduces the risk of discrimination. As a bonus, the profitability of the loans granted by this algorithm also increased by 8%. When it isn't possible to include this data directly in the algorithm's training phase, corrective factors can be applied to rebalance the samples it receives, for instance by increasing the share of traditionally under-represented populations.
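To make the rebalancing idea concrete, here is a minimal sketch of one common corrective factor, random oversampling of under-represented groups before training. This is an illustration of the general technique, not the method used in the study; the function name `rebalance_by_group` and the toy applicant pool are hypothetical.

```python
import random
from collections import Counter

def rebalance_by_group(samples, group_key, seed=0):
    """Oversample under-represented groups until each group appears
    as often as the largest one (a simple corrective factor)."""
    rng = random.Random(seed)
    groups = {}
    for s in samples:
        groups.setdefault(s[group_key], []).append(s)
    target = max(len(members) for members in groups.values())
    balanced = []
    for members in groups.values():
        balanced.extend(members)
        # Duplicate randomly chosen members to close the gap to the target size.
        balanced.extend(rng.choice(members) for _ in range(target - len(members)))
    rng.shuffle(balanced)
    return balanced

# Toy applicant pool: group "B" is under-represented 8-to-2.
pool = [{"group": "A"}] * 8 + [{"group": "B"}] * 2
counts = Counter(s["group"] for s in rebalance_by_group(pool, "group"))
# Both groups now contribute equally to the training sample.
```

In practice, dedicated libraries handle this step, and oversampling is only one of several rebalancing strategies.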


Source: Removing Demographic Data Can Make AI Discrimination Worse, Stephanie Kelley, Anton Ovchinnikov, Adrienne Heinrich, David R. Hardoon, Harvard Business Review, March 2023.

AI versus AI, the fight of the century?

The development of artificial intelligence is greatly increasing cybersecurity threats. Hackers could use these tools to deploy ultra-personalized attacks that leverage a company's specific information. For instance, imagine a phishing call using an AI-generated voice that near-perfectly mimics your boss's tone and conversational style: a science-fiction scenario that is about to become reality…

What if you used the power of AI to protect yourself from this risk? Some companies are already designing software tools, such as ZeroGPT, to identify AI-generated content. AI can also improve cyber-risk detection capabilities. For instance, a customized AI can easily detect suspicious changes in an employee's online behavior (a sudden increase in the amount of data consulted, a significant change in messaging structure, etc.) and, if need be, trigger an alert. Of course, these new tools will raise ethical questions about the protection of personal data, but they will quickly become indispensable. A new field to keep a close eye on.


Source: From ChatGPT to HackGPT: Meeting the Cybersecurity Threat of Generative AI, Karen Renaud, Merrill Warkentin, George Westerman, MIT Sloan Management Review, April 2023. 

What if you systematized “Live my life” initiatives?

More and more companies allow employees to spend a day experiencing the reality of another position. These immersive experiences encourage taking a step back and stimulate empathy. Indeed, they allow employees to look behind the scenes of other departments, to better understand the challenges faced by their colleagues and the efforts they deploy to produce the results expected of them. This exercise is interesting at every level of the hierarchy—as much for the newly-arrived employee discovering the diversity of roles within the company as for the executive wanting to confront reality in the field.

Although the idea is not new, it can be worth systematizing. Banque Populaire Auvergne Rhône Alpes, for example, experimented with the “TestUnMétier” (“TestAJob”) scheme, which puts employees identified for internal mobility or promotion in contact with colleagues who currently occupy the targeted position. This kind of initiative is also useful for allowing “expert” profiles to try out a range of possible career paths beyond taking on managerial responsibilities. It can even be used to encourage employees to train for new, emerging positions within the sector and to facilitate coming changes.

A new loyalty lever to be explored?


Source: Vis ma vie : l'expérience de cohésion d'équipe rêvée ? [Live My Life: A Dream Team-Bonding Experience?], Welcome to the Jungle, June 2023.

 

Getting inside your competitors’ heads

According to a study conducted by John Horn, the author of the book Inside the Competitor’s Mindset, between 30% and 40% of executives believe that their competitors act irrationally over half of the time. Often, this perceived irrationality actually masks an inability to put oneself in the competitor’s shoes. Yet this is an essential skill to increase your capacity to anticipate the competition’s moves instead of being subjected to them.

To achieve this, two avenues deserve to be explored. The first consists in actively engaging in what is known as “cognitive empathy”. Concretely, this means suspending your judgment and putting yourself in your competitor’s position: trying to understand why they think what they think and decide what they decide, factoring in the information they possess and their positioning within the market. The key is to assume that what they do is perfectly rational from their point of view, and then piece together the logic of their actions.

A second, complementary approach consists in using artificial intelligence tools to create predictive analyses. By looking at the data, can we discern how a given competitor responded to price increases in the past? Is there a recurring pattern that could provide us information about their potential response to our next offering to hit the market? An excellent way to stimulate a dynamic view of the competitive game and stay a step ahead.


Source: Author Talks: How cognitive empathy can help you predict the competition’s next steps, interview with John Horn by David Schwartz, McKinsey, June 2023.

 
