How can AI help build equitable societies?

Artificial Intelligence and algorithms have transformed business efficiency but there’s a dark side to them as well.

AI technologies can introduce bias and discrimination, and there are fears that they could heighten geopolitical risk and have adverse social impacts. Keith E. Sonderling, Commissioner of the U.S. Equal Employment Opportunity Commission, and François Candelon, Global Director of the BCG Henderson Institute, tell us how AI and algorithms can instead be used to create more equitable societies.

Eric Schmidt, former CEO of Google, has reportedly committed more than $100 million to AI research aimed at solving hard problems like bias, harm and geopolitical conflict. What do you say to that?

Keith: I think that's a good thing that we should all support. Although it (AI & algorithms) could do a lot of good, there is the potential for serious harm here if we do not address these issues on the front end. And I believe the technology is still so new and still being developed, that addressing this now and putting money into it now will really help it flourish in the marketplace.

You work with the US government and have previously stated that unless used carefully, algorithms could multiply inequalities in the job market. You have also asked the question, ‘do robots care about your civil rights?’ Could you elaborate on your thinking?

Keith: AI can help eliminate bias from the earliest stages of the hiring process, and that's a very, very good use of AI. But at the same time, if it is poorly designed and carelessly implemented, it can scale discrimination far beyond what any one person could, because decisions are based on the predictions AI makes.

We can't be against AI, given the amount of money going into it and the number of developers working on it across the world. It is here, and it is not going away. The question now is how we use it ethically, and how we use it in accordance not only with the laws I'm concerned about in the United States but, because it is being used globally, with civil rights across the world. How do we all get involved in that conversation?

What are the risks as a society from a geopolitical, environmental and economic point of view?

François: This is a revolution. Therefore, we will have a typical Schumpeterian creative-destruction cycle: jobs will change. Some will be destroyed, some new jobs will appear. But if a job is lost somewhere, it won't necessarily be created in the same place. Some locations will lose while others win. Given how advanced they are compared to other regions, the US and China are better positioned to take advantage of it. So we have real geopolitical questions.

The other risk is less geopolitical and less about the economy and employment, but we need to recognise it: while AI can help fight climate change and global warming, it has a cost as well. For instance, training one single model can be the CO2 equivalent of five cars across their lifetimes!

Keith: I agree. It's a massive element. AI has the potential to touch every single aspect of life, at a scale we've never seen before. It's being used everywhere, including areas with significant civil rights implications: your livelihood and your ability to earn a living. That's why we all need to have this discussion early, before it gets too far down the road and causes real inequities.

What mechanisms could ensure that algorithms and AI are developed to a high standard, shared, and made widely available?

François: I very often take Finland as an example. All the online courses you've developed to make your population more aware of what AI is; training, upskilling and reskilling are the best way to understand what AI can and cannot do. This is why governments have a critical role to play in creating this awareness across the entire population.

Keith: People designing and developing AI are not labour and employment lawyers, nor are they human resources professionals. They're computer scientists and engineers who know how to code. So it is on us to inform and educate them about where AI is being used and about the ethical and legal obligations involved, so those can be built into the system. Regulators need to have a seat at the table with developers earlier. That is unique; we haven't seen that before.

What, in your opinion, is the level of AI maturity amongst the C-suite in corporate life today?

François: Very often, the C-suite doesn't understand what AI does. They are not counterbalancing the decisions data scientists take, even when no one is trying to be malicious. So maybe we need an AI driver's licence for CXOs, so that they understand the rules and the impact it can have.

Keith: I completely agree. How do we get the C-level, who may not be involved in the purchase, to ensure that the company's use of AI is lawful and ethical, no matter what it's being used for, and that this is ingrained just like other areas of ethical business practice? I don't think they understand the risk or have the systems in place, as they do for other areas of their business where potential liability is long established.

How can AI be leveraged in a positive way to build equitable societies?

François: With AI, you have four typical advantages: ultra-granularity of data, the ability to handle massive amounts of data, real-time decision-making at scale, and continuous learning and improvement. When you combine these four elements, you can do many great things.

For instance, Egypt wanted to improve its corn crops but didn't know where they were located. By using data, the country was able to customise opportunities and resource allocation. I have plenty of examples about forecasting floods and droughts, where you can do something extremely specific.

But we need to be careful not to become too dependent. Keeping human supervision is critical, but it won't be easy. Right now it's okay, because we have people who have gone through this learning curve and are therefore able to challenge AI. But in the next 5-10 years, how will people be able to challenge it? For me, this is one of the key questions we need to ask ourselves.

How can AI be used to bridge the social divide?

François: Some companies are making a mistake by trying to optimise humans on one side and AI on the other. Companies need to think about human plus AI as a single system. AI can be a great support, and they need to make sure its full potential is leveraged, but with humans on top of it, because we have different capabilities; we're able to deal with ambiguity much better than AI.

Keith: In the United States, employers have an obligation under the Americans with Disabilities Act to engage with an employee, to make sure that they're okay and that if they're not hitting their targets because of a disability, an accommodation is made. At this point, outside of science fiction movies, I don't know that AI can actually read human emotion, see that a person is struggling because of a disability, or recognise that they need an accommodation in the workplace, yet a lot of companies are simply turning these managerial functions over to AI. A human still must be in the loop.

François: Every year we do a joint report with MIT on the impact of AI on corporations. In the 2020 report, we found that the companies investing in AI and getting significant financial benefit from those investments were the ones creating a kind of mutual learning between humans and AI.

Keith: I think that is universally applicable. That's it; that's the secret formula!

This conversation has been abridged. To hear the complete conversation, please listen to the podcast.