As a machine learning scientist and CEO of a leading AI company, I have devoted my working life to building Artificial Intelligence (AI) systems that can improve human lives. AI’s ability to process large quantities of data goes far beyond the abilities of any human expert. From enabling personalised evidence-based healthcare for patients and better foster care for children, to its use for complex medical research: AI has the potential to transform our society for the better.
Worldwide discussions on the ethical principles of AI are now taking place, and moral and social resistance to the impact of this technology drives most of them. I think this resistance offers great potential. We must embrace it as a wake-up call: treating it not as a direct cause for alarm, but as a valuable societal warning sign.
This is the second in a series of blog posts in which I will share my perspectives on the different groups resisting AI technology. You can also read my first post, on cultural resistance.
Only 25% of companies in Europe are adopting Artificial Intelligence, which means 75% of companies are facing barriers to adoption. As a vendor in this space, I have personally witnessed corporate resistance to AI, mainly from middle and top management. What are the main barriers that keep them from adopting this technology?
The corporate world lacks a thorough understanding of AI and does not see how it differs from traditional software. AI requires wider access to corporate and third-party data and shifts the software development workflow. This becomes a major barrier to adoption.
Traditionally, developers wrote software as a sequence of hard-coded rules. The human instructs the machine, line by line, and once the software is developed it can run on any computer. This type of software does not need to learn from data, update itself, or adapt to changing environments. AI, by contrast, is software that updates itself: it learns from corporate data and cannot simply be installed on a computer once and left alone. For an AI system to work, company leaders must first share their data to train a model, and then keep feeding it more data for continuous updates.
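The contrast above can be sketched in a few lines of code. This is a deliberately toy, hypothetical example (the task, the threshold-learning rule, and the data are all invented for illustration): a hand-written rule behaves the same forever, while a "learned" rule is derived from company data and shifts as more data arrives.

```python
# Traditional software: the decision threshold is a rule written by a
# developer. It never changes unless someone edits the code.
def flag_high_value_rule(amount):
    return amount > 10_000  # fixed, hard-coded rule


# Minimal stand-in for model training (NOT a real ML algorithm): derive
# the threshold from labelled historical data by taking the midpoint
# between the largest "normal" and the smallest "high value" example.
def learn_threshold(amounts, labels):
    positives = [a for a, y in zip(amounts, labels) if y]
    negatives = [a for a, y in zip(amounts, labels) if not y]
    return (max(negatives) + min(positives)) / 2


# Hypothetical labelled corporate data the company must share up front.
amounts = [2_000, 4_500, 8_000, 15_000, 22_000]
labels = [False, False, False, True, True]

threshold = learn_threshold(amounts, labels)  # midpoint of 8,000 and 15,000


def flag_high_value_learned(amount):
    return amount > threshold


# Feeding the system new data changes its behaviour -- no code edit needed.
amounts.append(9_000)
labels.append(True)
threshold = learn_threshold(amounts, labels)  # midpoint shifts downward
```

The point is not the toy maths but the workflow: the rule-based function is finished the day it is written, whereas the learned threshold only exists because data was shared, and it keeps moving as the data does.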
This is a big shift in mindset for corporate businesses. Having to open your books to AI vendors and share your data to build this software can be a key deterrent - especially in the uptake stage.
Heavily regulated industries are generally more reluctant to innovate but quick to identify risks, such as a potential loss of control when sharing data in the cloud, or the risk of analytical mistakes resulting from AI output. While this keeps them from early adoption, they tend to follow quickly once success stories emerge.
Corporate resistance is far from absolute. In my work as CEO of OKRA Technologies, an enabler of the use of AI in businesses, I see early adopters who decide to invest in AI innovation even among traditionally risk-averse groups. These individuals are aware of the benefits and risks surrounding AI, but nonetheless choose to take a calculated risk to meet business needs. They often have a personal preference for innovation and act as industry leaders.
For middle management, AI signals a threat to the organisational structure. AI promises to shrink the number of direct reports, which may result in a flattened hierarchy and affect a manager's sense of pride and importance. We should not underestimate the extent to which resistance can be emotional, or emotional-political.
We cannot ignore that many corporates stand to lose money, influence or relevance as the use of AI increases. For example, certain organisations may obstruct the use of AI by preventing the spread of their own large proprietary data sets. Conventional consultancies may emphasise the traditional values of bespoke consulting and the human personalisation of their delivery. Traditional software businesses will underline the importance of rule-based software that does not require sharing massive amounts of data, and of logic that can be designed explicitly by humans.
How to reduce the risk of adoption
To increase trust in Artificial Intelligence adoption at the B2B level, we need to create sandboxing frameworks for testing new AI-driven technology. We also need regulatory testing of new business models, as is already the case in the fintech industry. This requires collaboration at the ecosystem level.
It also highlights the need to explain AI technology in simpler terms and make its methodologies more transparent: for example, in how outcomes are achieved and how decision-making is supported at different levels of the organisation. The key performance indicators for AI adoption should therefore include not only accuracy and results, but also adherence to AI-driven insights throughout an organisation.
It also reminds us that technological change can be emotionally difficult and potentially risky for business outcomes. Business owners are right to remember the strengths of their original processes and, to a certain extent, to protect them.
However, contrary to public debate, AI does not necessarily have to be disruptive. Instead, it can augment current processes to be more effective and efficient. There is no need to overhaul a functioning system, but there is almost always a need for improvement. With sufficient attention to design, AI promises to slowly but surely enable that improvement.
Certain groups are threatened by AI and will form a fierce resistance. Regulators must consider the balance between data ownership and accessibility in order to allow AI to be properly and comprehensively adopted.
Policymakers should consider how increasingly irrelevant skills could be re-applied in different ways, or, where they cannot be, what role regulation could play in ensuring social security and the development of new working skills.
- Join the European AI Alliance to read other blog posts and discussions
- Read more about the High-level Expert Group on Artificial Intelligence