A case for AI optimized for trust

Co-founder and CEO of TheMathCompany.

With more leaders recognizing the usefulness of AI in unlocking actionable insights and effective problem solving, AI models are increasingly becoming core value drivers for large companies. These models, however, are not without risks. Gartner estimates that through 2022, 85% of AI projects will deliver erroneous results due to bias in the data, algorithms, or the teams managing them.

With AI still in the early stages of adoption around the world, risks such as learning limitations, cyberattacks, and lack of user understanding give rise to trust issues. Not only does this limit the scalability of organizations, it also creates gray areas in the quest to successfully align AI efforts and business strategy, slowing digital transformation.

Considering that the successful adoption of AI requires human trust, there is a need to bridge the chasm of faith that currently persists between AI systems and decision makers. The only way teams can do this is by developing trust-optimized models that balance intelligibility with fairness. But before we delve into how AI systems can be made more accessible and accountable, let’s look at what constitutes this fundamental “trust issue.”

Data bias

Data often comes in incomplete or biased forms, leading to biases being “built into” algorithms. Machine learning (ML) models are prone to algorithmic and cognitive biases, which lead to analytical errors, skewed results, and compromised accuracy. In real-world scenarios, this translates into missteps like the infamous internal AI recruiting engine that, fed historical recruiting data, chose a pool of candidates that was 60% male, highlighting bias against people of other genders.
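
To make this concrete, here is a minimal, hypothetical sketch (the column names and data are illustrative, not from the article) of how a team might check whether a screening model’s shortlist skews toward one group before trusting its output:

```python
# Minimal sketch with hypothetical data: compare selection rates by gender
# in a model's shortlist to surface potential bias before acting on it.
import pandas as pd

shortlist = pd.DataFrame({
    "gender":      ["male", "female", "male", "female", "male", "female"],
    "shortlisted": [1,      0,        1,      1,        1,      0],
})

# Selection rate per group; a large gap between groups is a red flag
# that warrants auditing the training data and features.
rates = shortlist.groupby("gender")["shortlisted"].mean()
print(rates)
print("Disparity ratio (min/max):", round(rates.min() / rates.max(), 2))
```

A disparity check like this is only a starting point, but it illustrates how bias baked into historical data can be surfaced before a model’s output reaches decision makers.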

Low transparency

The accuracy of AI models is often inversely proportional to their interpretability. Add to this “black box” ML algorithms, and it becomes even more difficult for teams to understand why and how a model generates a result, impeding user trust.

Lack of traceability

This lack of transparency carries another risk: poor traceability. With the proliferation of shadow IT services and teams turning to unsanctioned SaaS applications, API security threats have grown: attackers can execute malicious code remotely, making it nearly impossible for teams to trace where in the pipeline input parameters have been altered.

This can have serious implications for data security. For example, a classification application whose data path has been manipulated, one that creates customer groups through social listening, may misclassify customer cohorts, undermining customer-centric decision-making efforts.

Why (and how) should we trust AI?

Until now, business leaders have evaluated the efficacy of AI in terms of performance (how well it works), process (what functions it serves), and purpose (the value it delivers). However, the factors described above make evident the need for a new criterion by which to evaluate the usefulness of AI: trust.

Building trust in AI solutions is critical. Doing so, however, sits at a crossroads between applying simple algorithms for the sake of transparency and opting for opaque models that offer greater efficiency. This dilemma can be resolved if AI is optimized for trust at a few key levels.

Embedding ethics at the heart of AI

The concept of ethical AI goes beyond implementing best practices during model development: it involves changing the very fabric of AI. Infusing AI with ethical values at its core would involve creating governance bodies and introducing enterprise-level AI ethics programs that align with business and industry regulations.

Putting ethics into practice across all systems, for example by accounting for impacts on society, climate, and resources and employing responsible AI-powered technology to optimize supply chains and minimize waste, is a step companies can take in this regard. Institutionalizing ethics in AI in this way will not only help companies resolve issues of bias and data transparency but will also embed a human-centric approach into policy over the long run, bolstering customer trust.

Focusing on humanization and empathy

For us humans to accept that an AI system is trustworthy, it is imperative that the system be human-centric.

AI is already performing near-human functions, recognizing speech through natural language processing (NLP) and interpreting images; however, it lacks a key human trait: empathy. Infusing empathy into AI would mean developing algorithms with humanized decision-making capabilities and more “sensitive” data structures that account for the accuracy, reliability, and confidentiality of data. For example, AI-enabled learning platforms with the capability to observe the stress, confidence, and difficulties students face could help generate personalized course recommendations and encourage individualized learning.

By leveraging empathetically coded AI, businesses can gain granular data centered on the individual, enabling hyper-personalized experiences along with improvements in data quality and integrity, which is key to deepening trust.

Strengthening transparency with explainability

As AI systems become more complex, understanding their rationale for decision making has become nearly impossible. However, teams can gain a better understanding of such systems by incorporating explainability methods at different levels of a model and extending them with machine reasoning (MR), a field of AI that computationally mimics abstract thinking to make inferences about uncertain data.
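
As a small sketch of what such an explainability method can look like in practice (the dataset and model here are illustrative stand-ins, not from the article), permutation importance in scikit-learn reports how much a trained model relies on each input feature:

```python
# Minimal sketch: ranking the features a trained model relies on most,
# using permutation importance from scikit-learn (illustrative data).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much held-out accuracy drops;
# a large drop means the model leans heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1])
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")
```

Outputs like this give non-specialists a concrete answer to “which inputs drove the result?”, the kind of visibility that underpins trust in a model’s decisions.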

Impactful use cases for explainable AI (XAI) include context-aware systems in hospitals, where models can analyze location data, staff availability, and patient data, including vital signs, medical history, and imaging reports linked to electronic health records, to issue reasoned alerts on patient status, mobilize staff, and improve patient outcomes. For leaders, explainable AI provides better visibility into the behaviors and risks of such systems, giving them the confidence to take more responsibility for a system’s actions and, subsequently, fostering greater confidence in the adoption of AI.

A future built on trust

In every industry, AI is rewriting the rules of engagement, and we can only trust it when we understand its inner workings. Imbuing a technology with trust will not only take the risk out of innovation, it will also inspire responsible innovation. At this point, both AI developers and incubators need to make sure they build systems that meet not only legal rubrics but also ethical and emotional ones. Ultimately, AI backed by trust, transparency, and traceability will facilitate unambiguous and robust models, reinforcing confidence in a secure future.



