
Unilever's AI Ethics: Transitioning from Policy to Practical Implementation

"Ethics is knowing the difference between what you have a right to do and what is right to do." - Potter Stewart


In the rapidly evolving world of artificial intelligence (AI), a growing number of large companies are recognising the importance of integrating ethical considerations into their AI strategies. One such company is Unilever, which has successfully implemented an extensive AI assurance process to ensure its AI applications align with human interests and well-being.

According to recent data, over 70% of large companies globally are building AI applications across various business functions. However, the journey towards ensuring ethical AI practices is still in its early stages. Unilever, a global leader in consumer goods, is setting an example by prioritising a comprehensive AI assurance process that emphasises clear accountability, efficacy risk assessment, and external collaborations in its ethics framework.

Unilever's AI review platform automatically analyses machine learning algorithms for bias against specific groups, scoring the potential risk across domains including explainability, robustness, efficacy, bias, and privacy. The platform requires proposed use cases to include details such as purpose, business case, ownership, team composition, data used, AI technology type, development method, and degree of autonomy.
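Unilever's internal platform is proprietary, but the intake record the article describes can be sketched in a few lines. Everything below is a hypothetical illustration: the class, field names, example values, and the worst-score aggregation rule are assumptions, not Unilever's actual schema or scoring method.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a use-case intake record; the fields mirror the
# details the article says each proposed use case must supply.
@dataclass
class AIUseCase:
    purpose: str
    business_case: str
    owner: str
    team: list[str]
    data_used: str
    technology_type: str         # e.g. "supervised ML"
    development_method: str      # e.g. "in-house" or "vendor"
    degree_of_autonomy: str      # e.g. "human-in-the-loop"
    # Scores (0 = low risk, 1 = high risk) across the domains the
    # article names: explainability, robustness, efficacy, bias, privacy.
    risk_scores: dict[str, float] = field(default_factory=dict)

    def overall_risk(self) -> float:
        """Illustrative aggregation: the worst domain score dominates."""
        return max(self.risk_scores.values(), default=0.0)

use_case = AIUseCase(
    purpose="Demand forecasting",
    business_case="Reduce stock-outs",
    owner="Supply Chain Analytics",
    team=["data scientist", "domain expert"],
    data_used="Anonymised sales history",
    technology_type="supervised ML",
    development_method="in-house",
    degree_of_autonomy="human-in-the-loop",
    risk_scores={"explainability": 0.2, "robustness": 0.3,
                 "efficacy": 0.1, "bias": 0.4, "privacy": 0.2},
)
print(use_case.overall_risk())  # → 0.4
```

Taking the maximum rather than the average is one deliberate (assumed) choice here: a use case that is excellent on four domains but severe on one should still be escalated.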

The core principle of Unilever's AI assurance compliance process is to assess the intrinsic risk of each new AI application, considering both effectiveness and ethics. This approach goes beyond basic policies, leading to the identification of the need for human oversight in certain cases. To manage the AI assurance review process, Unilever collaborates with a London-based firm specialising in AI risk assessment.

Common challenges companies face in implementing ethical AI include algorithmic bias, transparency and explainability, privacy and data protection, data governance and accountability, and skill gaps and integration challenges. To address these issues, leading companies are establishing structured frameworks and proactive measures.

For instance, IBM has created an AI Ethics Board and a principled framework focused on fairness, transparency, and explainability. They provide open-source toolkits like AI Fairness 360 to help developers detect and mitigate bias from early design phases.
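To make the bias-detection idea concrete, here is a minimal, library-free sketch of disparate impact, one of the standard fairness metrics that toolkits such as AI Fairness 360 report. The data and function names below are invented for illustration and are not AI Fairness 360's API.

```python
# Disparate impact: the ratio of favourable-outcome rates between an
# unprivileged and a privileged group. Values well below 1.0 suggest the
# model disadvantages the unprivileged group; 0.8 is the widely used
# "four-fifths rule" threshold. All data here is synthetic.

def favourable_rate(outcomes: list[int]) -> float:
    """Share of positive (1) outcomes in a group."""
    return sum(outcomes) / len(outcomes)

def disparate_impact(unprivileged: list[int], privileged: list[int]) -> float:
    return favourable_rate(unprivileged) / favourable_rate(privileged)

# Synthetic model decisions (1 = loan approved, 0 = rejected).
group_a = [1, 0, 0, 1, 0, 0, 0, 1]   # unprivileged group: 3/8 approved
group_b = [1, 1, 0, 1, 1, 0, 1, 1]   # privileged group: 6/8 approved

di = disparate_impact(group_a, group_b)
print(f"disparate impact = {di:.2f}")  # → 0.50, well below the 0.8 threshold
```

A review platform of the kind the article describes would compute metrics like this per protected attribute and flag any use case that breaches the threshold for human review.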

The platform also uses the European Union's proposed AI Act as a basis for its evaluations, sorting AI use cases into three tiers: unacceptable risk, high risk, and risk too low to warrant regulation. The external partner anticipates future capabilities to aggregate and benchmark data across companies, weigh benefits against costs, compare the efficacy of different external providers on similar use cases, and identify optimal AI procurement approaches.
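That three-tier triage can be sketched as follows. The example use cases in the mapping reflect well-known categories under the Act (social scoring is prohibited, hiring tools are high-risk, spam filters are minimal risk), but the lookup table itself and the default-to-high-risk rule are illustrative assumptions, not how any real platform classifies cases.

```python
from enum import Enum

class AIActRiskTier(Enum):
    """The three buckets the article says the platform borrows from the EU AI Act."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "allowed only with strict obligations"
    MINIMAL = "not high enough to be regulated"

# Illustrative mapping only: real classification under the Act depends on
# its legal text, not on a keyword lookup like this.
EXAMPLE_TRIAGE = {
    "social scoring of citizens": AIActRiskTier.UNACCEPTABLE,
    "CV screening for hiring": AIActRiskTier.HIGH,
    "spam filtering": AIActRiskTier.MINIMAL,
}

def triage(use_case: str) -> AIActRiskTier:
    """Look up an example tier; unknown cases default to HIGH pending human review."""
    return EXAMPLE_TRIAGE.get(use_case, AIActRiskTier.HIGH)

print(triage("spam filtering").name)  # → MINIMAL
```

Defaulting unknown cases to the high-risk tier mirrors the precautionary stance the article attributes to Unilever's process: when in doubt, route the case to human oversight.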

In summary, Unilever's commitment to ethical AI is evident in its comprehensive approach, which emphasises fairness, accountability, and transparency, and aligns with industry best practices. By implementing governance frameworks, engaging diverse stakeholders for oversight, and leveraging external standards and certification, Unilever is leading the way in ensuring trustworthy AI deployment.

Machine learning sits at the heart of this effort: Unilever's review platform monitors algorithms for bias against specific groups, one of the most common pitfalls in ethical AI, while the partnership with the London-based risk-assessment specialist keeps the review process grounded in external expertise and aligned with industry best practice.
