
Our commitment to the responsible development and deployment of artificial intelligence

Artificial intelligence (AI) can amplify human capabilities and help organizations derive value and actionable insights from data. AI holds the potential to solve some of society’s biggest, most pressing challenges, but the “artificial” in AI also poses risks and can have adverse impacts on society, because it takes some of the human element out of the equation. HPE is committed to the responsible, ethical development and deployment of AI as a means to advance the way we live and work.

Respecting human rights is a fundamental belief at HPE and is embedded in the way we do business. Our company-wide human rights impact assessment (HRIA) in 2019 highlighted responsible product development and responsible product use as two of HPE’s most pressing human rights risks. Within this space, AI is of particular importance due to the unique risks it can create for society – especially for minority communities and for people who are vulnerable, oppressed, or economically disadvantaged.

To address the unique human rights risks associated with AI, HPE has established its first-ever AI Ethics Advisory Board. The Board is responsible for ensuring that the development, deployment, and use of AI products and solutions by HPE and our customers align with our ethical standards.

Moral Code: The Ethics of AI
Advances in computing and the availability of entirely new data sets are ushering in AI capable of reaching milestones that have long eluded us: curing cancer, exploring deep space, understanding climate change. That promise is what fuels our society’s excitement and investment in AI, but it also raises the need for real, honest dialogue about how we build and adopt these technologies responsibly.

Our ethical principles

While we have always held ourselves to the highest ethical standards as a company, we felt it was necessary to give specific form and structure to what ethical conduct means for artificial intelligence. The Board’s first action was to define a set of AI Ethical Principles to guide how we develop and use AI responsibly, so that it delivers beneficial outcomes for people, businesses, and public services. In our view, AI should be:

1. Privacy-enabled and secure – respect individual privacy and be secure.

2. Human focused – respect human rights and be designed with mechanisms and safeguards, such as support for human oversight and the prevention of misuse.

3. Inclusive – minimize harmful bias and support equal treatment.

4. Responsible – be designed for responsible and accountable use, inform an understanding of the AI, and enable outcomes to be challenged.

5. Robust – be engineered with quality testing built in, and include safeguards to maintain functionality and to minimize misuse and the impact of failure.

The next step is to focus on how HPE can most efficiently and effectively put these principles into practice, identifying and mitigating risks before the technology is put into use. We are establishing procedures to assess the ethical risks of the AI we use or develop, and we are partnering with functions and business groups across the company to develop the tools and guidance they need to understand and mitigate those risks.

AI is at the forefront of technology, and much of its potential remains untapped. The Board is working closely with Hewlett Packard Labs and various R&D teams across the company to identify and accelerate the most promising innovations that can make HPE a leader in this area. We recognize that our approach to managing the ethics of AI will need to evolve as we learn, and as best practices and regulatory guidance emerge.

As our CEO, Antonio Neri, has said, “We believe the enterprise of the future will be edge-centric, cloud-enabled, and data-driven,” and HPE has an important role to play in supporting an ethical approach to the use of AI in that new world.
