{"id":28560,"date":"2021-11-27T11:43:45","date_gmt":"2021-11-27T11:43:45","guid":{"rendered":"https:\/\/www.thepicpedia.com\/blog\/adobe\/the-building-blocks-of-microsofts-responsible-ai-program\/"},"modified":"2021-11-27T11:43:46","modified_gmt":"2021-11-27T11:43:46","slug":"the-building-blocks-of-microsofts-responsible-ai-program","status":"publish","type":"post","link":"https:\/\/www.thepicpedia.com\/blog\/adobe\/the-building-blocks-of-microsofts-responsible-ai-program\/","title":{"rendered":"The building blocks of Microsoft\u2019s responsible AI program"},"content":{"rendered":"
\n


The pace at which artificial intelligence (AI) is advancing is remarkable. As we look out at the next few years for this field, one thing is clear: AI will be celebrated for its benefits but also scrutinized and, to some degree, feared. It remains our belief that, for AI to benefit everyone, it must be developed and used in ways which warrant people\u2019s trust.<\/p>\n

Over the past few years, principles around developing AI responsibly have proliferated and, for the most part, there is overwhelming agreement on the need to prioritize issues like transparency, fairness, accountability, privacy, and security. Yet, while principles are necessary, having them alone is not enough. The hard and essential work begins when you endeavor to turn those principles into practices, and that is work that we and many of our customers and partners are engaged in now. Below we share some of the decisions we have made along the way, as well as the lessons we have learned, in the hope that they benefit others and shed light on our thinking.<\/p>\n

Governance\u00a0as a foundation for compliance<\/h3>\n

Microsoft\u2019s approach, which is based on our\u00a0AI principles, is focused on proactively establishing guardrails for AI systems so that we can make sure that their risks are anticipated and mitigated, and their benefits are maximized.<\/p>\n

Our responsible AI\u00a0governance model\u00a0borrows from what we have learned from successfully\u00a0integrating privacy, security, and accessibility into our products and services.<\/p>\n

Centrally, we have three teams working together to set a consistent bar for responsible AI across the company:<\/p>\n

The Aether Committee, whose working groups leverage top scientific and engineering talent to provide subject-matter expertise on the state of the art and emerging trends;
\nThe Office of Responsible AI, which sets our policies and governance processes; and
\nThe Responsible AI Strategy in Engineering (RAISE) group, which enables our engineering groups to implement our responsible AI processes through systems and tools.<\/p>\n

We have also come to rely heavily on our Responsible AI Champs, who sit in engineering and sales teams across the company. They raise awareness of Microsoft\u2019s approach to responsible AI and cultivate a culture of responsible innovation in their teams.<\/p>\n

Developing rules to enact our\u00a0principles<\/h3>\n

Our Responsible AI Standard sets out the requirements that teams\u00a0building\u00a0AI systems must follow. It has been an iterative process to prepare the Standard, working closely with our engineering and sales teams to learn what works and what does not. Over time, we will build out a\u00a0set\u00a0of implementation methods that teams can draw upon to meet each of the requirements of the Standard. We expect this to be a\u00a0cross-company,\u00a0multi-year effort and one of the most critical elements for operationalizing responsible AI across the company.<\/p>\n

Drawing\u00a0red lines and working\u00a0through the\u00a0grey\u00a0areas<\/h3>\n

In the fast-moving and nuanced practice of responsible AI, it is impossible to reduce\u00a0all\u00a0the complex sociotechnical considerations into an exhaustive\u00a0set\u00a0of\u00a0pre-defined rules.<\/p>\n

Our sensitive uses review process has helped us navigate the grey areas that are inevitably encountered, and it has led in some cases to new red lines, as when we declined opportunities to build and deploy specific AI applications because we were not confident that we could do so in a way that upheld our principles.<\/p>\n

For example, our sensitive uses review process helped us determine that a local California police department\u2019s real-time use of facial recognition on body-worn cameras and dash cams in patrol scenarios was premature. As a result, we turned down the deal. That same review process also helped us form the view that a societal conversation around the use of facial recognition was needed, and that laws needed to be established. Thus, a red line was drawn for this use case, and Microsoft called for governments to regulate facial recognition in 2018.<\/p>\n

Evolving our mindset and asking hard questions<\/h3>\n

Another key lesson we have learned is the importance of all of our employees thinking deeply about, and accounting for, the sociotechnical\u00a0impacts of the technology they are building. That is why we have developed company-wide training and practices to help our teams build the muscle of\u00a0asking ground-zero questions, such as\u00a0\u201cWhy are we building this\u00a0AI system?\u201d and\u00a0\u201cIs the AI\u00a0technology\u00a0at the\u00a0core\u00a0of this system ready for this application?\u201d<\/p>\n

Some of our teams have experienced galvanizing moments that\u00a0accelerated progress, such as triaging a customer report of an AI system behaving in an\u00a0unacceptable way.\u00a0We have also seen teams wonder whether\u00a0being \u201cresponsible\u201d will be limiting, only to realize later\u00a0that a human-centered approach to AI\u00a0results in not just a responsible product, but a better product overall.<\/p>\n

Pioneering new engineering practices<\/h3>\n

Privacy, and the\u00a0GDPR experience\u00a0in particular,\u00a0taught us the importance of engineered systems and tools for enacting a new initiative at scale and\u00a0ensuring that key considerations are baked in by design.<\/p>\n

Although tooling, particularly in its most technical sense, is not capable of the deep, human-centered thinking that must be undertaken while conceiving AI systems, we think it is important to develop repeatable tools, patterns, and practices where possible. Doing so drives consistency and frees the creative thought of our engineering teams for the most novel and unique challenges.<\/p>\n

In recognition of this need, we are embarking on an initiative\u00a0to build out the \u201cpaved road\u201d for responsible AI at Microsoft \u2014 the set of\u00a0tools, patterns and practices\u00a0that help\u00a0teams\u00a0easily\u00a0integrate\u00a0responsible AI\u00a0requirements\u00a0into their everyday development practices.<\/p>\n

Sharing our efforts to develop AI responsibly<\/h3>\n

We are acutely aware that, as the adoption of AI technologies accelerates, new and complex ethical challenges will arise.\u00a0While we recognize that we do not have all the answers, the building blocks of our approach to responsible AI at Microsoft are designed to help us stay ahead of these challenges and enact a deliberate and principled approach. We are committed to sharing what we learn and working closely with customers and partners to make sure we all understand how to build and use AI responsibly.<\/p>\n

Along those lines, on April 27th and 28th Microsoft chief digital officer, Andrew Wilson, will address Adobe Summit attendees in his session: How to Delight Customers and Increase Market Share Through AI. We look forward to continuing the discussion and learning together.<\/p>\n<\/div>\n

Source: Adobe<\/p>\n","protected":false},"excerpt":{"rendered":"

The pace at which artificial intelligence (AI) is advancing is remarkable. As we look out at the next few years for this field, one thing is clear: AI will be celebrated for its benefits but also scrutinized and, to some degree, feared. It remains …<\/p>\n","protected":false},"author":1,"featured_media":28562,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[22],"tags":[],"_links":{"self":[{"href":"https:\/\/www.thepicpedia.com\/wp-json\/wp\/v2\/posts\/28560"}],"collection":[{"href":"https:\/\/www.thepicpedia.com\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.thepicpedia.com\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.thepicpedia.com\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.thepicpedia.com\/wp-json\/wp\/v2\/comments?post=28560"}],"version-history":[{"count":1,"href":"https:\/\/www.thepicpedia.com\/wp-json\/wp\/v2\/posts\/28560\/revisions"}],"predecessor-version":[{"id":28563,"href":"https:\/\/www.thepicpedia.com\/wp-json\/wp\/v2\/posts\/28560\/revisions\/28563"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.thepicpedia.com\/wp-json\/wp\/v2\/media\/28562"}],"wp:attachment":[{"href":"https:\/\/www.thepicpedia.com\/wp-json\/wp\/v2\/media?parent=28560"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.thepicpedia.com\/wp-json\/wp\/v2\/categories?post=28560"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.thepicpedia.com\/wp-json\/wp\/v2\/tags?post=28560"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}