Microsoft has laid off its entire ethics and society team within its artificial intelligence organization as part of recent layoffs that affected 10,000 employees across the company, Platformer has learned.
The move leaves Microsoft without a dedicated team to ensure its AI principles are tightly tied to product design at a time when the company is leading the charge to bring AI tools to the mainstream, said current and former employees.
Microsoft still maintains an active Responsible AI Office, which is tasked with creating rules and principles to govern the company’s AI initiatives. The company says its overall investment in responsible AI work is growing despite the recent layoffs.
“Microsoft is committed to developing AI products and experiences safely and responsibly, and does so by investing in people, processes, and partnerships that prioritize this,” the company said in a statement. “Over the past six years, we’ve grown the number of people on our product teams and in the Responsible AI Office who, along with all of us at Microsoft, are accountable for putting our AI principles into practice. […] We appreciate the trailblazing work that Ethics & Society has done to help us on our continued journey toward responsible AI.”
But employees said the ethics and society team has played a critical role in ensuring that the company’s responsible AI principles are actually reflected in the design of products being shipped.
“Our job was to … create rules in areas where there were none.”
“People would look at the principles coming out of the responsible AI office and say, ‘I don’t know how that applies,’” said a former employee. “Our job was to show them and to create rules in areas where there were none.”
In recent years, the team designed a role-playing game called Judgment Call that helped designers envision the potential harms that could result from AI and discuss them during product development. It was part of a larger “responsible innovation toolkit” that the team released publicly.
More recently, the team had been working to identify the risks posed by Microsoft’s adoption of OpenAI’s technology across its product suite.
The ethics and society team was at its largest in 2020, when it had roughly 30 employees, including engineers, designers, and philosophers. In October, the team was cut to roughly seven people as part of a reorganization.
In a meeting with the team following the reorganization, John Montgomery, vice president of AI, told employees that company leaders had asked them to move quickly. “The pressure from [CTO] Kevin [Scott] and [CEO] Satya [Nadella] is very, very high to take these newer OpenAI models and the ones that come after them and get them into customers’ hands at a very high speed,” he said, according to audio of the meeting obtained by Platformer.
Because of this pressure, Montgomery said, much of the team was going to be moved to other areas of the organization.
Some team members pushed back. “I’m going to be bold enough to ask you to please reconsider this decision,” an employee said on the call. “While I understand there are business issues involved… what this team has always been deeply concerned about is our impact on society and the negative impacts we have had. And they are significant.”
Montgomery refused. “Can I reconsider? I don’t think I will,” he said. “Because unfortunately the pressures remain the same. You don’t have the perspective that I have, and you can probably be grateful for that. There’s a lot that gets ground up into the sausage.”
In response to questions, however, Montgomery said the team would not be eliminated.
“It’s not that it’s going away, it’s that it’s evolving,” he said. “It’s moving more of the energy into the individual product teams that are building the services and the software, which means the central hub that has been doing some of the work is devolving its abilities and responsibilities.”
Most of the team members were moved elsewhere within Microsoft. Afterward, remaining members of the ethics and society team said the smaller crew made it difficult to implement their ambitious plans.
Decision leaves a fundamental void in holistic AI product design, employee says
About five months later, on March 6, the remaining employees were invited to join a Zoom call at 11:30 a.m. PT to hear a “business-critical update” from Montgomery. During the meeting, they were told that their team was being eliminated after all.
An employee says the move leaves a fundamental void in the user experience and holistic design of AI products. “The worst thing is that we put the company at risk and human beings at risk by doing this,” they explained.
The dispute underscores an ongoing tension for tech giants that build divisions dedicated to making their products more socially responsible. At their best, these teams help product groups anticipate potential misuses of technology and fix any problems before they ship.
But they also have the job of saying “no” or “slow down” within organizations that often don’t want to hear it – or of spelling out risks that could create legal headaches for the company if they surfaced in legal discovery. And the resulting friction sometimes boils over into public view.
In 2020, Google fired ethical AI researcher Timnit Gebru after she published a paper critical of the large language models that would explode in popularity two years later. The resulting furor led to the departures of several more senior leaders within the department, and diminished the company’s credibility on responsible AI issues.
Microsoft focused on delivering AI tools faster than rivals
Members of the ethics and society team said they generally tried to be supportive of product development. But they said that as Microsoft focused on shipping AI tools faster than its rivals, company leadership became less interested in the kind of long-term thinking the team specialized in.
This is a dynamic that deserves close examination. On the one hand, Microsoft may now have a once-in-a-generation chance to gain significant traction against Google in search, productivity software, cloud computing and other areas where the giants compete. When it relaunched Bing with AI, the company told investors that every 1% market share it could take away from Google in search would generate $2 billion in annual revenue.
This potential explains why Microsoft has so far invested $11 billion in OpenAI, and is currently racing to integrate the startup’s technology into every corner of its empire. It appears to be having some early success: the company said last week that Bing now has 100 million daily active users, a third of whom are new since the search engine relaunched with OpenAI’s technology.
On the other hand, everyone involved in the development of AI agrees that the technology presents powerful and possibly existential risks, known and unknown. The tech giants have been careful to signal that they take these risks seriously – Microsoft alone has three different groups working on the issue, even after the removal of the ethics and society team. But given the stakes, any reduction in teams focused on responsible work seems noteworthy.
The elimination of the ethics and society team came just as the group’s remaining employees had focused on their biggest challenge yet: anticipating what would happen when Microsoft released OpenAI-powered tools to a global audience.
Last year, the team drafted a memo detailing the brand risks associated with Bing Image Creator, which uses OpenAI’s DALL-E system to create images based on text prompts. The image tool launched in a handful of countries in October, making it one of Microsoft’s first public collaborations with OpenAI.
While text-to-image technology has proven hugely popular, Microsoft researchers correctly predicted that it could also threaten artists’ livelihoods by allowing anyone to easily copy their style.
“While testing Bing Image Creator, it was discovered that with a simple prompt including just the name of an artist and a medium (painting, print, photography, or sculpture), the generated images were almost indistinguishable from the original works,” the researchers wrote in the memo.
“The risk of brand damage … is real and significant enough to require remediation.”
They added: “The risk of brand damage, both to the artist and their financial stakeholders, and the negative PR to Microsoft resulting from artists’ complaints and negative public backlash, are real and significant enough to require remediation before they damage Microsoft’s brand.”
Additionally, last year OpenAI updated its terms of service to give users “full ownership rights to the images you create with DALL-E.” The move worried Microsoft’s ethics and society team.
“If an AI image generator mathematically replicates images of works, it is ethically suspect to suggest that the person who submitted the prompt has full ownership rights to the resulting image,” they wrote in the memo.
Microsoft researchers created a list of mitigation strategies, including blocking Bing Image Creator users from using the names of living artists as prompts, and creating a marketplace to sell an artist’s work that would be surfaced prominently if someone searched for their name.
Employees say none of these strategies were implemented, and Bing Image Creator launched in the test countries anyway.
Microsoft says the tool was modified before launch to address the concerns raised in the document, and that it prompted additional work from its responsible AI team.
But legal questions over the technology remain unresolved. In February 2023, Getty Images filed a lawsuit against Stability AI, makers of the AI art generator Stable Diffusion. Getty accused the AI startup of improperly using more than 12 million images to train its system.
The accusations echoed concerns raised by Microsoft’s own AI ethicists. “It is likely that few artists have consented to their works being used as training data, and likely that many are still unaware of how generative technology makes it possible to produce online image variations of their work in seconds,” employees wrote last year.