Law firms are building A.I. expertise as regulation looms

This is the web version of Eye on A.I., Fortune’s weekly newsletter covering artificial intelligence and business. To get it delivered weekly to your in-box, sign up here.

A.I. regulation is coming. The European Union’s proposed Artificial Intelligence Act and the U.S. Federal Trade Commission’s bluntly worded advisory, both of which I wrote about in this newsletter two weeks ago, are just the first stakes in the ground in what will likely be a global effort to construct fences and guardrails around uses of the technology.

And where there are laws, there will be lawyers. A number of large law firms have begun developing practice specialties around artificial intelligence. But so far, few of the lawyers in these practices have a deep understanding of machine learning, software engineering, or data science. That’s what inspired Patrick Hall, a George Washington University data scientist, and Andrew Burt, a visiting fellow at Yale University Law School’s Information Society Project and chief legal officer at data privacy company Immuta, to start a new kind of law firm: Burt & Hall LLP, which, in a buzzy bit of techno-marketing, does business as BNH.ai.

What makes BNH.ai unique is that it combines deep expertise in data science and machine learning with deep expertise in law and regulation, especially around data privacy. Burt and Hall founded the firm in Washington, D.C., because it is one of the few places in the U.S. where non-lawyers are allowed to be equal equity partners in a law firm. Although founded just before the coronavirus pandemic struck, the firm already has a growing roster of clients that includes some of the largest U.S. technology companies, as well as companies in financial services, insurance, and healthcare.

“We are able to get extremely hands on and, in certain cases, we are even writing code to correct models or make new models and that sets us apart from most firms out there,” Hall tells me. Going forward, he says, more law firms and legal departments are likely to combine machine learning and legal expertise, given the complexity of both fields. The problem, he says, is that today there is little overlap between people in the two spheres, and they speak completely different languages.

Hall was previously a senior data scientist at a number of enterprise software companies. He says the very last question many teams would ask when building an A.I. application is, “Is it legal?” It should be among the first questions asked, he says, and it needs to be answered by someone with legal expertise as well as an understanding of how the particular model or algorithm works.

Just because A.I. is an emerging area of law doesn’t mean there aren’t plenty of ways companies can land in legal hot water today using the technology. He says this is particularly true if an algorithm winds up discriminating against people based on race, sex, religion, age, or ability. “It’s astounding to me the extent to which A.I. is already regulated and people are operating in gleeful bliss and ignorance,” he says.

Most companies have been lucky so far—enforcement agencies have generally had too many other priorities to take too hard a look at more subtle cases of algorithmic discrimination, such as a chatbot that steers white customers and Black customers toward different car insurance deals, Hall says. But he thinks that is about to change—and that many businesses are in for a rude awakening.

Working with Georgetown University’s Center for Security and Emerging Technology and the Partnership on A.I., Hall was among the researchers who have helped document 1,200 publicly reported cases of A.I. “system failures” in just the past three years. The consequences have ranged from people being killed (in the infamous case of Uber’s self-driving car striking a pedestrian in Arizona), to false arrests based on facial recognition systems misidentifying people, to individuals being excluded from job interviews.

He thinks that data scientists and machine learning engineers need to adopt a mindset more similar to that of civil or aerospace engineers or cybersecurity experts, or, for that matter, lawyers: the “adversarial” point of view that assumes all systems are fallible and that knowing exactly where a system can fail, and what the consequences of that failure would be, is vital. People building A.I. should assume other people will try to game the system, abuse it, or fool it in bizarre ways. Too often today, he says, machine learning teams are rewarded for building systems that perform well on average, or beat benchmark tests, even if those systems are vulnerable to catastrophic failures when presented with unusual “corner” cases.
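By way of illustration, here is a minimal, hypothetical sketch, in Python, of the gap between average-case and corner-case evaluation. The decision rule and the data are invented stand-ins, not anything from Hall’s work or any real benchmark: the point is simply that a rule which scores well on the inputs it is normally tested against can fail badly the moment one input drifts far outside its expected range.

```python
# A minimal, hypothetical sketch (not any real system) of why average-case
# benchmarks can hide corner-case failures. The "model" is a hand-written
# stand-in: it approximates the true rule (first feature > 0.5) but also
# leans slightly on a second feature, which is harmless on typical data.
import numpy as np

rng = np.random.default_rng(0)

def true_label(x):
    return (x[:, 0] > 0.5).astype(int)

def model(x):
    # Learned-looking rule: mostly right on in-range data, but its reliance
    # on x[:, 1] becomes a liability when that feature goes out of range.
    return (x[:, 0] + 0.1 * x[:, 1] > 0.55).astype(int)

# Typical benchmark data: both features in their expected [0, 1] range.
x_typical = rng.uniform(0, 1, size=(100_000, 2))

# Corner cases: the second feature is corrupted, spoofed, or simply far
# outside the range seen during development.
x_corner = np.column_stack([
    rng.uniform(0, 1, size=100_000),
    rng.uniform(-10, 10, size=100_000),
])

for name, x in [("average case", x_typical), ("corner case", x_corner)]:
    acc = (model(x) == true_label(x)).mean()
    print(f"{name}: accuracy = {acc:.1%}")
# Typical output: roughly 97% on average-case data, roughly 60% on corner cases.
```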

Hall says he welcomes the FTC’s recent signals that it plans to crack down on companies if their A.I. systems discriminate or if they use deceptive or misleading practices, whether in gathering the data used to train an A.I. or in marketing their A.I. software. “I think that will shake up the future of machine learning in the U.S. and I think that is a good thing,” he says. “There’s a lot of snake oil and sloppiness that hurts people today.”

Many businesspeople don’t love lawyers. But this may be a case where we should all be grateful to see the suits and briefcases knocking on the door.

Here’s the rest of this week’s A.I. news.

Jeremy Kahn
@jeremyakahn
jeremy.kahn@fortune.com

***

Before you read on, a quick plug for The Fortune Global Forum, coming up June 8-9. As business and society emerge from the most disruptive period in modern history, there are many leadership lessons learned that provide us with hope for building a better future: The importance of cooperation and collaboration to tackle massive global challenges. The need for authentic corporate purpose in building lasting, thriving organizations. And the essentiality of focusing on the well-being of employees and the communities that they live in. The 2021 Fortune Global Forum will draw on the experiences of attendees to hear about how leadership has changed as a result of the extraordinary events of the last year.

Join us and our sponsors, McKinsey & Company, Salesforce, and The Project Management Institute (PMI), as we hear from many of today’s top business leaders on everything from digital transformation to tackling climate change to the future of the NFL. Find out more here.

A.I. IN THE NEWS

Autonomous truck companies are rushing to go public. PlusAI, a self-driving truck company based in Silicon Valley but with substantial R&D operations in China, is planning to go public through a merger with special purpose acquisition company (SPAC) Hennessy Capital Investment Corp. V in a deal that would bring in $500 million in new capital and value the company at about $3.3 billion, The Wall Street Journal reports. The paper notes that this is the second self-driving truck business to reach the U.S. public markets, after rival TuSimple Holdings held an IPO in April that raised about $1 billion and valued that company at nearly $8.5 billion, and that other autonomous trucking companies are likely to follow Plus and TuSimple onto public exchanges.

The U.S. Postal Service is using A.I. to speed up deliveries. That's according to a story in tech publication The Register. USPS has an A.I. project called the Edge Computing Infrastructure Program (ECIP, pronounced EE-sip) that uses machine learning models running on Nvidia hardware to perform image recognition tasks. It replaces custom-built character recognition systems that could read addresses and mailing labels; the new system handles those as well as barcodes, and it is better at deciphering addresses even when they are smudged or damaged.

Concerns are mounting over PimEyes, a facial recognition system that anyone can use to find photos of anyone on the Internet. My Fortune colleague Aaron Pressman previously wrote about this mysterious facial recognition service, which offers to help users find any and all photos of themselves that exist on the Internet. The company says the service is intended only for people searching for photos of themselves, but there is no mechanism to prevent people from using it to find photos of anyone, which has set off alarm bells among data privacy experts, according to a CNN report. One concern is that people, possibly including law enforcement or prospective employers, will use the service to gather information about others. (The service offers some limited functionality for free and then more advanced features to those who pay $29.99 per month, or to businesses that pay $299.99 per month to conduct multiple searches.) Another is exactly how PimEyes got hold of enough images to train such a good facial search engine. The company has said it did not scrape social media sites, a method that violates the terms of service of many social media companies and that got another controversial facial recognition company, Clearview, into trouble. A final mystery is who exactly is behind the company, which began life as a Polish startup but claims to have been under new ownership since 2020 and is registered in the Seychelles. On this, the CNN report shed little new light—other than to note that the time stamp on some e-mail correspondence with the company seemed to indicate the people answering the emails were in the same time zone as Poland and other Eastern European countries.

Minority affinity groups for A.I. researchers say they'll spurn Google funding in protest over how the company treated Timnit Gebru, Margaret Mitchell and Christina Curley. Google continues to suffer fallout from the way it forced the co-heads of its A.I. Ethics research team out of the company and from its treatment of other minority staff members, including Curley, a former Google recruiter and a Black woman who identifies as queer. In a joint statement released earlier this week, the groups Black in AI, Queer in AI, and Widening NLP, which all push for more diversity and minority representation in A.I. research, said they would no longer accept any sponsorship money or funding from the search giant, according to a story in Wired. The groups also called on academic conferences to reject funding from Google and on policymakers to adopt stronger whistleblower protections for A.I. researchers working in companies.

EYE ON A.I. TALENT

Healx, a Cambridge, England-based startup that is using A.I. methods to search for treatments for rare diseases, has appointed Andrew Watson as its vice president of artificial intelligence, the company announced. Watson was previously the director of machine learning at consumer products company Dyson. Prior to that, he also led technical teams for GCHQ, the U.K. government's signals intelligence agency.

Alteryx, a company that helps businesses implement automated analytics, including machine learning, has appointed Paula Hansen as chief revenue officer, the company said. Hansen was previously chief revenue officer at enterprise software giant SAP Customer Experience and before that was a long-time executive at Cisco.

Deep Genomics, a Toronto-based startup that uses A.I. to analyze genomic sequences and search for possible new drug therapies, announced Jeffrey Brown will be its new vice president, pre-clinical research. Brown was previously with Voyager Therapeutics and Wave Life Sciences.

Guard Dog Solutions Inc., a Salt Lake City-based network security company that does business as guardDog.ai, has appointed Rick Wickham as senior vice president, ecosystems, the company said in a statement. Wickham previously held a variety of sales, business development, and legal-related roles at Microsoft.

EYE ON A.I. RESEARCH

Should A.I. systems be allowed to consider race in order to mitigate racial disparities? And what if you build an algorithm to help a disadvantaged group but that group doesn't trust your tech enough to actually use it? Those are some of the key questions raised in a recent study by four researchers from Harvard University, Carnegie Mellon University, and the University of Toronto that examined Airbnb's experience with a smart pricing algorithm between 2015 and 2017. The study was published in late March in the Social Science Research Network's (SSRN) working papers series.

Airbnb's smart pricing algorithm was supposed to help rental hosts figure out what to charge for their properties in order to maximize their earnings based on a wide variety of variables that the property marketplace takes in, including location, seasonal market demand, and amenities. One factor that the Airbnb smart pricing system did not look at was race. But it turns out that a host's race makes a big difference to his or her potential revenue. On average, before using the algorithm, Black hosts earned $12.16 per day less than white hosts for equivalent properties. The researchers found that this disparity was the result of differences in occupancy rates: there was 20% less demand for properties run by Black hosts.

Adopting the smart algorithm, though, was a big help to Black hosts. By lowering prices slightly (on average 5.7%), all hosts improved their revenues by an average of 8.6%. But Black hosts saw an even bigger uptick in revenue, with their daily earnings more than doubling when they adopted the smart pricing system. In fact, it closed 71% of the gap between white hosts and Black hosts. So, for all hosts, the A.I. pricing system was a good thing, but for Black hosts it was a great thing. And this was true even though the smart pricing algorithm did not take the race of the host into account and therefore, as the researchers note, may still have set prices for Black hosts higher than they should have been to maximize their earnings. But, as the researchers also note, creating an algorithm that explicitly looked at the hosts' race and set different prices accordingly for otherwise equivalent properties would be illegal in the U.S.

Now, here's the rub. Far fewer Black hosts decided to use the smart pricing algorithm than white hosts. In fact, Black hosts were 41% less likely to adopt the A.I. system than white ones, with the result that, after the smart pricing system was introduced, Airbnb saw the overall earnings gap between white and Black hosts actually widen significantly. (A rough back-of-the-envelope illustration of how that can happen follows the researchers' quote below.) One way to mitigate this effect would be for Airbnb to explicitly consider the race of its hosts and do more to encourage Black hosts to trust and use the smart pricing system. It would be even better if Airbnb's smart pricing algorithm also took race into consideration when suggesting pricing levels. But, of course, Airbnb can't do that under current law. As the researchers wrote:

"For policy makers, our study shows that when racial biases exist in the marketplace, an algorithm that ignores those biases may not do a good job in reducing racial disparities. Hence, regulators should consider allowing algorithm designers to incorporate racial differences in their algorithms if they can demonstrate that by doing so they can reduce racial disparities. This recommendation, although highly sensitive and currently illegal, is in alignment with the emerging literature on fairness of ML algorithms...For managers, our results show a much lower rate of adoption of the algorithm amongst black hosts as compared to their white counterparts. This is consistent with the literature that has found low rates of adoption of new technologies amongst African-Americans (Mossberger et al. 2006). Thus managers should devise strategies to  encourage black hosts to adopt the algorithm. Otherwise, an algorithm that could reduce disparities may end up increasing them."

FORTUNE ON A.I.

The pandemic pressed pause on A.I. investment—by Alan Murray and Katherine Dunn

New European regulations require tech companies to prove A.I. is trustworthy—by Fortune Editors

Why rivals Microsoft, Google, and IBM are teaming up on a big cloud project—by Jonathan Vanian

This is how the global chip shortage will—by Eamon Barrett

BRAIN FOOD

Why can't we all just get along? There's been far too much emphasis on creating A.I. systems that can equal or better human performance at various tasks. That's been for a few reasons: if we are building intelligent machines, it makes sense to benchmark them against the intelligence we already know best, our own. Another reason is that A.I. has often been perceived as an individual system or entity taking on the world, as the authors of a recent op-ed in Nature highlight. Yet another is that, despite protestations to the contrary, a lot of what excites businesses about using A.I. is ultimately labor cost savings. But there's a lot of evidence that even better outcomes can be achieved when humans and A.I. systems assist one another, and, as A.I. systems become more capable, figuring out how best to cooperate with them, and teaching them to cooperate with people, ought to be a priority. Or, at least, that's the argument of a group of A.I. researchers from DeepMind (the London-based A.I. company owned by Google parent Alphabet), Microsoft, the University of Oxford, and the University of Toronto. They've come together to form the Cooperative AI Foundation, with $15 million in backing from the Swiss-based Center for Emerging Risk Research, a charity whose board members include Rauiri Donelly, who has been involved in cryptocurrency trading, and several other people affiliated with the "effective altruism" movement.

In announcing the new coalition in that Nature op-ed, the researchers write:

"AI needs social understanding and cooperative intelligence to integrate well into society. The coming years might give rise to diverse ecologies of AI systems that interact in rapid and complex ways with each other and with humans: on pavements and roads, in consumer and financial markets, in e-mail communication and social media, in cybersecurity and physical security. Autonomous vehicles or smart cities that do not engage well with humans will fail to deliver their benefits, and might even disrupt stable human relationships. We need to build a science of cooperative AI. As researchers in the field and its governance, we argue that it is time to prioritize the development of cooperative intelligence that has the ability to promote mutually beneficial joint action, even when incentives are not fully aligned. Just as psychologists studying humans have found that the infant brain does not develop fully without social interaction, progress towards socially valuable AI will be stunted unless we put the problem of cooperation at the centre of our research."

The researchers say there are several types of cooperation that ought to be studied: A.I. software collaborating with humans, A.I. software working alongside and with other A.I. software, and A.I. that helps humans collaborate and cooperate with one another. And they propose that the field develop a set of benchmarks, perhaps based initially around games that require players to cooperate, as a way of pushing towards cooperative A.I.

"Cooperative AI research will similarly gain momentum if investigators can devise, agree on and adopt benchmarks that cover a diverse set of challenges: playing cooperative board games, integrating into massive multiplayer video games, navigating simplified environments that require machine–human interaction, and anticipating tasks as a personal assistant might. Similar to the state-of-the-art in language modelling, considerable effort and creativity will be needed to make sure these benchmarks remain sufficiently rich and ambitious, and do not have socially harmful blind spots."

Our mission to make business better is fueled by readers like you. To enjoy unlimited access to our journalism, subscribe today.