The Risks of AI in I&R – Part 1

Last month I presented at the 2024 Inform USA Grand Gathering in Grand Rapids, Michigan, and my session – the Risks of AI in I&R – was probably the most popular I’ve delivered in 10 years of these conferences. So I’ll recap it here in a series of posts. (Here are the slides from my original presentation, with presenter notes.)

Our first order of business in this analysis is to define “AI” – which is a term often used in unhelpfully abstract ways. For our purposes, we can understand “AI” as algorithmic technologies.

Algorithms are sets of computable statements that basically boil down to:

“IF [this], THEN [that].”
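
To make that concrete, here is a toy sketch in Python of what a single such rule might look like. The scenario, field names, and threshold are all invented for illustration, not drawn from any real system:

```python
# A toy "IF [this], THEN [that]" rule of the kind an algorithmic system might
# apply. The scenario, field names, and threshold are invented for illustration.
def refer_to_food_assistance(household_income: float, household_size: int) -> bool:
    # IF reported income falls below a (made-up) threshold for the household
    # size, THEN flag the household for a food-assistance referral.
    threshold = 15_000 + 5_000 * household_size
    return household_income < threshold

# Example: a three-person household reporting $28,000 in annual income
print(refer_to_food_assistance(28_000, 3))  # True under these invented numbers
```

Even a toy rule like this quietly encodes its designer’s assumptions – who chose that threshold, and based on what data? That question is where the trouble discussed below tends to begin.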

The big idea behind “AI” is that, given a sufficiently large amount of data feeding a sufficiently complex set of algorithms, these technologies supposedly become “intelligent” – that is, capable of performing tasks as well as or better than humans.

Sounds magical! So what’s the problem?

(In sum: garbage in, garbage out.) 

It’s worth noting up front that algorithmic technologies are far from new. They’ve been all around us for decades. In fact, there is a long empirical record of algorithmic technologies being deployed in the field of human services – and overall, the record is not good.

In almost every examination I’ve found of the impact of algorithmic technologies on systems that connect people to resources like public benefits and human services, there are reports of common patterns of failure. Among them, the most significant and concerning can be summarized like this: when institutions cede decision-making processes to machines, marginalized people tend to become further marginalized.

Algorithmic systems might be more “efficient” to the extent that they can process more data at a faster pace than humans; however, they also tend to yield ineffective and inequitable outcomes – especially for people who are already marginalized and discriminated against. (To reiterate: it’s not just that bad outcomes are possible; the actual record of these technologies suggests that inequitable outcomes are likely – even typical – when algorithmic technologies are used in social service contexts.)

As an audience member in my session at Grand Rapids observed, the outputs of an algorithmic technology are entirely dependent on its inputs, and these inputs may reflect various kinds of biases, which the algorithms then amplify in various ways. This includes the biases of the people who produced the input data, the biases of the algorithms’ designers, and biases in our broader social system that result in patterns of inequity which are detectable in data yet easily taken out of context.
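
As an illustration of that “garbage in, garbage out” dynamic, here is a deliberately tiny sketch – with entirely invented data – showing how a decision rule derived from biased historical decisions simply reproduces, and thereby entrenches, the old pattern:

```python
from collections import defaultdict

# Invented "historical" decisions: applicants from one neighborhood were
# routinely approved, applicants from another were mostly denied.
past_cases = [
    ("north", True), ("north", True), ("north", True),
    ("south", False), ("south", False), ("south", True),
]

# "Learn" an approval rate per neighborhood from that historical record.
counts = defaultdict(lambda: [0, 0])  # neighborhood -> [approvals, total]
for neighborhood, approved in past_cases:
    counts[neighborhood][0] += int(approved)
    counts[neighborhood][1] += 1

def predict_approval(neighborhood: str) -> bool:
    # The past pattern becomes the future rule.
    approved, total = counts[neighborhood]
    return approved / total >= 0.5

print(predict_approval("north"))  # True
print(predict_approval("south"))  # False: yesterday's inequity, automated
```

Real systems are vastly more complicated, but the underlying mechanism – historical patterns in the data becoming rules for future decisions – is the same.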

And to make things worse, when decision-making processes that previously were conducted by humans are automated by algorithms, it becomes much harder for people to correct mistakes. 

I don’t expect anyone to just take my word for this, so I’ve included a range of examples in my slides, with citations in the presenter notes. For more extensive analysis of these empirical patterns of failure, I recommend Virginia Eubanks’ Automating Inequality, Ruha Benjamin’s Race After Technology, and Cathy O’Neil’s Weapons of Math Destruction. Just for starters!

What’s different about this moment?

(The more things change, the more they stay the same.) 

Now, those three books to which I just linked are about a decade old. (They came out back when the buzzwords were “big data” and “deep learning” rather than “AI” per se; but it’s fundamentally the same stuff, and I think you’ll find that their premises still apply.) So if algorithmic technologies have such a long and notoriously bad track record in our fields, why are we suddenly talking about them so much right now?

In large part, “AI” is fashionable right now because a handful of multi-billion-dollar companies are marketing new user-facing products. These tools aren’t all that different from other familiar tools like Apple’s Siri or Amazon’s Alexa – but they feature some significant improvements. And they can be a lot of fun!

However, we still haven’t seen many useful applications of these tools in professional settings – at least not in contexts where the reliability of the output matters (such as I&R).

To be clear, I’m sincerely interested in finding exceptions to these failure patterns. Indeed, in any conversation with someone excited about AI, the first question I ask is: “What do you think is the most significant precedent for success?”

I have two conditions for a qualifying answer to this question: 1) the technology has successfully solved a social problem, and 2) an independent third party, with no financial stake in the matter, has verified the technology’s success.

(By social problem, I mean a problem that involves people, relationships and institutions – with the conflicting interests and dilemmas that naturally emerge among them. Technical problems typically have correct solutions, which can be determined by an algorithm. Rocket science is a technical problem. A social problem involves tradeoffs between different values, and poses questions for which different people might offer different yet arguably valid answers. In many ways, social problems are harder than rocket science.)  

This is not a super scientific process, of course, but I think it’s notable that I’ve received almost no positive answers. I’m still looking for any independently verified examples of successful implementations of algorithmic technologies that have been unambiguously helpful to people in need. (If you know of any, please get in touch – I want to hear about it, and will report back here!)

Amid many risks: some opportunity

(Stay tuned for more!)

I do know of at least one (potential) exception to this pattern – in fact, it’s in the last post we featured here on our blog. There we heard from Neil McKechnie about his experiments using “large language models” (LLMs) to mechanically evaluate resource directory data at scale and provide feedback to resource data specialists for quality improvement.

It’s worth spelling out the reasons why I think this is a plausible example of a real opportunity for algorithmic technologies to benefit the I&R sector and the people and communities we serve. In this case (see the sketch after this list):

  • The algorithmic technologies are not making inferences about individual people, but rather analyzing public information.
  • The algorithmic technologies are not making decisions, but rather providing feedback to responsible humans.
  • The algorithmic technologies are not making factual claims, but rather stylistic assessments based on clear, predetermined criteria.
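
To make that pattern concrete, here is a rough Python sketch of how such a workflow might be structured. To be clear, this is not Neil’s actual implementation: the style criteria, record fields, and the review_with_llm() placeholder are all invented for illustration.

```python
# Sketch of LLM-assisted stylistic review of a public resource record.
# The criteria, record fields, and review_with_llm() placeholder are invented;
# a real system would plug in whichever model the organization has chosen.

STYLE_CRITERIA = [
    "Description is written in plain language, roughly an 8th-grade reading level.",
    "Description says who the service is for and how to access it.",
    "No jargon or unexplained acronyms.",
]

def build_review_prompt(record: dict) -> str:
    criteria = "\n".join(f"- {c}" for c in STYLE_CRITERIA)
    return (
        "You are reviewing a public resource directory record for style only.\n"
        "Do not add or infer facts about the organization or its services.\n"
        "For each criterion, say whether it is met and suggest wording "
        "improvements where it is not.\n\n"
        f"Criteria:\n{criteria}\n\n"
        f"Record name: {record['name']}\n"
        f"Record description: {record['description']}\n"
    )

def review_with_llm(prompt: str) -> str:
    """Placeholder for a call to whatever LLM is actually used."""
    raise NotImplementedError("Connect this to your model of choice.")

def review_record(record: dict) -> str:
    # The output is advisory text for a human resource data specialist,
    # who decides whether and how to update the record.
    return review_with_llm(build_review_prompt(record))
```

The key design choice is that the model’s output is just a suggestion attached to a record; a human specialist still makes (or declines to make) the actual edit.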

There’s a lot more to be said about the reasoning here, but now I have to apologize to anyone reading this who wants this to be an even longer blog post than it already is: you’ll just have to wait for a future installment 😉 

Thanks for reading – and if you’re interested in discussing these topics, join Inform USA at the association’s Virtual Conference on August 8–9, when I’ll host a participatory discussion in which we’ll consider the risks and opportunities together.
