Power and prediction: U of T's Avi Goldfarb on the disruptive economics of artificial intelligence
In their new book Power and Prediction, co-author Avi Goldfarb argues we live in the “Between Times”: after discovering the potential of AI, but before its widespread adoption.
Delays in implementation are an essential part of any technology with the power to truly reshape society, says Goldfarb, a professor of marketing and the Rotman Chair in Artificial Intelligence and Healthcare at the University of Toronto’s Rotman School of Management and research lead at the Schwartz Reisman Institute for Technology and Society.
He makes the case for how AI innovation will evolve in Power and Prediction, his latest book co-authored with fellow Rotman professors Ajay Agrawal and Joshua Gans. The trio, who also wrote 2018’s Prediction Machines, are the co-founders of the Creative Destruction Lab, a non-profit organization that helps science- and technology-based startups scale.
Goldfarb will give a talk at the Rotman School of Management as part of the SRI Seminar Series. He spoke with the Schwartz Reisman Institute about how the evolution of AI innovation will require system-level changes to the way organizations make decisions.
(The interview has been condensed for length and clarity.)
What changed in your understanding of the landscape of AI innovation since your last book?
We wrote Prediction Machines thinking that a revolution was about to happen, and we saw that revolution happening at a handful of companies like Google, Amazon and others. But when it came to most businesses we interacted with, by 2021 we started to feel a sense of disappointment. Yes, there was all this potential, but it hadn’t affected their bottom line yet – the uses that they’d found had been incremental, rather than transformational. And that got us trying to understand what went wrong.
One thing that could have gone wrong, of course, was that AI wasn’t as exciting as we thought. Another was that the technology was as big a deal as the major revolutions of the past 200 years – innovations like steam, electricity, computing – and the issue was system-level implementation. For every major technological innovation, it took a long time to figure out how to make that change affect society at scale.
The core idea of Power and Prediction is that AI is an exciting technology – but it’s going to take time to see its effects, because a lot of complementary innovation has to happen as well. Now, some might respond that’s not very helpful, because we don’t want to wait. And part of our agenda in the book is to accelerate the timeline of this innovation from 40 years to 10, or even less. To get there, we then need to think through what this innovation is going to look like. We can’t just say it’s going to take time – that’s not constructive.
What sort of changes are needed for organizations to harness AI’s full potential?
Here, we lean on three key ideas. The first idea is that AI today is not artificial general intelligence (AGI) – it’s prediction technology. The second is that a prediction is useful because it helps you make decisions. A prediction without a decision is useless. So, what AI really does is allow you to unbundle the prediction from the rest of the decision, and that can lead to all sorts of transformation. Finally, the third key idea is that decisions don’t happen in isolation.
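To make the unbundling concrete, here is a minimal sketch in Python. The scenario, probability, and payoff numbers are hypothetical illustrations, not from the book: the prediction machine supplies only a probability, while a human-supplied payoff table – the judgment – turns that probability into a decision.

```python
# A minimal sketch of unbundling prediction from decision.
# The prediction machine supplies only a probability; the payoff
# table (hypothetical numbers) encodes human judgment about outcomes.

def predict_rain(features) -> float:
    """Stand-in for any prediction machine: returns P(rain)."""
    return 0.3  # placeholder probability from a hypothetical model

# Judgment: payoff of each (action, outcome) pair -- made-up values.
PAYOFFS = {
    ("umbrella", "rain"): 5,       # stayed dry, carried an umbrella
    ("umbrella", "dry"): -1,       # carried it for nothing
    ("no_umbrella", "rain"): -10,  # got soaked
    ("no_umbrella", "dry"): 2,     # travelled light
}

def decide(p_rain: float) -> str:
    """Pick the action with the highest expected payoff."""
    def expected(action: str) -> float:
        return (p_rain * PAYOFFS[(action, "rain")]
                + (1 - p_rain) * PAYOFFS[(action, "dry")])
    return max(["umbrella", "no_umbrella"], key=expected)

print(decide(predict_rain(None)))  # -> "umbrella" with these payoffs
```

Because the two pieces are separate, a better predictor can be swapped in without touching the judgment, and the judgment can be debated without retraining the model – which is what makes the unbundling transformational rather than incremental.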
What prediction machines do is allow you to change who makes decisions and when those decisions are made. There are all sorts of examples of what seems like an automated decision but is actually taking some human’s decision – typically made at headquarters – and scaling it. For organizations to succeed, they require a whole bunch of people working in concert. It’s not about one decision – it’s about decisions working together.
One example is health care: at the emergency department, there is somebody on triage who gives a prediction about the severity of what’s going on. They might send a patient immediately for tests or ask them to wait. Right now, AIs are used in triage at SickKids in Toronto and other hospitals, and they are making it more effective. But to really take advantage of the prediction, they need to coordinate with the next step. If triage is sending people for a particular test more frequently, then there need to be other decisions made about staffing for those tests, and where to offer them. And if your predictions are good enough, there’s an entirely different decision to be made – maybe you don’t even need the tests. If your prediction that somebody’s having a heart attack is good enough, you don’t need to send them for that extra test and waste time or money. Instead, you’ll send them directly to treatment, and that requires coordination between what’s happening upstream on the triage side and what’s happening downstream on the testing and treatment side.
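The coordination point can be pictured as a simple routing rule. The sketch below is hypothetical – the thresholds, probabilities, and the route_patient function are illustrations, not SickKids’ actual system: the “skip the test” branch only becomes usable when prediction quality justifies a high confidence threshold, and it only pays off if treatment is staffed for patients arriving directly.

```python
# Hypothetical triage routing rule -- an illustration of the
# coordination point, not any real hospital's system.

def route_patient(p_heart_attack: float,
                  treat_threshold: float = 0.95,
                  test_threshold: float = 0.20) -> str:
    """Route a patient based on a predicted probability of heart attack.

    With a weak predictor, treat_threshold is rarely reached and every
    elevated-risk patient goes through testing. With a strong one,
    high-confidence cases skip the test entirely -- which only works
    if downstream staffing is adjusted for the extra direct arrivals.
    """
    if p_heart_attack >= treat_threshold:
        return "direct_to_treatment"  # prediction good enough to skip the test
    if p_heart_attack >= test_threshold:
        return "send_for_test"        # uncertain: buy more information
    return "waiting_room"             # low risk: monitor and reassess

for p in (0.05, 0.40, 0.97):
    print(p, route_patient(p))  # waiting_room, send_for_test, direct_to_treatment
```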
AI is as exciting a technology as electricity and computing, but it will take time to see its effects, Avi Goldfarb says.
Will certain sectors have greater ease in adopting system-level changes than others?
There is a real opportunity here for startups, because it’s often easier to build a new system from scratch. You don’t have to convince people to come along with your changes, so it becomes a less political process – at least within your organization. If you’re trying to change a huge established company or organization, it’s going to be harder.
I’m very excited about the potential for AI and health care, but health care is complicated; there are so many different decision-makers. There are the patients, the payers – sometimes government, sometimes insurance companies, sometimes a combination of the above – and then there are doctors, who have certain interests, medical administrators who might have different interests, and nurses.
AI has potential to supercharge nurses, because a key distinction between a doctor and a nurse in terms of training is diagnosis, which is a prediction problem. If AI is helping with diagnosis, that has potential to make nurses more central to how we structure the system. But that’s going to require all sorts of changes, and we have to get used to that as patients. And so, while I think the 30-year vision for what health care could look like is extraordinary, the five-year timeline is really, really hard.
What are some of the other important barriers to AI adoption?
A lot of the challenges to AI adoption come from ambiguity about what’s allowed or not in terms of regulation. In health care contexts, we are seeing lots of people trying to identify incremental point solutions that don’t require regulatory approval. We may have an AI that can replace a human in some medical process, but getting approval would be a 10-year, multibillion-dollar process – so developers will instead implement it in an app that people can use at home, with a warning that it’s not real medical advice.
The regulatory resistance to change, and the ambiguity of what’s allowed, is a real barrier. As we start thinking about system changes, there is an important role for government through legislation and regulation, as well as through its coordinating function as the country’s biggest buyer of stuff, to help push us toward new AI-based systems.
There are also real concerns about data and bias, especially in the short term. However, in the long run, I’m very optimistic about AI’s ability to help with discrimination and bias. While a lot of the resistance to AI implementation right now is coming from people who are worried about [people who will be negatively impacted by] bias [in the data], I think that pretty soon this will flip around.
There’s a story we discuss in the book, where Major League Baseball brought in a machine that could say whether a pitch was a strike or a ball, and the people who resisted it turned out to be the superstars. Why? Well, the best hitters tended to be favoured by umpires and faced smaller strike zones, and the best pitchers also tended to be favoured and got bigger strike zones. The superstars benefited from this human bias, and when a fairer system was brought in, the superstars got hurt. So, we should expect that people who currently benefit from bias are going to resist machine systems that can overcome it.
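The mechanics of that story show up even in a toy simulation. The numbers below are made up, not real MLB data: a human umpire’s zone shrinks for star hitters, the machine applies one fixed zone to everyone, and the stars’ called-strike rate rises under the machine – which is exactly why they resisted.

```python
# Toy simulation of the strike-zone story (hypothetical numbers,
# not real MLB data). One fixed machine zone vs. a human zone
# that shrinks for star hitters.
import random

random.seed(0)

MACHINE_ZONE = 1.00             # fixed half-width for all batters
HUMAN_ZONE = {"star": 0.90,     # stars favoured: smaller zone
              "regular": 1.05}  # regulars: slightly larger zone

def called_strike(pitch_offset: float, zone: float) -> bool:
    """A pitch is a strike if it lands within the zone's half-width."""
    return abs(pitch_offset) <= zone

# Simulate pitch locations for star and regular hitters.
pitches = [(random.uniform(-1.5, 1.5), random.choice(["star", "regular"]))
           for _ in range(10_000)]

for batter in ("star", "regular"):
    offsets = [p for p, b in pitches if b == batter]
    human = sum(called_strike(p, HUMAN_ZONE[batter]) for p in offsets) / len(offsets)
    machine = sum(called_strike(p, MACHINE_ZONE) for p in offsets) / len(offsets)
    # Stars see more called strikes under the machine -- their edge disappears.
    print(f"{batter:8s} strike rate: human={human:.2f} machine={machine:.2f}")
```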
What do you look for to indicate where disruptions from AI innovation will occur?
We’re seeing this change already in a handful of industries that tech is paying attention to, such as advertising. Advertising had a very Mad Men vibe until recently: there was a lot of seeming magic in terms of whether an ad worked, how to hire an agency and how the industry operated – a lot of charm and fancy dinners. That hasn’t completely gone away, but advertising is largely an algorithm-based industry now. The most powerful players are big tech companies, not the historical publishers of Madison Avenue. We’ve seen the disruption – it’s happened.
Think through the mission of any industry or company. Once you understand the mission, think through all the ways that mission is compromised because of bad prediction. Once you see where the mission doesn’t align with the way an organization actually operates, those are going to be the cases where either the organization will need to disrupt itself, or someone will come along and do what it does better.