
In a session from our 2024 Symposium, an expert panel explores the intersection of AI development and responsible governance, examining ethics, accountability, the role of regulatory frameworks, and the big questions we should consider as we build technologies to serve society. Moderator Kristan Uhlenbrock (Executive Director, Institute of Science & Policy) talked with Sayash Kapoor (Ph.D. candidate, Princeton University's Center for Information Technology Policy), Merve Hickok (Founder, AIethicist.org and President and Policy Director, Center for AI & Digital Policy), and Justin Hendrix (CEO and Editor, Tech Policy Press).  

Watch a video of the panel discussion on our YouTube channel.

--------------------------------------

 

Symposium 2024: The Future of Technology and AI Policy

KRISTAN UHLENBROCK: Hi, everyone. Welcome. Thank you so much for joining us for the annual Institute for Science and Policy Symposium. I'm Kristan Uhlenbrock, the Executive Director of the Institute for Science and Policy. I am thrilled that you are joining us today. This year's symposium is called The Future of Science Policy and I promise you, you're going to have a really thought-provoking day with all of the panels that we have queued up. 

So during our Symposium, we're going to be talking about three critical fields: AI and technology, public health, and climate policy. We're going to talk about the role of institutions. We're going to examine the policy systems that exist and how they are responding to some of our current challenges. We are going to hear speakers talk about innovations and breakthroughs and exciting opportunities that they see happening in the future. 

For those of you who are curious, we were quite intentional in choosing this topic, The Future of Science Policy, for this year's annual Symposium. We knew it was going to be an election year when we started planning. And we wanted a moment where we could take stock and assess where we've been, where we are today, and where we might be headed. 

We hope that folks get a chance to dream big today and imagine the future: All the good and all the bad that goes with it. But as for our audience, we hope you'll find some moments of inspiration today. And that you'll also get a chance to think a little bit more critically about these issues, that you leave hopeful, and that you get some ideas about the future and how all of this applies to your own lives or to your own work.  

With that, I'm thrilled to welcome and introduce you to our first panel. Our first session today is The Future of AI Policy. We're going to be talking about AI development, responsible governance and other aspects of today's topic. Everything from ethics to accountability to regulatory frameworks to other big ideas. We have three amazing guests who are part of this discussion today. Let me briefly introduce them. 

We have Justin Hendrix. Justin is the CEO and editor of Tech Policy Press, a new nonprofit media venture concerned with the intersection of technology and democracy. Previously, he was executive director of the NYC Media Lab, and he spent over a decade at The Economist. He's also an associate research scientist and adjunct professor at NYU's Tandon School of Engineering. Justin, thank you for being with us today.  

JUSTIN HENDRIX: Thank you.  

KRISTAN: Great. We also have Merve Hickok. Merve is the founder of AIethicist.org, as well as the president and policy director of the Center for AI and Digital Policy. She's globally renowned. She's actually not even in the U.S. today. She's out traveling the world talking about AI. She's an award-winning expert on AI policy, ethics, and governance, and is deeply engaged with policy and regulatory frameworks, both in the U.S. and internationally. You can see her and her work quoted in major news outlets. And I read that, last year, she received a lifetime achievement award for women in AI. So congrats and thanks for being here, Merve.   

MERVE HICKOK: Thank you so much for the invite.   

KRISTAN: Wonderful. Also joining us on our panel today is Sayash Kapoor. Sayash is a Lawrence S. Rockefeller Graduate Prize Fellow in the University Center for Human Values and a computer science PhD candidate at Princeton University's Center for Information Technology Policy. That was a little bit of a mouthful, Sayash, but I think I got it all right. He's the co-author of a brand new book called AI Snake Oil. I encourage you to check it out. I ordered myself a copy but it hasn’t arrived yet. So I admit I have not yet read it. But I'm excited for it. It provides a critical analysis of artificial intelligence and helps separate out the hype from the true advances. Sayash was also named one of TIME Magazine's inaugural “100 Most Influential People in AI.” Thank you for being here as well.  

SAYASH KAPOOR: Thank you so much for the invitation.  

KRISTAN: Wonderful. Okay, audience. We're going to jump right in. This will be a facilitated conversation, and I'm excited to hear what our speakers have to say today. I want to open this up.  I'm going to have everyone provide their own perspective to kick off the conversation. When it comes to this topic – AI technology, policy and governance – what's the one big thing that's keeping you up at night?  

JUSTIN: One big thing. Wow. Difficult question. I think the one big thing that keeps me up at night on this is the extent to which the conversation is driven by and dominated by corporate interests. This is a primarily capitalist venture, to build artificial intelligence and to propagate it across the economy and across society. Right now, I think in most parts of the world, it's badly out of whack – between the corporate interest, the capitalist interest and the public interest. So we can get into more of the reasons why I think that. And I'll be interested to hear what my fellow panelists think. But to me, that's the key concern. One of the other things I'm thinking about is the extent to which AI is being sold to us by those corporate interests as the fourth Industrial Revolution... as this opportunity to leapfrog into a different place for society, into a different economy. I'm not certain we have the evidence to suggest that that's true yet – that this is innovation that will change the way we live, the way we work, the way we interact with one another in the same way that prior innovations have. You think about things like electricity or the car. When it comes to artificial intelligence, we’ll see if it ultimately has that kind of impact. So I'm thinking about corporate narratives impacting society and thinking about the public interest.  

KRISTAN: Merve, why don't you share the thing that keeps you up at night? And then Sayash too.     

MERVE: It feels like Justin and I could drink together while the same things are keeping us up at night. I think, for me, it’s the widening gap between tech companies, academic labs, and the public sector in terms of resources and know-how. And if you fast forward, what that means for our ability to develop policies or research for the public interest. For our overall ability to hold actors accountable. And also for our ability to protect and advance civil rights. The gap between resources and know-how is really critical, especially when it comes to the public sector. I'm looking more at how AI is embedded into public sector infrastructure. What kind of questions do we ask? Do we even ask questions? Do we know how to do due diligence for AI after bringing it into the public sector? Believe it or not, the United States federal government has more than 1,200 AI systems in place in its civilian agencies. But we don't have AI-specific due diligence or procurement guidelines. So AI systems are already in the infrastructure and we don't have ways to get them out, even when they start harming civil rights.   

SAYASH: Yeah, I will echo what Justin and Merve have just said. The thing that is interesting but also keeps me up at night is that, on the one hand, we have tech companies telling us that AI is powerful enough to kill us all. There are concerns that AI is the next extinction threat to humanity, that it is so powerful we can’t control it. We don't know that things won't get out of hand with a technology as powerful as this. And on the other hand, there’s this extreme concentration of know-how and resources that enable us to understand these tools, their safety risks, how they might go wrong, how to steer them in the public's interest. So I think it’s about that balance between stated risks and the ways know-how is spread. Because, ultimately, if a technology is that powerful, we would want some deliberative democratic process to govern it. We wouldn't just want OpenAI to make decisions about how to release that technology. So I think that's one part, how we should govern it. 

Then, on the other hand, there’s a second consequence of this concentration of resources and technical capabilities to assess these systems. Essentially, when companies make these claims, governments are likely to take them at face value when they make decisions about how to regulate technologies. 

And that's because of the gap Merve just mentioned – between the technical labs that are building these products and the academic labs that could counterbalance and fact-check their claims. In our own research, we found that most of the claims that people and companies have made about extinction risks to humanity are quite likely to be overblown. 

There's lots of evidence that shows we are far from a point where AI systems become that competent. And yet, that sort of research can get very quickly overlooked when you have the CEO of an OpenAI-like company telling the Senate that we need to license the capabilities for building these tools. Which, in my view, is just an anti-competitive move that further exacerbates the concentration of who can access this technology and who can't. It's the one-two punch of not having the technical capacity and these attempts at regulatory capture that concerns me a lot.   

KRISTAN: Thank you all. So those were a lot of concerns. I feel like we're going to have to come back and tackle each of them from the perspective of how to work through them. I want to do a little bit more stage-setting for our audience today. And I want to bring in some aspects of history and how we can look back in order to grapple with where we are today. Are there any parallels that you all might look to – whether with a technological advancement or otherwise – where we have seen something rapidly grow? Where there’s a lot of uncertainty around it and systems are moving quickly? What are some parallels that we could draw from? Any reflections that you all have? 

I see a few head nods. I'd be curious, Sayash, if you want to start first and if others want to chime in as well.   

SAYASH: Absolutely. I can go over two parallels. In one, we did a relatively decent job of ensuring access and broad accountability. And in the other, I don't think we did such a good job. The examples are the internet and connectivity.  And then social media about a decade later. So with the internet, from the get go, one thing that was prioritized was decentralized access. 

So things like the internet protocol allow essentially anyone across the world to set up their own web server and to access the internet in order to put their content up online. Of course there were issues and kinks along the way that we figured out. But we are in a state today where we can say that that broad vision has been somewhat successful. Of course, there are concerns about concentration at the internet provider level here in the U.S. But globally, I think we've done a decent job at improving the rate of connectivity, improving how quickly people can get access, improving this broad ability to get on the Internet in the first place.  

In contrast, with social media, I don't think we were able to do this. In my view, one of the main reasons for that was concentrated platforms. That was how people started accessing social media. If you ask someone in another part of the world which social network platforms they have access to, it’s likely to be one of a small handful. These platforms dictate what constitutes acceptable speech. They set the boundaries for what you can and cannot say. They also govern how people are treated online. 

So, for example, the content moderation systems that we have in the US and in Europe are far better than what’s available in the rest of the world. That’s been responsible, at least in part, for many atrocities – the atrocities in Myanmar, Sri Lanka, and Ethiopia. I think this divide is almost invisible to users in the West, in Europe or the U.S., when they log onto Facebook.  

The experience is completely different from someone who logs on from a country where Facebook does not invest the resources to protect its users. I think that’s the result of said concentration and the profit incentives of companies – to do the bare minimum when it comes to keeping users on the platform while still getting away with as little intervention as possible. When it comes to AI, those are two broad analogies. 

Of course, all analogies can be misleading. But in this case, it's useful to think about what that lack of concentration with respect to the internet protocol bought us.  And what we lost when platforms acted as bottlenecks through which we accessed social media. So how do we want to shape the future of AI? In a centralized or decentralized way? What are things that we might want to adopt from both models?  

MERVE: I'll jump in again.  I think in terms of the recent, more rapidly-adopted technologies, Sayash is spot on with internet connectivity and social media. If we expand the timeline a bit longer to include not-so-rapidly-adopted technologies, but technologies that have significant impact on our lives and our society, I think we can draw a parallel to regulated products or sectors where we're looking at safety at large. For example, road safety, aviation safety, civil engineering, et cetera. In a lot of these cases or domains,  you learn from mistakes. And sometimes those mistakes resulted in significant casualties.  

We understood that we needed to put safety and liability measures in place.  And eventually, with those governance, accountability, liability and safety guardrails in place, these products became more beneficial. They are scientifically valid as well. We know how to, for example, check the integrity of a building or bridge, or the aerodynamics of a plane. 

We're not in a position to independently verify each time we're getting on a plane or into a car or building. There's trust in those safety and governance methods. As we're looking at policies, safety and governance at large, we should look at those products and accept that guardrails make them better.  They make them safer and they make it more attractive for consumers and businesses to adopt and invest in.  

JUSTIN: I think Sayash and Merve hit most of the major ones. But we can look at other examples from the past,  moments in time where there were big industrial investments – things like the railroads, cars, and road infrastructure – that we built in this country around the turn of the century. 

These were moments in time where there were big industry efforts to use new technology  to change the way that we do commerce, the way that we live, the way we transport goods and services and people.  All of that intersected with state power and questions about the concentration of industrial power. We saw big monopolies form. We saw efforts by the government to try to restrain those monopolies. There's always an effort to think about how to balance the public interest with the corporate interest. It strikes me that we're at another point in time where that's become very, very necessary.   

One of the interesting things that Sayash made me think about is the extent to which, right now, a few large AI companies dominate this field: the Googles, the Microsofts, the Facebooks of this world. There is extraordinary capital required to train the next generation of models; to build and acquire all of the necessary materials; to set up the infrastructure to build the next generation of AI technologies. In many ways, governments have become clients to this activity. 

In a lot of ways, they're saying, “Listen, unless these companies succeed, we can't succeed. The U.S. government can't maintain its security promise to the people. It can't maintain its promise around economic development.” So we're beginning to see the state put all of its eggs in the AI basket to protect us, to maintain supremacy. 

I think this is a very interesting moment. It seems to be happening very, very quickly. In Europe and other places around the world, you see concerns about any effort to regulate AI. As though it might put you on the back foot when it comes to military, economic or strategic advantage. So this question about corporate versus public interest feels like it's at the core of everything.  

KRISTAN: I'm curious if you all had more to add on that separation of authority and power. We're in that messiness right now, when we try to put safeguards in place. You’re advising governments, you're trying to think through it, you see where there are risks and threats. Merve and Sayash, any other thoughts on what Justin was just sharing? 

MERVE: One thing I would add has to do with incentive structures, especially for big companies where power and resources are concentrated. These are publicly traded companies with publicly traded stocks, right? So there's the shareholder element. 

Yes, the CEOs hold a lot of power. But they also have a responsibility to the shareholders. And sometimes those goals – more market power, more profit, et cetera – don't necessarily align with public interest, at least in the short run. If you're looking at the long run, it makes sense for shareholders when companies are responsible and put more trustworthy products on the market. Because that eventually impacts adoption rates, regulatory fines, environmental impact, you name it. But because we're looking at quarter-to-quarter and annual earnings, the time frame and incentives are not aligning with the public interest.  

That's one other thing keeping me up at night. We're not looking at the structural issues that are creating this concentration of power, this rush for private companies to create patches around governance and policy in order to exert some limited control. I think we have more fundamental issues with incentives and motivations.   

SAYASH: That short term incentive problem also shows up in companies selling products that do not work. I think everyone wants to get on the bandwagon of selling AI tools. But in the last 5-10 years, we've seen dozens of products that promise to do things like predict if a hospital patient will get sepsis, if someone will go on to commit a crime, et cetera. These are tools used in healthcare, in education, and so on. And so many of them just do not work. 

That’s a consequence of the hype around AI. We've made rapid, transformative progress in some types of AI, notably in products that generate text or images. But in many other types of AI, we haven't really made much progress. For example, when making predictions about people's futures, there are inherent limitations that exist due to noisy data or not having enough data on people. But because both generative and predictive technologies share the same umbrella term, AI, they’re seen as equivalent. Progress in one is conflated with progress across the board. That's why we've seen governments and private companies fall prey to all sorts of AI products that do not work as claimed. And yet, because of the hype around AI, companies continue to sell these products.   

KRISTAN: I think there are a lot of folks, including myself and probably many in this audience, who are still wrapping our arms around the array of benefits and potential risks. What does this mean for our day to day lives? When I was talking with a friend of mine about this, they offered the framing of “Can we do it versus should we do it?” which I really like. So we've talked a lot about some of the risks. We've talked about where AI is overhyped. But what are some of the benefits? And how do you think about where we are in terms of understanding the sheer volume of benefits and risks? Any real world examples that you all want to share?   

MERVE: In AI governance and AI design decisions, et cetera, we call that question zero. Just because we can do it, should we? It should be talked about before you even start collecting data or get into design and architectural questions. There is definitely a techno-solutionist approach that assumes that every problem can be solved with technology. In this case, with AI technology, whether it is scientific or not, whether it works or not, or whether it takes context into consideration when solving a problem. Most of the time, these AI products are developed by a group of people who say, “There are housing issues in this particular neighborhood or region. We'll solve it with AI. It will be the best thing. It will be the solution.” They're not necessarily taking into account the historical or structural issues that caused that housing issue in the first place. 

It might be that there's not enough supply, that the prices have gone so far up that people are not able to afford it.  But we're not looking at the historical or structural issues. We’re trying to create patch solutions using AI technologies, and they are doomed to fail.  

SAYASH: I'll be a bit of a contrarian to something Justin said in his first set of remarks. I do think that, in the next couple of decades, AI will become indispensable to most knowledge workers. These are people who think for a living, broadly speaking. I do think it has at least the potential to be a transformative technology in that way. I can give a few examples of how I use AI, and that’s basically to automate different tasks that I might have. They can take three or four hours or an entire day to do. So my website is created entirely using AI. I have stopped doing web development and have delegated that to tools like GPT over the last year or two. About half of the code that I write for my research – at least the first draft – is now written using AI. Of course it's not good enough that it can automate the entire process. But it is good enough that it saves me a lot of time, so that I can either verify its output or quickly spot errors. 

If there is an issue, I can edit it manually. If there isn't, I can just accept the code written by an AI system. But my point is that this transformation will take a lot longer than people expect it to take. So it's not that, if we have an AI tool that can be accessed online, all the economic benefits of AI will unfold.  

The time when we’ll start seeing the economic benefits unfold is when AI products are diffused across society. I read this very nice book, which I would recommend to everyone interested in the subject here, called Technology and the Rise of Great Powers. 

It posits that for general purpose technologies – for technologies that can be adopted across many different domains – the economic impact is realized when these technologies are diffused through multiple productive sectors of the economy. So if you want to look at how AI can be useful in this way, it won't happen in the next few years. It may be not even in the next decade.  

It'll only happen once we have the right tools for adopting AI in healthcare, whether for transcribing notes or figuring out how doctors can save time. It will happen when we can outsource administrative work to AI systems in education, when we figure out the right tools to enable learning. 

This also touches on one of the questions in the chat. Right now, the misuses of generative AI are much easier to identify than the productive economic uses. But I’m quite positive on the outlook for generative AI in the slightly longer run, in the next 20 or 30 years, when we figure out how to use generative AI to improve education and learning outcomes and so on. 

But I think, in the meantime, there are these negative impacts that it has on education. In particular, ChatGPT has singlehandedly upended entire curricula. I think teachers were forced to think through their entire syllabi again. And that sort of thing leads to more discussion about risks than benefits.  

Once these technologies are in circulation, all of the negative externalities are forced onto society. The negative externalities of redoing all of the homework for K-12 students across the U.S., or even across the world, are not borne by OpenAI. If OpenAI had to pay teachers for the time it takes to redo their assignments, then the company would quickly go out of business.   

That's why I think there's so much more focus on risks in the nascent stages of technology. But looking a little bit ahead, I'm quite optimistic about the benefits and transformative potential of AI in many different industries. And in some sense, I think we have what can be thought of as a general purpose technology with generative AI. 

JUSTIN: Sayash and I are probably not far off in terms of our perspective. Maybe it's a time horizon thing. Given the trajectory we're on with regard to the structure of the economy and to the health of democratic governments across the world, there are questions raised by AI technology. In 10 or 20 years, will artificial intelligence have improved the lives of most people? Will it have created a more just, equitable, sustainable, democratic world? I don't know the answers to those questions right now. 

Might the tools work better? Yes, absolutely. I'm certain they will. The generative tools will be able to reason, perhaps, better. We'll have better guardrails around them. Maybe other  lanes of research into artificial intelligence will turn up other improvements. But I don't know if we're going to end up in a world where more power and wealth is concentrated among a few. Or if we’ll end up in a world where people have less privacy and are surveilled more. I’m not certain whether people will have more free expression or less. I think that's probably my fundamental question about artificial intelligence at this point. 

It's not really like, will the AI work? I think it's more like what Merve was saying: Are we introducing AI into scenarios where it may do more harm than good?  

One other thing I’ll say is that I'm very interested in some of the harms that are happening now. I think the recent lawsuit around Character AI and the teen who committed suicide is interesting. I think we'll probably see more lawsuits like that, where people who have interacted with various AI systems harm themselves as a result of those interactions. These are questions that we should be asking ourselves. Should children be interacting with AI agents in this current form? Should they be interacting with them without supervision? I don't know the answer.  

But there are Silicon Valley companies that have drawn millions in investment – and they’ve gone ahead and pushed these products out into the world. As Sayash mentioned, there are lots of doctors and nurses wandering around using different generative tools to collect information at the bedside, to do data entry, things of that nature.  

Is that a good idea? I'm not certain. News organizations are utilizing some of these AI tools and we've already seen some blow ups around that. In my industry, that's a big deal. People are introducing misinformation and disinformation into the environment, often inadvertently. So I don't know. All these things concern me. I would like to believe that science is right, that technology will deliver.  But will it deliver the society we want in the context that we want? I'm still not sure.   

MERVE: I think it’s also important to define the parameters of the society. Are we looking at your state, my state? Are we looking at America? Or globally? I think AI will work for some people who have more access to technologies, who can afford some of these technologies, who have more access to education and other resources. Whereas we have  3 billion people who have not even used the Internet or done a web search in their lives. That's a good percentage of the current population. We have significant portions of the globe where broadband access is not taken for granted, where people have to pay three or four days’ wages to get even a few megabytes of data access, if they can even get to that. 

So the bigger the frame is – the more inclusive that frame is – the larger the divides we will see. I personally do not see these gaps being narrowed by generative AI, at least in the short or the middle term. I think those gaps will widen.  

KRISTAN: Thank you. I think we should come back to some of those gaps and disparities. But I wanted to pick up on something Justin mentioned about the future of societies and the role of critical thinking and critical writing. 

Where does this tool come in? Are we outsourcing some of that critical thinking? How big of a concern is it? How should we be thinking about that? What is the potential societal impact of not teaching somebody really important skills? 

JUSTIN: This is another area where I'll just jump in quickly. I feel like I have more questions than answers. And I don't know really the answer to the question. We don't yet  know whether students will be at a disadvantage, in terms of critical skills, as a result of artificial intelligence being introduced into their toolkit. We don't know if it'll make them better or worse writers. We don't know if it'll necessarily change the way they approach the idea of acquiring knowledge and participating. We do have a lot of anecdata. There's an educator in the chat right now sharing concerns about  what they're seeing in the classroom.  

I've heard other types of concerns like that. I've seen those types of things in my own experience. Yet I've also seen students do extraordinary things with generative AI. I've seen them develop first drafts of ideas, take artifacts from brainstorms, spin out ideas that maybe they wouldn't have otherwise. So it kind of depends on context. It depends on the situation, the age, and the subject. I'm not exactly sure when it comes to young people in an educational context. And I don't know about the social context either. We'll just have to see.  

Like the lawsuit that I pointed out, these are anecdotal stories. I don't think that we have longitudinal data to tell us how this stuff's going to impact society. But we do have the experience of having wired up a good amount of the planet over the last few decades. As Merve says, we've left a few billion people off the network at this point. But we've seen how it's changed our societies. It's changed the way that we interact. It's changed our politics, our relationship to power. It’s had a profound impact. So I don't know. We'll probably find out more on the timeline that Sayash was talking about, in 10 to 20 years. We'll probably be able to look back and say, “Here's how it changed things.”  

SAYASH: There is just so little information about how people actually end up using generative AI systems. Because a lot of it is proprietary. So the only people who can, at this point, truly research AI impacts are the companies themselves. 

We had a very similar situation occur with social media. We don't, to this day, know the precise impact that Instagram has, for instance, on the comparison crisis. Like how it affects people's self images and so on. We have very small studies. But the data needed to understand the impact of that technology on children and society at large just does not exist. So instead of waiting to find out the impact of these technologies, we could push for legislation that requires companies to do this type of research. Or push to allow data access for external researchers with the right motivations. Without that, my assessment is we might simply continue to misunderstand the impact of AI on kids’ wellbeing and on society writ large. But that is an area where things can be improved through policy.  

KRISTAN: Great. There are a  couple of questions from the audience that touch on another aspect of society: work. There are people who don't want to adopt AI, but worry that they’ll be forced to by the marketplace. What impact do you foresee in the work environment? Should it be regulated? Is there a way to protect workers’ rights if needed?  

MERVE: I think there are different elements of AI. Like when we say AI systems, are we talking about worker surveillance systems that are imposed upon the workers? Are we talking about AI as an exploratory tool through which you can do data analysis, research and manipulation? Or are we talking about generative AI and whether that is going to be pushed down your throat at work? 

I think that being more specific is helpful in this situation. Especially for scenario planning. With generative AI, any one of us can go online and download one of these apps and use it. But for an enterprise adoption of generative AI systems, that is a significant investment for organizations, both public and private.  

And it's not a one off, right? It requires you to understand the product. You have to have internal talent, internal tech stack. You have to understand the costs of hosting, processing, inference, et cetera, on an ongoing basis. You need governance. So enterprises, at least the ones that I’ve talked to, don't make generative AI investment decisions lightly. Even though there is a lot of hype, they're  still trying to understand the risks, the costs, the investment that goes into it, and how much of an impact they can actually foresee on their organization. So, in that sense, I don't see any of us being forced to use generative AI by our organizations any time soon.   

But, of course, there is an expectation of increasing AI and digital literacy to start testing these products. And more recently, companies are looking for people who have more engagement with these products rather than someone who has none.  

SAYASH: One example comes to mind, Merve, that illustrates that point. Maybe 15 years ago, there was a big push for the enterprise adoption of cloud infrastructure. So your data, your computing, would all be stored in the cloud, rather than locally on your device. That was framed as the future of enterprise. And I looked at the numbers from 2023. So in the 10 to 15 years where this massive push was happening, 25 percent of all industries that relied on access to the internet adopted cloud infrastructure. So even with more stable technologies that are better understood than AI, adoption by companies has been slow. I think this will continue for AI as well.  

JUSTIN: This question about work and labor is one that I hope to learn more about in the year ahead. The other is the environmental impact, which we haven't talked about a lot here. Over the weekend, I was talking to someone I know who is a Denver healthcare professional. And that person was telling me how much of their work has become about data entry over the last few years. They talked about how many hours a day they spend repeatedly entering figures, checking boxes, doing things that tell a computer what’s going on in the physical world.  

Of course, the goal of the system is to do various diagnostic or analytical things, to signal to doctors that somebody needs help, et cetera. I think there probably are other motivations as well – to avoid claims of malpractice or to provide substantiation for different insurance decisions. Whatever it might be, I'm sure there's lots of logic behind health care decisions.  

But it's interesting to me the extent to which people feel like  it's become a big part of their life, feeding information into an app. Or taking direction from an application. There are huge numbers of workers in the US who work for companies like Amazon, Uber and Lyft and don't have human managers. These are workers who are hired, fired, managed, dispatched, and, in some cases, disciplined by applications.  I think that's an interesting phenomenon and something that we'll have to watch over the next few years. To see how people feel like their relationship to work and employment has changed in that context.  

KRISTAN: We were talking about the concentration of a few large tech companies holding a lot of power and having a lot of influence. And an audience member commented that there are already open source, large language models available. You can imagine a future wherein more of those become accessible. Where do you think the concentration of influence exists today? How do you think about the opportunity and risk of new generative AI systems coming online? 

SAYASH: Yeah, I completely agree. I think open source models and open source infrastructure for AI is pretty important. And I think it will continue to become more important as the competition between incumbents intensifies.  

The best open source AI model in operation today was built by Meta or Facebook. They've explicitly stated that the reason they're trying to do this is because they're commoditizing the complements of the models. They're commoditizing the apps that use these models. And in some sense, that’s a competitive blow to other incumbents like OpenAI or Google, their competitors on other platforms. That also means that the best open source models today are at the whim of a single company. 

Now that might change very soon. We are seeing other companies start to up their game when it comes to releasing these models. But these models are capital-intensive. The latest model that Meta trained cost them, by some estimates, over a billion dollars to release. Just in terms of the computing resources, hiring all the people who worked on it, and so on. 

To some extent, there is going to be some concentration of power that comes with building the model itself. Where this becomes more problematic is if the concentration extends vertically through the stack. So if the same people who are building these models also start to build the products that use these models, they have even less incentive to share these models. So there would be less competition at the product level. 

We've seen this before with internet infrastructure. Companies that provide internet services are not allowed to preferentially give access to different applications that use the internet. They are obligated to serve different applications that use the internet in the same exact way. I think we need something similar to prevent vertical integration. Something that stops the model developers from being able to develop a monopoly on the applications that use generative AI. It’s not just about which companies are building these products. It's also about how much competition there is across different levels of the technology stack for building AI. 

That is something that we know how to fix in terms of policy.  At least we have had some success in doing that. For example, with the FCC and the internet. And I'm hopeful that this sort of policy would reduce concentration and improve competitiveness when it comes to what applications can be built using these powerful AI models.  

KRISTAN: Are there other examples where some near term regulatory attention is needed?  

JUSTIN: I think that there are some near term harms that we need better solutions for. One is sexually explicit digital forgeries. Some people call them non-consensual intimate imagery. But most people call them pornographic deep fakes.  

Some of these tools have made it easy to produce this type of material. There’s no good data. But there are a lot of anecdotes of schools and communities being torn apart when someone decides to make one of these videos or images and circulate it. The targets are almost always women, but not entirely.  

Right now there aren't significant enough laws in place to demand that platforms remove this material. States have begun to pass laws that make it possible to hold people who create that type of material liable. But it's still very difficult to demand that a communications or social media platform remove that material from the internet. Of course, once it's on the internet, it's out there. We're going to have to look at things like that. There's also an enormous amount of effort that's been put into election-related issues. 

On that last subject, I understand that the Take It Down Act just passed the Senate yesterday. But this question around election issues has run headlong into the First Amendment in some cases. There are questions over veracity of content, political campaign messaging, et cetera. I think that's going to be complicated.  

MERVE: We already have some AI governance frameworks in place, but we need better enforcement and better monitoring.  

For example, laws like the Civil Rights Act, specifically Title VII, prohibit discrimination. Agencies in the U.S. such as the Equal Employment Opportunity Commission and the Department of Housing and Urban Development, along with laws like the Fair Credit Act, are designed to prevent discrimination or adverse outcomes. 

But we are increasingly seeing adverse impacts from AI systems across different areas like employment, housing, healthcare, credit scoring, access to insurance, et cetera. Still, holding the actors accountable has been a challenge. Agencies often don’t have sufficient resources and expertise to do that. 

Another obstacle is the proprietary nature of many AI systems. Courts and regulators can’t force them to disclose their algorithms or design decisions. That makes it difficult to enforce existing regulations, so discriminatory outcomes are already happening. We need better know-how and better enforcement mechanisms. To Justin's point, the shortcomings of our existing regulations and agencies are really related to issues of liability, defects, and the challenges posed by newer, emerging technologies. 

Over the years, we've had many incidents involving predictive AI bias and discrimination. This includes what we refer to as predictive versus generative AI, where biases and disparate impacts have happened. So now we’re looking at those regulatory and policy gaps, particularly in areas like defects in AI systems, generative AI, and transparency at large. 

KRISTAN: Building on Merve’s point, I want to dive into this tension we’re seeing between the need for strong regulatory frameworks and the challenges of actually putting them in place. On one hand, we don’t always have the resources, expertise, or even the right people to figure out exactly what needs to be regulated or how to address the risks. On the other hand, there’s the proprietary nature of technology – intellectual property and the drive for innovation – which can feel like it’s working against regulation. 

But innovation and safeguards don’t have to be at odds. They can coexist but it’s just tricky to find that balance. This is something people have written papers, built careers, and spent years thinking about. So I’d love to hear your thoughts on where we are right now. How do we strike the right balance between allowing technology to grow and adapt while still putting the right protections in place? 

Are there some practical, low-hanging fruit policies we could focus on? Maybe something like the Take It Down Act, though I’m not super familiar with the details of that one beyond what’s been mentioned. What should we be doing, as a society and as policymakers, to manage this balance as technology keeps evolving? 

SAYASH: I can start, although this subject is very much in Merve's ballpark. I think one of the best things governments can do is set standards through procurement. The U.S. federal government, for instance, is one of the biggest purchasers of technology. 

One of the tensions you highlighted often arises when companies are forced or compelled to disclose certain things about their technology. But if you set disclosure as a precondition for procurement, like a standard that a company must follow for the federal government in the U.S. to buy the technology, then that isn't compelled speech because companies are entering into a contract voluntarily. The U.S. government has used this approach multiple times over the past decades to proactively set standards for responsible tech, reduce concentration, and so on. 

One example that comes to mind is that the Office of Management and Budget in the United States requires that the software they procure be open source. I think similar requirements underlie many other federal procurement standards as well. This has led to a de facto approach where vendors open-source the software they're selling to the government. 

I think similar strategies could apply to AI. Procurement might be a very good mechanism to set standards without running into issues of compelled speech or blocking innovation. Because technically you aren’t forcing companies to do something. Only if they want to enter into a contract with the government do they have to comply with these standards. 

MERVE: I’m definitely in favor of using procurement as a lever to set standards and better practices. In terms of other measures or protections, we can draw useful parallels from cybersecurity audits, tax audits, or financial audits. In these cases, trusted parties are given access to an organization’s operations, where they independently test and verify processes. They then issue a report, like a financial auditor or public accountant might do on behalf of the public. If there are issues, they’re disclosed publicly, but without revealing everything they accessed during the audit. We could adopt a similar approach for companies, where we mandate independent audits or oversight for the public interest. That would provide us with some transparency. 

A third thing is that companies use different standards depending on the jurisdictions they operate in. For instance, when deploying AI products or algorithms in Europe, you have to abide by data protection regulations. There are certain things you can and cannot do, and obligations you have. But in the U.S., where there’s no federal regulation and many states don’t have robust laws, companies can do whatever they want.  

So it’s not an issue of technical hardship or business limitations. They just choose not to deploy the same standards because they’re not mandated to do so. As consumers and policymakers, we could demand the same level of protection already being implemented somewhere else. Learning from international regulations and applying those best practices here could be another lever. 

KRISTAN: Justin, I’d love to hear your thoughts on this. Thank you both, Sayash and Merve. I noticed one of the links you shared references Colorado, where we recently passed a state bill – one of the first at the state level to address these issues and put some safeguards in place, even going beyond federal efforts. As someone based in Colorado, I remember this bill being celebrated as a good first step. But it wasn’t great. It has flaws and needs to be improved upon. But it shows we’re taking initiative. 

Justin, whether it’s the Colorado bill or other bigger trends, I’d be curious to see if you wanted to add to the conversation. How are you seeing regulation evolve to address these challenges? Are our efforts striking the right balance? 

JUSTIN: Building on some of the points that have already been made, privacy protections in the AI age should be fundamental. Knowing how your data is handled, fed into systems, and used to determine outcomes in your life is very important. And of course, in the United States, we don’t have any comprehensive privacy legislation. We’re nowhere near the kind of robust AI legislation that Europe has passed. Some of the measures in Europe, like those under the GDPR, might feel like basic safeguards or table stakes. Even though some would argue that the GDPR hasn’t been effective, a lot of that’s completely lacking in the U.S.  

That’s not necessarily the case in Colorado. It offers privacy legislation that, to some extent, protects its citizens from the broader gaps in U.S. privacy laws. It’s also the first state to pass comprehensive AI legislation. I don’t want to comment myself on its efficacy. I don’t think we know that yet. Some people feel that the law goes far enough and some people don’t. But the state has also passed an amendment to its privacy act that protects neural data, which is interesting. So Colorado is thinking about the next generation of technologies, when AI may interface more directly with us, through brain-computer interfaces. 

So Colorado is kind of out in front on a lot of these questions, along with states like California and Washington. It will be interesting to watch what happens and the extent to which these issues are being addressed in a local context. 

KRISTAN: Okay, we’ve got maybe 15 or so minutes left. I want to touch on a few big things, and if there are other audience questions you really want us to cover, please drop them in now. But I’d like to come back to where we started, when some of you were talking about the threats that generative AI and other tools could pose to democracy and democratic values. 

We’ve already talked a little bit about some of those threats, and elections were mentioned. I’d be curious to hear what you all see as some of the biggest risks. Trust and misinformation have come up in the chat, which is why I’m bringing it back into the discussion. Where and how do we know what information to trust? That’s becoming extremely complicated. 

We’re seeing misinformation and disinformation really start to abound, and potentially become even bigger challenges. So there are a lot of risks and threats to consider. I’m curious about what you see as the most significant ones. And on the flip side, I’d love to hear your thoughts on opportunities. Are there ways these tools can be used for good? Maybe we can look at both sides of the coin when we think about democratic values. 

MERVE: I think we first need to define what falls under democratic values, because that’s part of my organization’s mission: human rights and democratic values. When we talk about democratic values, we’re talking about fair elections, representation, protection of minorities, oversight, access to information, fair journalism, and accountability. 

If we look at how AI is impacting these areas, each one is affected in very different ways. But for me, the bigger question or theme is democracy and trust. We’ve reached a point where many of us are questioning the reality of what we’re seeing, hearing and reading. Have you ever asked yourself, 'Is this real, or is this fake? Was this generated?' That’s the moment in time we’re living in now. 

When you start questioning everything, especially in a polarized political environment, it rips apart the fabric of society and the fabric of trust at large. I don’t want to sound like a doomsayer, but I think this is a critical call to action for all of us to understand why we need to act now. We can’t afford delayed policy, delayed citizen and consumer action, and delayed government involvement. 

So, like I said, I would put democratic values and trust as the bigger title, with everything else we’ve discussed fitting underneath that. As for strengthening democracy or using AI for beneficial purposes, we can always flip the coin. One thing I love about AI is its ability to explore data at scale and uncover things we don’t intuitively understand. 

AI can help find structural biases, systemic issues, or inequities, et cetera, in different domains. It can also reveal where data is lacking. I think that exploratory capability is really beneficial. That’s one of my favorite areas for using AI.  

SAYASH: So I really like this framing of treating democracy and trust as high level goals and then looking at individual problems underneath them. I’ll focus on two problems, one that I think is real and concerning, and one that I believe has been overplayed, if not a distraction from the real issues, especially when it comes to AI and disinformation. 

The first problem, which I think is very real, and we’ve already seen evidence of this, is what’s known as the liar’s dividend. Let’s say a photo of a powerful figure doing something controversial goes viral. If people are questioning them, they could just claim the photo was AI generated. We saw this happen when Donald Trump claimed that a photo of Kamala Harris’s crowd size was AI-generated because he didn’t believe the crowd was real. We’ve seen this over and over again, more so in the global south. In India, for example, politicians have claimed that evidence of corruption was AI generated. As people become more skeptical around information, the liar’s dividend will increase to the point where people can reasonably claim that any piece of media is AI generated and hope to get away with it. 

The second problem, which I think is less important than it’s often made out to be in the news media, is the idea that AI will make people believe events that never happened or change their minds about whom to vote for. There were two examples from the 2024 presidential elections that illustrate this point. There was a fake robocall using an AI-generated replica of Joe Biden, asking New Hampshire voters not to participate in the primaries. 

This was a clear example of election interference. But AI wasn’t really the bottleneck for this attack. The voice could just as easily have been created using a voice actor, which costs around $200 an hour. The causal mechanism for this attack having an impact is that the caller on the other end actually believes that Joe Biden is personally giving them a call, which is a bit implausible.  

Even without AI, this kind of attack could have been carried out with a small budget. So AI wasn’t a necessary ingredient. What I've been doing for the last couple of weeks is going over every single example of deepfakes that was used in 2024 elections, not just in the US, but across the world. 

In basically every single case, the cost of reproducing similar attacks without AI was only a few hundred dollars. There were only 85 examples of false information that were collected. The total number of examples when AI was used for misinformation was less than 100, according to Wired’s election project. That’s compared to the tens of thousands of pieces of misinformation fact-checked around the world. 

So I think when it comes to AI making people believe in things that never happened, I think all of the evidence points the other way. People tend to be a lot more skeptical of information that they have doubts or hesitation about, especially in a polarized society. If there are things you already believe in, if there’s a worldview you already agree with, then your standards for skepticism are way lower, to the point where AI probably isn’t even needed to convince you.  

This has been called the ‘cheap fakes’ phenomenon, where you have edited content or even video game footage that was convincing enough to people. So for all these reasons, I don’t think AI dramatically changes the cost of convincing people to believe in things that never happened. In most cases, our response to new information is to increase our skepticism unless we already agree, in which case our skepticism is very low. So, in neither case does AI play a major role. 

JUSTIN: I suppose I'll just add, since you asked about positive examples. Are there any? I believe you asked about that. We have seen some around the world lately. 

One example is the ability of the former Pakistani prime minister to campaign from jail using an AI voice clone of his speeches. Those recordings circulated, allowing him to still participate in the democratic process. It’s an interesting example of AI being used to give voice to someone. Some people have pointed out how campaigns with limited capital have been able to generate great ad materials and things of that nature. It certainly could be seen as a benefit to political campaigning, since it helps level the playing field. It’s making it possible to do more with less.   

I'm also thinking about Sayash’s point about how much he’s using generative AI for coding, and the extent to which people are now able to build websites, tools, databases, scrape and combine information. That may help.  

Another potential positive area is the use of AI in participatory democracy. There’s a lot of talk right now about systems that use recommendation algorithms and other mechanisms to try to help people engage in democratic discourse or come to consensus in new ways, potentially at scale. I think that’s all very interesting and should be looked at.  

Some folks are interested in whether and how some of these ideas can be deployed on social media platforms to address issues like polarization or help people come to consensus on matters. But does all of this add up to good news for democracy? I’m not sure. 

I totally agree with Sayash that the fears around a nuclear deepfake, like a fake Joe Biden video that throws the election, haven’t materialized yet, certainly not in the United States. But AI does challenge some epistemic fundamentals, as Merve pointed out. I’m very interested in AI-powered search. AI-generated search results may contain misleading, false, or marginally wrong information. I don’t think we know quite what that does, having that authoritative knowledge panel present with errors in Google. And it’s impossible to tell.  

I’ve experienced this myself when my kids have a question at the dinner table. One of them will Google it and then read whatever Google serves up. It’s just not quite right. And I think there’s a big question about what this means for the substrate of our conversations these days – so much of it is the Google search. So many of them start with “I can’t remember that fact. I’m not sure about that. Ask Google.” What is the effect when that foundation, the stuff we assume is correct, is marginally wrong? What does that do to our ability to relate to reality and to one another? I’m not sure. We’ll find out. 

KRISTAN: I’m laughing at that because I’ve seen so many random, off the wall things in those Google search returns. That has helped me, personally, when I have thought about these tools, where they are now and how good they are. It helps me keep a healthy skepticism.  

Maybe that’s my advice for everyone listening today. We have to think critically, teach those critical thinking skills, do our due diligence, and keep that healthy dose of skepticism.  

Okay. I'm going to let you all have the last word here. Thank you so much. It's been a fabulous conversation. We could spend much more time together.  I'll leave it a little bit open-ended. I'm going to give you a directional question to answer. If you don't like it, then leave our audience with whatever closing thought you’d like.  

One of our reasons for having this event was to try to dream big, to think of our future state and where we’re headed. But we also wanted to capture some of the questions that we don't yet have answers to, that maybe we wished we did. 

If there was a big question that you wish we had the answer to today, what would that question be?  

SAYASH: I think the big one for me is, how do we make sense of how people interact with generative AI? This involves both the constraints on what data access we have and the fact that the answers you get from a generative AI system are highly sensitive to how you frame the questions. Simple things like changing the order of multiple-choice answer options can lead to drastically different results from these models.  

So we need to figure out both the technical challenges – how to evaluate these systems better – and how to solve them. And then the operational challenges, like how to access the data we need to understand how real world users interact with these models.  

MERVE: I was going to say, for me, the bigger question is: how are we going to solve, and when are we going to solve, the broader, higher impact issues like the environmental impact of AI, the data scraping, and the constant need for more and more data. 

And where is that point where we can actually do more research and get more creative about smaller models or more specialized models that can solve problems in particular domains more efficiently, rather than this constant pouring of energy, water, data, and talent into building bigger and bigger models? 

JUSTIN: There's a funny question in the chat. How do we know all the panelists are real? Could a low budget hacker create an avatar? I think I heard this from Hany Farid, who's a real expert on media forensics at UC Berkeley. He said if you're really worried someone's not real, ask them to turn their screen or turn themselves sideways. Normally the distortion effect won't hold. So if we all just kind of turn ourselves sideways a bit, you can trust that we're real. I guess one thing I'm thinking about right now is how do we get more people involved in the conversation we're having? 

Kristan, thank you. Thanks to the Institute for Science and Policy. Thank you all for hosting this. These are really fundamental issues. Sayash and Merve have spent their waking hours trying to engage people on these questions and explain why these things are very important. How do we engage more people to feel like they have agency in tech policy conversations? Those conversations are often dominated by a handful of voices, entities and organizations, and often feel out of reach to people. I think that's one of the big questions I've got going forward. So if anybody has good ideas about that, let me know.   

KRISTAN: Fabulous. I think that's a lovely note to end on. Thank you all. Thank you Merve, Sayash, Justin, this was an absolutely lovely conversation. So much to think on and great resources to be shared as well. These resources will also be made available on our website and in email communication. 

For more on this series visit our 2024 Symposium page.  

Disclosure statement:
The Institute for Science & Policy is committed to publishing diverse perspectives in order to advance civil discourse and productive dialogue. Views expressed by contributors do not necessarily reflect those of the Institute, the Denver Museum of Nature & Science, or its affiliates.