Neil Chilson Helps Turn Knowledge Into Benefits for Humanity
Think tanks are a vital part of the policy ecosystem, but what do they do? In this installment of Science Policy IRL, host Jason Lloyd talks to Neil Chilson, head of AI policy at the Abundance Institute. He has been involved in science and technology policy for his whole career, previously practicing telecommunications law and serving as the Federal Trade Commission’s chief technologist.
In this episode, Chilson discusses what it’s like to work at a policy think tank, the questions about artificial intelligence that motivate his work, and why he is optimistic about our technological future.

Resources
- Check out Neil Chilson’s book, Getting Out of Control: Emergent Leadership in a Complex World.
- Find more of Chilson’s work on his website and explore his Substack, including:
- “Red Teaming AI Legislation: Lessons from SB 1047”: How the concept of “red teaming” can be applied to creating legislation.
- “10 Years After the Best Tech Policy Movie Ever”: Lessons from The Lego Movie for emergent leadership.
- Learn more about the Abundance Institute’s vision for artificial intelligence policy by reading their recommendations here.
Transcript
Jason Lloyd: Welcome to The Ongoing Transformation, a podcast from Issues in Science and Technology. Issues is a quarterly journal published by the National Academies of Sciences, Engineering, and Medicine and Arizona State University.
I’m Jason Lloyd, managing editor of Issues. On this installment of Science Policy IRL, I’m excited to chat with Neil Chilson, head of AI policy at the Abundance Institute. We’ll discuss what it’s like to work at a policy think tank, the questions about artificial intelligence that motivate his work, and why he’s optimistic about our technological future.
Neil, welcome. Thanks for joining us today.
Neil Chilson: It’s great to be here.
Lloyd: I should note at the top that you and I have been friends for a long time, but we rarely discuss work. So I’m really excited to have the opportunity to talk about your job today. But first, we always start this series with a question about how you think about the field that you work in. So how do you define science and technology policy?
Chilson: One of the reasons I wanted to talk to you is that you can help bring me out of my day-to-day, in-the-weeds work and spend a little bit of time thinking about why I do what I do and what exactly it is. And so, this is a great question. It made me think immediately about what science is and what that means.
How do we set up the cultural and the policy frameworks that incentivize people to take knowledge and turn it into practical benefit for humanity?
When I think about science, I think of the discovery of knowledge, and most of my work actually falls more on the second side of what you said. It’s more on the technology side. So it’s less about discovering knowledge; instead, I help try to set up the frameworks and policy for turning that knowledge into practical benefit for humanity. And the frameworks I think about in that space concern what our primary drivers are, first of all, for generating knowledge, but then also for turning that knowledge into practical benefit.
And the big question that I have is: what are the right ways to create incentives to do that? And so, when I think of science and technology policy, I think about how we set up the cultural and policy frameworks that incentivize people to take knowledge and turn it into practical benefit for humanity. And often, those mechanisms are in the commercial sector; not always, but often they are, and so I think a lot about how to set incentives in the private sector correctly.
Lloyd: That’s really interesting. So if the focus of the policy component is incentivization, is there also a regulatory component to it, or do you see it primarily as fostering and enabling these technologies to facilitate human flourishing?
Chilson: The classic argument for policy interventions—using the tools of the state to shape markets or other types of human behavior—is that there’s some sort of failure: a gap that can’t be solved on its own because incentives are wrong. And so yeah, sometimes there is a need for regulation, by which I mean setting rules for what kinds of things people can or can’t build and how they can use them, because there’s a gap in the incentives for people to provide those types of tools.
The classic examples are things like externalities, where your plant pollutes a river but you don’t really suffer those effects. It’s other people who suffer them. And that’s because the costs that you would normally, in a business, try to avoid aren’t really imposed on you. And so, the question is how you set up policy incentives so that those costs are imposed on what’s called the “least cost avoider”: the people who are best positioned to avoid that particular cost. Often, that’s the party generating the harm, in this case the factory. But that’s not always true. Sometimes the least cost avoider might be somebody else who is better suited to change what they do in order to keep the productivity while mitigating the harms. And so, that’s a major component of policy discussions in the tech space.
Lloyd: So within that sort of framework, what does your day-to-day work in technology policy look like?
Chilson: So I’m the head of AI policy at a think tank, and right now a lot of my work consists of talking to state and federal legislators about this new technology: the promise that it brings, the risks that they’re hearing about, and what their role is. We have, right now, a rush of state legislation in AI policy; about a thousand bills have been introduced since January. Those bills do a wide range of things. A lot of them deal with problems that are often already covered by other areas of law, and some of them identify new gaps.
And so, it’s helping people understand when and why they need a new law when a new technology comes along, and when they don’t, and what that process should look like as they think through dealing with a new technology as it’s assimilated across society. And so, a lot of it’s talking to legislators. A lot of it’s talking to industry folks to learn more about what they’re doing.
Law is not like code. You don’t just write it, and then it works the way that you wrote it.
We’ve been doing something really fun that I’ve enjoyed a lot called AI legislative red teaming, where we bring together groups of people from a wide range of backgrounds: academics, technologists, even some students, people who are interested in this topic. And we have them role-play, basically. They take on a specific role: say, a state attorney general, the head of an AI lab, the CEO of a company that might deploy the technology, or the leader of a civil rights group. And then, in the nature of red teaming (a term that comes from cybersecurity), we have them develop the most self-interested motivation they could have. They write a little blurb about the most self-interested way of thinking about that job. And then we have them take a specific piece of legislation and see how, acting in that role and driven by that self-interest, they might manipulate the law to serve it.
And part of it is to help us identify how we can make the proposed bill stronger. But part of it’s an educational process for the people who are doing this, because there’s a common misconception, especially, I’ve noticed, among engineers: law is not like code. You don’t just write it, and then it works the way that you wrote it. There are a lot of other people involved. They’re all going to bring their own perspective and their own interests to the law. And so, helping people really grok that by actually stepping into the shoes of somebody else and trying to manipulate a law is, I think, very educational. And it really helps us stress test laws. So that’s been a really fun experience. It’s not a giant chunk of my job, but that sort of creative thinking about how we can help people think about policy better, as well as develop good policy, is really satisfying, one of the most satisfying types of work that I do.
Lloyd: That’s very cool. Can I ask you about just the logistics, the basics of reaching out to state legislators? How do you know who to talk to? What are your channels for communicating with them? Are you talking to staffers or state representatives? How does that process work?
Chilson: Yeah, it’s really hard. I think when people think of government, they often immediately think of the federal government and Congress. And that’s a lot of people, but it’s actually small compared to the number of people in state governments. And so, they’re really very different. Each state’s legislature tends to have a very compressed decision-making period. Some legislatures only meet every other year, and so things move very fast at the state level. And so, often it takes a lot of pre-work: knowing people on the ground already.
A lot of state legislators are not full-time. They’re part-time workers. They have other jobs. And so, they often run very thin shops and they have very limited time.
We have a lot of think tank partners at the state level in lots of different states who we have long-term experience with. And they are typically not super focused on technology or technology policy, but they have close relationships and they understand how their state legislators work. And so, often a first step is to connect with a partner in the states and be like, “Hey, who’s the right person to talk to?”
There are some states where we’re repeat players: we’ve been there a couple of times, we’ve done work, and so we already have relationships. But it’s hard to keep up with everything. There’s turnover at the state level. A lot of state legislators are not full-time. They’re part-time workers; they have other jobs. And so, they often run very thin shops and have very limited time. And so, it’s about identifying the right person, getting in, showing them something that’s useful to their job, and getting out. Be there, be brief, and be gone.
Lloyd: Yeah. That’s great, yeah. And you have a fairly unique perspective on what’s happening in AI policy at the state level. So I’m curious, with states famously being the laboratories of democracy, what are you seeing that you find really interesting? Are there any states approaching AI policy in a particularly good way from your perspective?
Chilson: Yeah, there are, actually. First of all, just for context, a lot of states are approaching AI policy essentially as a version of privacy policy. They’re trying to adopt big, comprehensive rules. But AI is a really amorphous term. It’s hard to define. And so, a lot of states are writing laws without, I think, really understanding what the primary effects are going to be, and they’re often modeling them on privacy legislation or on Europe’s AI Act, which was passed just a year or so ago. Colorado is a primary example of that. They have a law in place that they’ve been trying to figure out exactly how to implement. And so, that type of comprehensive, very big, amorphous structure I don’t find very helpful, especially if you have 50 different versions of it. I think that makes it actually quite complicated to comply with.
But there are some states doing targeted work on specific issues, and I’m a big fan of one in particular: Utah has passed an AI act that created what they call a learning laboratory. It’s an opportunity for developers, maybe in a specific space that has regulation, who aren’t quite sure how their technology or their use of it fits into the regulatory boxes, to come in and work with the Utah Department of Commerce to form a sort of agreement about how they can move forward. That gives the government some assurances about how the company will proceed, and it gives the company some confidence that they’re not going to get slapped down, without notice, for some rule they didn’t know would apply in their space.
The first example of this was a company that makes an app to support teen mental health. It’s not a therapy app exactly, but it deals with an issue where it seems like we probably need some innovation and some new ideas. But obviously, it’s also a pretty fraught space. You don’t want to mess this up. And so, they worked with the Department of Commerce to move ahead with this process. Out of that came not just the ability of this company to develop their product in a way that is safer and under some guidance from regulators; the Department of Commerce also came out of it with a proposed piece of legislation about mental health apps generally, which they then brought to the Utah legislature. And the Utah legislature adopted a form of that. They were obviously well informed about what a company actually goes through to comply with this, what it would look like, and what questions really need to be answered. And so, I’m a big fan of that type of process, which uses real experience between practitioners and regulators to inform state legislation. I think Utah has been doing a good job in that space.
Lloyd: Yeah. That’s really interesting. So how did you get to be the head of AI policy at the Abundance Institute? What was the path you followed into science and technology policy?
Chilson: I was always interested in computers, and as a kid I was also very interested in complexity theory. There was a book called Chaos by James Gleick that I was fascinated by, in part because it didn’t take very much code to make really pretty pictures. You could make these fractals that were beautiful and easy to code. And I guess I was lazy, so it was great. But what it taught me was that there are these systems out there that are complex, in the sense that you can’t reduce them to just their parts. You can understand the parts, but the way they work is more than the sum of those parts, and manipulating them is uncertain. When you affect them (this is the classic butterfly effect), you’re not really sure, if you change one little thing, how big the final effect is going to be.
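To make that concrete, here is a minimal sketch of the kind of tiny program Chilson is describing: a short Python script that renders the Mandelbrot set as ASCII art by iterating a one-line rule, z → z² + c. The grid bounds, resolution, and iteration cap are illustrative choices, not anything specified in the episode.

```python
# Iterate z -> z*z + c over a grid of complex numbers c and mark the
# points that stay bounded: the classic Mandelbrot set in a few lines.

def mandelbrot_rows(width=78, height=24, max_iter=40):
    """Yield rows of an ASCII rendering of the Mandelbrot set."""
    for row in range(height):
        y = -1.2 + 2.4 * row / (height - 1)      # imaginary axis
        line = []
        for col in range(width):
            x = -2.1 + 3.0 * col / (width - 1)   # real axis
            z, c = 0j, complex(x, y)
            for _ in range(max_iter):
                z = z * z + c
                if abs(z) > 2.0:                 # escaped: c is not in the set
                    line.append(" ")
                    break
            else:                                # stayed bounded: plot it
                line.append("*")
        yield "".join(line)

if __name__ == "__main__":
    for line in mandelbrot_rows():
        print(line)
```

Nothing in those few lines hints at the intricate boundary the output traces, which is the “more than the sum of its parts” point: a trivially simple rule, applied repeatedly, produces structure you could not predict by reading the code.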
Anyways, so I went on, did computer science for a long time, did software development, and in grad school got really interested in the policy implications of certain types of computer science. In particular, copyright and patent law were big issues, because this was the era of file sharing, when people were copying music and sharing it across their dorm rooms. It was a free-for-all in that space. And I was very curious about how that worked. I got a cease-and-desist letter from one of the recording companies for something that I did. I thought that was interesting. And I was teaching a class in computing ethics.
And so, yeah, I thought all the people who were doing cool stuff in this space were in law school, so I went to law school out of grad school for computer science instead of heading off to the West Coast. And there I discovered very quickly that patent and copyright policy were congressional fights. That meant you had to change legislation. That meant it was going to take 10 years, and you had very big parties on the other side, so you’re probably going to lose to Disney. That’s the way I like to phrase it. Just from a personal standpoint, I didn’t feel passionate enough about that specific issue to really jump in.
So, I went into telecom policy instead, which was not a congressional fight. Telecom policy is primarily a fight at agencies like the Federal Communications Commission. This is administrative law and rulemakings, which are very policy heavy; that means you don’t necessarily have to dive deep on the law, you can just think really hard about what you think are good ideas. And that was very appealing to me. Also, the comment cycle was much shorter. You’d get something done in six months instead of 10 years. So that really appealed to me.
A lot of the questions that we’re talking about in AI now are basically rehashes of all the policy fights I’ve been doing for 20 years, so it’s intellectual property, it’s disinformation, misinformation, it’s privacy, it’s data security.
And so, I did telecom policy for about eight years, and then I ended up getting invited to go to the Federal Trade Commission (FTC), which is the primary consumer protection and competition agency in the federal government. And there I got to work for one of the commissioners, and I learned a lot about consumer protection law, about how to make those tradeoffs, how to think about deception and unfairness and competition. And I ended up getting to serve as the chief technologist of the agency for part of the time that I was there.
And so, that experience was a really good opportunity to think hard about the big-picture role of government. I learned a lot about when it makes sense for government to step in. And the work at the FTC is very satisfying because a lot of it is stopping fraud, which is unfortunately rampant. And you can kind of feel like the good guy at the end. Lots of policy is much more nuanced than that; there aren’t really good guys and bad guys. But when you’re stopping fraud, there are pretty clearly some bad guys involved. So that was pretty nice.
And much of the work that I did at the FTC was driven by data privacy and data security. A lot of the questions that we’re talking about in AI now are basically rehashes of all the policy fights I’ve been having for 20 years: intellectual property, disinformation and misinformation, privacy, data security. So it was a pretty natural transition from the space I was working in to artificial intelligence, especially given my technical background.
Lloyd: In working in policy at the federal level, how important was it to be in Washington, D.C.? Am I right to think that you went to law school at George Washington University in Washington, D.C.? Was that consciously to be close to federal policymaking?
Chilson: It really wasn’t. It was primarily because I got waitlisted at Stanford, to be honest, and so my life would probably look really different otherwise. But you’re totally right that the affordances are very different. Had I gone to Stanford, would I have ended up in D.C. doing policy? There’s a shot, since I was interested in those topics, but it’s much more of an obvious track when you’re in D.C. already. And so, maybe I was not as conscious about such things as I should have been. In retrospect, it’s pretty clear that being in D.C. helped shape me into more of a policy lawyer, more of a public interest lawyer, and less of an industry lawyer or something else. So yeah, that definitely had an effect.
Lloyd: Yeah. And I would imagine that the people you run into, network with, and talk to, a lot of those folks are also going to be interested in policy and working in policy at the federal level if they’re in D.C.
Chilson: Yeah. One thing I would say, though, is that I was in grad school and went to law school, like I said, because the people who were doing policy work were in law school. And at the time, that was the right path.
It is easier than ever to build some policymaking skills and some advocacy skills without going to law school and without necessarily being in D.C.
I think that has changed quite a lot. It is easier than ever to build some policymaking skills and some advocacy skills without going to law school and without necessarily being in D.C. And we’re seeing a lot of what I would call policy entrepreneurs in the artificial intelligence space. There are a lot of organizations that have sprung up to do policy with people who are not necessarily deep in the D.C. weeds, but who have a real interest in policy.
And that brings trade-offs, right? There’s a lot of tacit knowledge they need to learn about what policymaking looks like. But it’s a new flood of talent, and often they have backgrounds that traditional D.C. people don’t. So it’s an interesting mix. For listeners who are interested in getting into policy, I would say there are a lot of paths to it now, in a way there weren’t in the past, when the traditional path was very much to go to law school.
Lloyd: We’ve talked a little bit about this, but maybe just to articulate it more specifically, what are the big questions that motivate you to do this work?
Chilson: I’m an optimist about technology. I’m a big believer that we’re a problem-solving species. We see something, we want more. We want to solve the problem. And we now have really powerful tools to do that, and we keep inventing new tools to do that. And so, I am very motivated about how to keep that process robust, strong, going.
I think there are actually two visions out there. There’s a vision of dystopia: that we’re facing doom and that technology is contributing to it. And there’s an accelerationist, utopian vision: that we’re facing the perfection of humanity, that technology is going to get us there, and that we have that opportunity. I think both of those visions are really wrong, and both of them are static visions. They’re both visions of the world reaching some point and never changing again.
When you’re dealing with complex systems, believing that you have control can actually magnify the unintended consequences.
And that sounds really boring to me, but also I think it’s just incompatible with what humans are and what human society is. I think we’re problem-solving creatures, and no matter what technology delivers for us, we’re going to find new problems to solve, and we’re going to see that it doesn’t solve everything. And that’s the motivation to get onto the next technology, the next thing that helps us build something new and do something more powerful. And so, for me, the big question is how I keep that process robust. How do I keep that desire to solve problems robust, both in culture and in creating the right policy and environmental incentives to keep solving problems? So that’s pretty abstract, but that’s the overarching question. How do we keep that process solid?
For me, one of the underlying questions is how we keep people aware that, to get back to what I was talking about before, human society is a complex system. Government is a complex system. All of our organizations are. And in complex systems, you can have influence, but control is limited. We often think about government as a form of control, but when you’re dealing with complex systems, believing that you have control can actually magnify the unintended consequences. And so, having some humility about what government can achieve, or even what individuals can achieve when they’re trying to work from a framework of control, and conveying that to people is really important to me. Figuring out how to say that in a way that both pulls in the examples and is persuasive matters, because I think it helps us better achieve those outcomes of having science and technology actually create the flourishing that I think everybody agrees would be much better than the alternative.
Lloyd: Yeah. So to talk a little bit more specifically about your role in AI policy, it sounds like you’re describing artificial intelligence—and correct me if I’m wrong—as primarily a tool for problem-solving, a very effective tool that we’ve developed for solving problems. And I’m curious how you see it solving some of the societal problems that we have in order to enable the social flourishing that you hope to see.
Chilson: Yeah. So artificial intelligence has been around for a long time. The term was coined in the 1950s, and when I say it now, I think people kind of assume I’m talking about ChatGPT, but obviously the technology is much older. In fact, I think it’s actually most useful to think of AI as essentially just cutting-edge computing. And so, I do see it very much as a tool that can help us do the things that computers are good at. And we’re finding out that computers, with some of these new algorithms, are good at a new set of things they weren’t good at before.
I think it’s actually most useful to think of AI as essentially just cutting-edge computing.
And so, the specific set of problems we’ve discovered it’s good at, at this point, is taking huge amounts of data and identifying useful patterns in it. The large language models, the chatbots, are doing that with language or with images, but the approach works with all sorts of unstructured data. And so, whether we’re talking about CAT scans, medical histories aggregated across a lot of people, or DNA sequences, there are a lot of areas where we have big, complex systems in which it’s really hard to see the connections between different things. AI, these large language models, and machine learning generally are very good at pulling out those kinds of connections and letting us run experiments and learn from them quickly.
And so, I see a huge amount of promise in the medical space, as some of my examples have already indicated. We’re going to be able to identify both healthy and unhealthy patterns that we haven’t really been able to see before, and we’ll be able to make new connections in that space. And so, I’m very excited about what it means for living a longer life, and for living a healthier life the whole time that we’re alive.
I think there are lots of other areas where there’s a lot of drudgery in typical knowledge work right now, and AI is very good at reducing some of that drudgery. Companies are still very much trying to figure out how to deploy these technologies, but getting rid of drudgery is just another way of saying increasing productivity and letting people focus on higher-value work. And I think there’s a huge opportunity in that space as well. We’re already seeing it happen in call centers. We’re seeing it happen in accounting. We’re seeing it happen in transcription in medical settings. And so, I think there’s a lot of boring work we can hand to AI, which is going to make people’s jobs both more effective and more interesting. So that’s another area where I’m really excited.
Lloyd: Yeah. Well, that’s a great place to end: on excitement for our AI future. Thank you so much, Neil, for joining us today, for talking through your job and how you got there, and for giving us some really interesting things to think about for the future of AI.
Chilson: Well, thanks for having me. I’m excited to be here, and it was great to chat.
Lloyd: Check out our show notes to find links to more of Neil Chilson’s work and other resources about AI policy. Please subscribe to The Ongoing Transformation wherever you get your podcasts. And write to us at podcast@issues.org.
Thanks to our audio engineer Shannon Lynch, and producer Kimberly Quach. I’m Jason Lloyd, managing editor of Issues. Thanks for listening.