By Rajesh Kasturirangan

In this conversation, AI is explored in terms of its potential impacts on politics and the social class structure, many of which are now being realized as these systems go into production. We assume a global framework even though national frameworks are most pertinent when considering elections. Examples are drawn from India and the United States. – Editors

Editors: Can we address AI in two parts? One part is where you just explain to us what AI is and how it is different from the earlier stages, where we had algorithms determining what we would see and what would go viral. That would be the first part in general: how AI is different from previous forms of information technology. The second part is about AI as applied to elections, whether in the United States or in the pioneering developments that have happened with, say, Modi’s internet army in India.

Rajesh Kasturirangan: Let’s talk about it!

Editors: Over the last few years, we’ve become used to the debate about how social media is affecting elections, about the possibility that a group in London could influence our experience of Facebook resulting in us voting for Trump over say Hillary Clinton. How is AI implicated in these kinds of things, or are these different phenomena?

Rajesh: There isn’t a general answer, because the word AI, of course, is a polysemic concept with many different senses. But one way to think about it is that, instead of “hard coded” algorithms, i.e., algorithms that might tell Facebook exactly how to privilege one kind of post over another, in an AI-driven world it could be much more personalized. It could be that the depth of personalization, especially with an organization like Facebook that has the data, is the most crucial thing. They have data that can be used to tweak what people see, and the degree of personalization, the degree of shaping of what you see on your feed, is going to be that much greater with AI. And not only what Facebook shows you, but how Facebook… I mean, this is ultimately an advertising company, and how they expose your data to others, and how people query it… I mean, I’m not privy to how Facebook sells ads, but I’m assuming that AI will also change how those marketplaces for ads are both created and regulated.

Editors: So, if I were to take what you said: in the past, there were general algorithms that targeted categories of people. It was too labor-intensive to develop algorithms refined down to the level of the individual. With machines making these decisions, the targeting can happen per individual.

Rajesh: Absolutely. And let’s think of something which we would generally see as a positive development. The amounts of hatred, child pornography, and the like that a company like Facebook has to deal with are immense. And they employ many thousands of people just to see if a post, or a particular poster, is producing content that should simply not be there. And you can imagine that with AI, this detection gets that much better. I mean, spam again is a good example of that.

But it’s not just that AI will become more personalized, which it will; it may even be contextually personalized, which is to say that if you are posting happy thoughts, it may actually serve up content that is targeted to somebody who is both you and happy. It’s that degree of access to your current mental state that seems to me to be the most alarming possibility.

Editors: So, it could have the effect of throwing gasoline on a fire. If I’m posting angry thoughts about certain things, it could cater some content to me and aggravate the circumstances for me.

Rajesh: Right, and given that emotional contagion is something that is, for better or worse, best done in the moment, if some inciting incident happens and people are posting angry thoughts, then AI has a way of getting those thoughts in front of other people who are also angry about the same thing at the same time. And some of that would be hard to do without this new tech.

So imagine you are posting angry thoughts in one kind of way, and I am posting angry thoughts about the same event but not using the same words, so the reference isn’t clear. Natural language processing, the kind of large language models that you have today, could reasonably assess that we are talking about the same things, which would’ve been very hard to do without a human being in the loop earlier. So it’s this kind of matchmaking that AI is going to do much better than before.

Editors: It’s very tempting to jump into the elections directly now. When we look back several decades to the campaign of George Bush Sr. against Dukakis, there was the notorious Willie Horton advert that played on a certain meme about crime, about African-American people, about African-American males specifically. Trump is now doing the same thing with respect to immigrants and crime, and murder in Georgia, etc. What are the possibilities with AI and this phenomenon?

Rajesh: So again, it’s personalization that might be the real danger. I don’t know if it’s happening, but imagine it could. Imagine it’s not Willie Horton, the same African-American figure, being shown to everyone. Given how mass incarceration works in this country, wherein nearly every single county in the United States has incarcerated African Americans, you can imagine that the person whose photo is on that particular ad is a person in your county.

And not just that: there are synthetic images of that person, maybe even in your neighborhood, because your geographical location will also potentially be available to these advertisers. Just imagine being shown a photo of someone you anyway find threatening, but they are shown two doors away from you or on your street, rather than just as a perp-walk kind of face. That, I feel, AI can do now; whether or not it is done in the next six months, it is going to happen.

Editors: Rajesh, we’ve read about similar kinds of things happening or beginning to happen in places like Brazil, Argentina, and India. Are there any international examples that you can think of?

Rajesh: Actually, the example, in some ways a positive version, that I’m most taken by right now is how Imran Khan in Pakistan used AI for campaigning. He was in jail, but he was able to use AI-synthesized videos as a very effective campaigning tool. And as you know, his party did a lot better in the recent elections than anybody would’ve given them credit for. So that’s the kind of thing… I mean, people are talking about it, certainly in India. I don’t know enough about what happened in Brazil or Argentina to say one way or the other, but you can imagine deep fakes where an opposition politician’s speech is slightly modified to express support for the current government, or whoever is their opponent.

And this doesn’t need to last… Because the media landscape is so fleeting, you can imagine a fake video being produced, very quickly distributed, and then vanishing, but it’s done its job. And that, I think, is definitely a possibility. And I can imagine, again, that the really deadly combination won’t be fully artificial media. We have heard about what’s infamously called the IT cell in the Indian context, where you use AI to target very specific, let’s say, WhatsApp groups or other geographies, but you do that with an actual human being. So the script that they are making a video from is artificially generated, but there is an actual human being who is creating the video or some other media form. I mean, that’s happening for sure.

Editors: With this particular example, you’ve transported us to the practical elements of actually creating content with AI. When we look at the United States right now, if I log on to ChatGPT, for example, they have an explicit prohibition on creating political content, essentially any kind of discourse around politics or very controversial topics. Are there ways around that?

Rajesh: Well, Llama, which is a competing, Facebook-created large language model, is open source. So if you are willing to download your own open-source version of these models and tweak it for your users, there’s nothing stopping you from doing that, right?

And in fact, I know this because, again, for positive reasons, you don’t want to be dependent on OpenAI for all your AI needs. Many countries, and I know this is happening in India and I’m sure it’s happening in other places, are creating large language models for their languages, right? India has so many languages. You may even argue that outside of the Anglosphere, the best models may not be OpenAI’s. They might be ones that are very much controlled by the national authorities of those nations.

Editors: When we think about the United States, any medium-sized city probably has 50 to 150 languages spoken in its school district. Do you see, then, the possibility of a lot more outreach, particularly to so-called New Americans or immigrant groups, using these large language models to generate, say, Spanish, or for that matter Hmong-related, content?

Rajesh: I mean, if I put on a Dr. Evil hat, I would say that given that the elections are going to be quite fiercely contested, I can imagine trying to sway some very, very hyper-targeted populations. So imagine trying to sway… Say you could keep Arab Americans in Michigan away from the voting booth, so they don’t even vote for Trump, they just don’t show up, right? By targeting them with Arabic-language ads that, I would say justifiably, talk about the genocide that’s happening in Palestine. That would be so much easier to do with AI, right? Because of the speed of synthesis, you could have somebody who’s a Republican operative sitting in D.C. create those ads without too much effort.

And I can see the same thing happening in Spanish, especially targeting Spanish-speaking men, who we know have been moving in a more conservative direction in the last few years. So I do feel like there is a business model to be created, because it’s possible to do this at scale without too much expense.

Editors: If we were to think for a moment about two election-related things at very different levels: on the one hand, do you see the possibility for any kind of regulation impacting these uses? And if there is a possibility for it, is it too late in any case? And what legislative agendas should we have with respect to AI?

Rajesh: I feel like it’s not my domain of expertise to really think this through. And the reason I’m saying that is, from a distance, as a person who doesn’t know the law that well, in fact doesn’t know the law at all, but is fascinated by the speed at which AI is moving, I don’t see how, at least in this election cycle, it’s going to be regulated in a way that we would want it to be. Precisely because… I mean, anybody can download those models… The stakes are so high, I don’t expect the actors involved to heed the regulations that are going to be put into place, to be honest. I definitely don’t see that in other parts of the world, and I don’t see that happening here either. I would say that if we want regulation that is going to really work, it has to engage with the technicalities of the algorithms, in my view… I mean, the models, and what kinds of data sets they are allowed to use, all of those things. And for that, I feel, we don’t have time. It can be done, but I just don’t see it happening this time around.

Editors: Tabling that for a moment then, what about the opposite end? I’m an individual, right? I hear a lot about politics, but I’m not really connected to any political party. I am offended by some of the things that some of the candidates are either saying or the policies they’re implementing, and I want to maximize my impact. I want to reach all my friends who I went to college with and reach family in Pennsylvania, Wisconsin, Georgia, all the swing states. Can AI help me?

Rajesh: I’m not sure if AI would help such a person do anything more than what they are already capable of doing. Let’s say I’m this person you’re describing: my self-image is that I’m not that political, but I’m really disgusted with X, Y, Z things that are happening, and I want to be able to say that. One way to do it, and unfortunately I think that mythical person won’t do it, but one way they could use AI is to take their half-formed thoughts about what they like and don’t like and turn them into reasonably readable prose, which they could then start sharing.

Editors: And you see a role for AI in that, in reworking prose?

Rajesh: Absolutely. For example, again let’s say I’m an Arab-American in Michigan, and I want, on the one hand, to say that what the Biden administration is doing in abetting the Israeli government in so many ways is deeply wrong, and yet also to say that, for our own community’s flourishing in the United States, Trump would be worse. To make that nuanced argument is hard for most people, but I think AI can help you make it in a way that would be accessible to you as well as your peers.

Editors: Listening to you, it sounds to me like, if I’m that individual, I can develop a script or something, have it grammar-checked, and work out the different nuances and iterations of what I’ve just written very quickly using AI. But then I would be well-advised to reach out to a trade union or a special interest group that seems to speak to my values and see what they have to offer to amplify my own voice.

Rajesh: And you can imagine, again, if that trade union or special interest group was proactive, they could reach out to these constituencies and say, “We will help you craft those personalized messages.” Right?

I mean, if I say, “Take this survey, tell me how you think about X, Y and Z,” I can then use AI to spit out what will be reflected back to this person as a reasonable representation of their view, which they can then also use in their communities.

Editors: If you were to try to formulate some general advice for people who are concerned about AI and tailor that advice in a way that would apply to helping them think about AI itself and which candidates they should be supporting across the board, how would you approach it? Where would we begin in providing that advice?

Rajesh: One thing I think about quite a lot regarding AI is that because it’s such a capital-intensive technology, it tilts the field even more in favor of the biggest monopolists. And to that extent, there is a fascist takeover of AI waiting to happen. I would therefore pay close attention to what my representatives and others are saying about AI. And again, because it’ll be controlled by monopolists who will, for either political or economic reasons, be selling it to other capital-intensive customers, what I would be really looking for is how to democratize access to these technologies.

What should be the logic of that? Because I think open source doesn’t work. I mean, if anything, these very large corporations have used open-source data sets, open-source software to create the AI that they then control.

And so what politics will genuinely liberate AI from this monopolistic control? I don’t have an answer to that question, but what I would be looking for are people who say that they want to pay attention to this and that they want to bring that into democracy rather than have it be a technical question.

Editors: So that people may think intelligently, forgive the pun, about AI, I wonder if you can explain some very basic concepts here to help us understand specifically why it is so capital-intensive. A lot of people think, “Well, I just type something into an interface and it produces this language.” We’d like you to delve into the back end for us. If you could explain what large language models are, what it takes to build such a data set, and the computing power required. For example, we’ve read that the projected impact of AI in terms of power consumption is enormous. Could you say a little bit more about all of those elements, which we could then place at the beginning of the interview?

Rajesh: There are at least three things that come to my mind about why AI is so capital-intensive. One is, of course, access to proprietary data sets. There are open-source data sets, which are also very large, but of course they’re available to anybody. Making use of those open-source data sets costs computing, which will be the second thing, but just-

Editors: If we could interrupt you right there, what’s a “data set”?

Rajesh: A data set could be a storehouse of images of people across the world, or it could be every single post ever made on Twitter. The first is likely to be a little bit more open because the images were collected from the open web, but Facebook or Instagram would have all these proprietary images that they’re not going to give others access to.

Editors: So they would have these mind-bogglingly large data sets.

Rajesh: Exactly. Exactly. Videos, images, text, all of these things. And-

Editors: And the metadata like the timing of posting of those images and locations, etc.?

Rajesh: Right. So data is expensive because the very large data sets you need to train these models are partly open and partly private, and it’s the very largest private companies who have the largest data sets. Your Facebooks and Amazons and Googles are the ones who have them. So that’s one source of what, in the language of Silicon Valley, is called the moat, right?

So that’s one. Then the second is, of course, the cost of the hardware and the computing it takes to crunch through these data sets. You have very large models, so imagine a model with billions of parameters, and you’re feeding it all this data. You have to tune the model until you reach whatever success metric you have. So that costs money, because it costs computing. And then the final-

Editors: Give us a sense of the scale. We just built my little 4900K Intel gaming machine with 96 gigabytes of RAM. Is that big or small?

Rajesh: That is small for these purposes. Yeah, it’s far, far too small for this stuff. I mean, of course-

Editors: So it’s too small by orders of magnitude, right?

Rajesh: We are talking of petabytes for sure, and a petabyte is actually a million gigabytes. So you go from gigabytes to terabytes, where a terabyte is 1,000 gigabytes, and a petabyte is another factor of 1,000. I think it’s only when you’re doing peta-scale computing that you become a serious player in this business.
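[The unit scaling Rajesh describes can be checked with a quick back-of-the-envelope calculation; a minimal sketch in decimal units, where each step up is a factor of 1,000. – Editors]

```python
# Decimal storage units: each step is a factor of 1,000.
GIGABYTE = 10**9               # bytes
TERABYTE = 1_000 * GIGABYTE    # 10**12 bytes
PETABYTE = 1_000 * TERABYTE    # 10**15 bytes

# A petabyte is a million gigabytes, as stated above.
print(PETABYTE // GIGABYTE)        # 1000000

# A 96 GB machine against a single petabyte of data:
machine_ram = 96 * GIGABYTE
print(PETABYTE // machine_ram)     # 10416 -- four orders of magnitude short
```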

Editors: You’re talking about server-farm-scale operations, at minimum.

Rajesh: Exactly. And millions of dollars in investment. But I actually think that the most expensive, and therefore the most capital-intensive, thing is people. I mean, if you are an in-demand AI engineer, Google will pay you over a million dollars a year, right?

I mean, there are not that many such people, but let’s say you need 1,000 of them. Then you need a billion dollars just in salaries for these people, all right?
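[The payroll arithmetic is straightforward; a minimal sketch assuming the round numbers used in the conversation, 1,000 engineers at $1 million a year. – Editors]

```python
# Hypothetical round numbers from the conversation above.
engineers = 1_000
salary = 1_000_000                  # dollars per engineer per year
annual_payroll = engineers * salary
print(annual_payroll)               # 1000000000 -- a billion dollars a year
```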

I mean, think of OpenAI, which is the most talked-about AI company, I suppose. It has only a few hundred employees for its valuation; I read that it’s raising money at an $80 billion valuation. Now, if it has only a few hundred employees for that kind of valuation, those must be extraordinarily expensive employees. I mean, a union cannot pay even one of those AI engineers.

Editors: Okay, this makes sense. As you think about this bigger topic, then, of democratizing this capital-intensive technology, what do you see as the road to that kind of democratization? Is there a broader societal transformation that both depends on that democratization and will drive it?

Rajesh: I think so. And I’m saying that with what is in some ways a poor analogy: the comparison to the nuclear bomb, which was the previous world-changing technology that everybody was afraid of. You don’t want to democratize access to bombs; you don’t want them to exist in the first place. So that was the politics of the bomb. It’s not clear to me what the politics of AI should be, but I would say that it has to go hand in hand with much wider access to the nature of the technology itself. If the comparison is with something like literacy: you cannot have a democracy that depends on print if you don’t have enough literate people. So I think that one of the fundamental transformations that has to happen is widening access to these technologies, and not just in the sense that I have as much access to ChatGPT as you do, but that I use it productively in expressing myself, just as we would do with writing.