Hard Fork

“Hard Fork” is a show about the future that’s already here. Each week, journalists Kevin Roose and Casey Newton explore and make sense of the latest in the rapidly changing world of tech. Listen to this podcast in New York Times Audio, our new iOS app for news subscribers. Download now at nytimes.com/audioapp

Mr. Altman Goes to Washington + Casey Goes on This American Life

Fri, 19 May 2023 09:00

In a congressional hearing this week, OpenAI’s chief executive, Sam Altman, appeared to be on the same page as lawmakers: It’s time to regulate A.I. But like so many other proposals to regulate tech, will it actually happen? The Times’s technology reporter Cecilia Kang helps us understand whether Congress will actually act, and what that could look like. Then, Casey talks with Twitter’s former head of trust and safety, Yoel Roth, before and after Elon Musk took over the company. Plus, how to moderate a social network. Hint: It’s hard.


Copyright © 2022 THE NEW YORK TIMES COMPANY. The New York Times encourages the use of RSS feeds for personal use in a news reader or as part of a non-commercial blog, subject to your agreement to our Terms of Service.

Read Episode Transcript

Casey, can I show you an app? Show me an app. Okay, this is a new app called New York Times Audio, and as you might guess from the name, it is a New York Times audio app. So this time, the app is coming from inside the house we're sitting in right now. Right. So this is a new iOS app. It's for New York Times subscribers. It's called New York Times Audio. Our show is on this app, as well as a daily playlist of news. It's got narrated articles. It's got podcasts from This American Life, Serial Productions, and The Athletic. And I've been using this app as a beta tester for a while now. And it is really good. All right, I'm going to get on the Wi-Fi and do this. I believe in you. Now, I'm looking at this for the first time and I'm scrolling, and let me tell you what I'm seeing. I'm seeing Turkey's president fighting for political survival. I'm seeing how $89 million of phone donations disappeared. The Daily's on here. Small, niche podcasts. And I actually am seeing a new episode of Hard Fork. So when I see that, Kevin, when I look at that, I think I'm getting everything. And there's articles, narrated articles. There's narrated articles, and it's the actual reporter taking time away from doing journalism to read it to your lazy ass. That's all happening in the New York Times Audio app. And it's free if you subscribe to The New York Times. You can also, and this is a very exciting feature for me, you can choose between eight different playback speeds, ranging from 0.8x all the way up to 3x. And if you are listening to Hard Fork on 3x, I actually do want to hear from you. That's too fast. I'll say that's too fast. Yeah. I mean, I kind of want to listen to it on 3x now just to hear what it sounds like. This is 3x. Yeah, our laughter at 3x. It does not sound great. No, it does sound like chipmunks. So if you want to hear Hard Fork, or any other show from The New York Times, at any speed ranging from 0.8 to 3.0, you can download New York Times Audio at nytimes.com slash audioapp. And you better. I think we should try to record the show at three times speed today. Let's do it. Yeah. Because I've got to get to happy hour. I'm Kevin Roose from The New York Times. I'm Casey from Platformer. Here's Hard Fork. I'm Kevin Roose, a tech columnist at The New York Times. I'm Casey Newton from Platformer. And you're listening to Hard Fork. This week, The New York Times's Cecilia Kang talks to us about why lawmakers are cozying up to OpenAI CEO Sam Altman. Then, Twitter's former head of trust and safety, Yoel Roth, talks to me about his battles against Donald Trump, Elon Musk, and other forces conspiring against content moderation. Casey, the big news this week in tech was not in California but in Washington, D.C., with a big Senate hearing about AI and AI regulation. Testifying most notably was former Hard Fork podcast guest Sam Altman, CEO of OpenAI, along with two other AI experts: Christina Montgomery, who is vice president and chief privacy and trust officer at IBM, and Gary Marcus, who's a professor emeritus at NYU. Did you watch this hearing? Well, I was on a plane to New York to meet with you in the studio, so I missed it. But more than anything, I was surprised that we're already here. Congress is talking about it. It's not just Don Beyer anymore, the congressman who we interviewed on a previous episode after he went back to school to study AI. Congress is paying full attention to this. I think that's a good thing.
Yeah, social media had existed for 10 or 15 years before the first congressional hearings where Mark Zuckerberg and other CEOs were called to testify. ChatGPT came out last November. Right. And we're already having congressional hearings about it. This thing is moving so quickly, and lawmakers are really trying to get their heads around it. And so this week we got a glimpse of basically the first time that a congressional hearing has addressed this issue of generative AI and some of the risks and promises that the technology has. One of our colleagues at The New York Times, Cecilia Kang, was following along with the hearing. Cecilia covers tech policy and regulation for the Times. Cecilia, welcome to Hard Fork. Hey, thanks for having me, guys. So Cecilia, tell us about this hearing. Often, with the hearings in Washington that Casey and I have talked about, the hearings sort of come after some tech company has done something really bad or really spooky, like the Facebook hearings over Cambridge Analytica, or Twitter's hearings over content moderation. There's something that sort of gets screwed up, and the executives are called to Congress to testify about it. There's like a smoking crater, and it's like, what exactly made that? So in this case, what did make Congress eager to have a hearing with the CEO of OpenAI and two other experts? Yeah, well, I think it helped that when ChatGPT by OpenAI was released late last year, everybody was trying it. And that included lawmakers in Washington. And so they were writing speeches on ChatGPT. They were conducting experiments, and they just had their holy-cow moments of, wow, this thing can do what I'm paid to do and elected to do, which is to give speeches and to have positions on policy. And this is scary. So I think it struck a personal chord. Interesting. It's almost like how social media hearings really dialed up after politicians started using them for their campaigns, because they were like, oh, this affects me and my job and my constituents. And I'd also say that there is a recognition in Washington that Congress has completely failed when it comes to regulation of social media. And there were a lot of nods to this during the hearing yesterday, with lawmakers saying, we don't want to make the mistakes that we made over the last few years, which was to talk a lot about regulating and not do anything. So they are trying to be faster and to look around corners. Which I find very heartening, I have to say, right? Because I'm somebody who, like Cecilia, sat through those hearings and saw bill after bill, and then nothing happened. And with some of the risks around AI, I think we do want to see them moving faster. So I actually found it gratifying that they were moving here. Yeah. And speaking of those hearings, we have some clips, sort of like a blooper reel of congressional tech appearances. Yeah. If you missed the past few years, I think these clips really sort of showcase the knowledge and perspective that Congress brought to the discussion. How do you sustain a business model in which users don't pay for your service? Senator, we run ads. I have a seven-year-old granddaughter who picked up her phone before the election. And she's playing a little game, the kind of game a kid would play, and up on there pops a picture of her grandfather. And I'm not going to say into the record what kind of language was used around that picture of her grandfather. But I'd ask you, how does that show up on a seven-year-old's iPhone, who's playing a kid's game?
Congressman, iPhone is made by a different company. Does TikTok access the home Wi-Fi network? Only if the user turns on the Wi-Fi. I'm sorry, I'm sorry, I mean, I understand that. So if I have the TikTok app on my phone, and my phone is on my home Wi-Fi network, does TikTok access that network? It will have to access the network to get connections to the internet, if that's the question. Three classics. That was, of course, Mark Zuckerberg, Sundar Pichai, and Shou Chew of TikTok. Yeah, so this is sort of the kind of tech hearing that we've come to expect from Congress. I would say the tone of most of these hearings has been most similar to, like, a Genius Bar appointment with a very confused customer, but like an angry Genius Bar appointment. You know, and that's the thing: these lawmakers, when they ask their bad questions, they ask them with so much anger and confidence. So I would say not totally reassuring. Can I just underscore that point? Because that is the most emblematic thing. It's like, I'm going to ask you a question, and I've never been more mad, and I also have no idea what I'm talking about. Precisely. Yes, my iPhone is on the fritz, and it is a personal affront to democracy. So this hearing, however, was a little bit different. Yeah, so I was really struck by how not adversarial this hearing was, and how lawmakers were very friendly, particularly towards Sam Altman. They were really approaching him like he was a professor. Like, come educate us, Sam Altman, on this technology and tell us how we should regulate you. The posture was so different. There wasn't the sort of performative, I'm-so-angry-your-company's-terrible-for-democracy tenor in the questioning. There was a lot of doomsday in a lot of what the lawmakers were saying. They were projecting concerns about what artificial intelligence could do and the havoc it could wreak on the economy and on society. But they were looking to Sam Altman for answers and for guidance, and that was very different from what I've seen in hearings past, and surprising. And also, the questions weren't, dare I say, terrible. They were pretty good. Yeah, they were not terribly deep, but they were not terrible. You know, nobody was asking the CEO of Google how an iPhone works. Yeah, so I'm really curious about why the tone might have been so different. And I think one thing is that we're early enough that there has not been a huge calamity yet. There is not a smoking crater that everyone is mad about. But also, I wonder if the fact that OpenAI started out as a little bit more of a research lab than a big consumer internet company might help. And I also wonder, did Sam and the other folks at OpenAI spend a lot of time leading up to this trying to get ahead of the story? Yes, that last bit is, I think, key. Sam Altman has been in Washington multiple times. Just this week, on Monday, he was having dinner with 60 lawmakers on the House side. He gave a presentation. People left the meeting and told me they were super impressed with him and how he explained how the technology works and how he seemed so cooperative. So that was just one example of many of his meetings. He's given personal demonstrations to many members of Congress and their staff.
So he's accessible, which is very different from the early years of the big tech titans, who would come to Washington only under duress to testify. They never wanted to engage with Washington to talk about how their technology worked, and they were very defiant and defensive that their technology could not be harmful at all. And I think Sam Altman has a very different sort of view. He has a very balanced approach to how what he's making can be harmful but also has lots of opportunities for good. Yeah, that was really one of the most remarkable pieces of this hearing to me: they really didn't shy away from talking about some of the downsides and the risks of AI. And I really wanted to unpack that with you, because I thought there were some super interesting moments. So we actually pulled some clips from the hearing and edited them down for clarity. And I thought we could just kind of listen to them and then talk about what happened. So this first clip, I think, is about one of the concerns that really drew the most attention during this hearing, which is the kind of medium- and long-term risks of AI and how it could impact not just jobs but humanity as a whole. This came from a moment where Senator Richard Blumenthal of Connecticut was asking all of the witnesses what their biggest nightmare is with AI, and they sort of went one by one. And, you know, Sam Altman said something about jobs. And then at the end of Gary Marcus's answer, he pointed out that Sam Altman had kind of skirted the question. And last, I don't know if I'm allowed to do this, but I will note that Sam's worst fear, I do not think, is employment, and he never told us what his worst fear actually is, and I think it's germane to find out. Thank you. I'm going to ask Mr. Altman if he cares to respond. Yeah. Look, we have tried to be very clear about the magnitude of the risks here. I think jobs and employment and what we're all going to do with our time really matters. I agree that when we get to very powerful systems, the landscape will change. I think I'm just more optimistic that we are incredibly creative and we find new things to do with better tools, and that will keep happening. My worst fears are that we, the field, the technology, the industry cause significant harm to the world. I think if this technology goes wrong, it can go quite wrong. And we want to be vocal about that. We want to work with the government to prevent that from happening, but we try to be very clear-eyed about what the downside case is and the work that we have to do to mitigate that. So this clip really speaks to one of the central tensions, I feel, in the conversation about AI as a whole, which is: should lawmakers be focused on the near-term risks of AI that we can see now, things like disinformation, propaganda, bias, people churning out news stories using ChatGPT, or students using it to cheat on homework, or other misuses of this technology? And then there are people who think, well, actually, the bigger risks, and the ones we should be regulating to try to prevent, are the long-term risks: the danger that AI could get so powerful that it could actually destroy or disempower humanity as a whole. So when you were listening to this hearing, how did you hear the lawmakers and the witnesses grappling with this tension between near-term risks and long-term risks? Yeah, I'd say that all of it was discussed.
And some specific near-term risks, like copyright infringement, as you said, and definitely election interference, on a scale that's so much greater than we saw with social media. Those were things that were discussed in detail, and there's great concern about them. The problem with the long-term risks is, aside from using the kinds of words that Sam Altman did, which were kind of vague, like, this could be really terrible for humanity, it's hard to be specific about this very hypothetical, cataclysmic result of this technology. So I truly believe that lawmakers have a hard time grappling with the long-term risks beyond what they read in sci-fi. Not to say that some of that stuff is not legitimately concerning. But for them, when it comes to policymaking, it's harder to grasp onto those bigger, more ambiguous and broad concerns that aren't specific. Well, there's this term that people are starting to ask me about, and I wonder if you've heard this one. Have you heard of the term p(doom)? Yes. Okay. p(doom). So p(doom) is what the AI safety people call the probability that AI will cause doom, right? A superhuman intelligence sort of subjugating humanity. And in the AI research community, there are people who think that probability is like 10 percent or higher. And so I bring it up because if you're Congress, you might want to have a personal p(doom), and you might want to have a sense of, like, if you think the p(doom) is 10 or 20 percent, then maybe you do pay more attention to that than to, how is this thing going to affect the next election? Right. But also maybe not. I don't know. It's hard. I was having lunch with some AI safety folks the other day, and everyone was going around the table and saying, like, what's your p(doom)? What's your p(doom)? It's like Silicon Valley's latest parlor game, I know. So Congress was evaluating its own p(doom). What other concerns did the senators at this hearing bring up about AI? Yeah. I mean, they did talk about these specific things related to how synthetic media could be used to create fake videos, fake, you know, audio clips. And that is a big front-and-center concern in Washington, and actually across the world right now. It's clear that everybody who has tried ChatGPT or other chatbots or other AI tools, such as DALL-E, can see the potential for massive fake misinformation everywhere, just a flood that we haven't seen yet. So that was discussed quite a bit. There was a concern from Tennessee Republican Marsha Blackburn about how music clips are being used. She's from Tennessee, so she represents Nashville. Musicians are just super upset about how their music is being used over and over again. We see this also with Getty Images, which is actually suing over how its images are being used in ways that it says aren't fair use. So there's a lot of discussion around whether copyright needs to be reformed. There was concern generally about who should be held liable if something is said that's false about you. And there were some proposals that were discussed on what regulation could look like. Yeah, let's talk about those proposals, because one of the things that was brought up during this hearing was Section 230, which is the law that shields tech platforms from legal liability for user-generated content. Like, you can't get sued if someone posts a nasty comment on your blog.
The question is whether Section 230 should apply to generative AI programs like ChatGPT. Should OpenAI be liable if ChatGPT, for example, tells someone to do something really harmful and they go out and do it? So this next clip is from Senator Dick Durbin asking Sam Altman how we should think about Section 230 in relation to generative AI. And here's what he said. I don't know yet exactly what the right answer here is. I'd love to collaborate with you to figure it out. I do think for a very new technology, we need a new framework. Certainly, companies like ours bear a lot of responsibility for the tools that we put out in the world, but tool users do as well, and also people that will build on top of it, between them and the end consumer. And how we want to come up with a liability framework there is a super important question, and we'd love to work together. So it sounds like what Sam Altman is saying is, no, we don't want Section 230 to apply to generative AI, because in some sense it's not exactly like a social platform. It's more like a tool. You know, we don't hold Microsoft liable if someone writes something crazy or libelous in a Microsoft Word document. That's just not how our legal framework is set up. Is that how you interpret it? Well, actually, I interpreted this as him saying that Section 230 was meant for platforms to be shielded from lawsuits over things that happen on their platform that they don't create, that they don't intend to create. So in a way, he's inviting more scrutiny and potentially litigation. So that's how I read it. And in fact, we're hearing more people in Washington say that Section 230 should not apply to AI. Lina Khan, the chair of the FTC, has said, we're looking really hard at AI when it comes to fraud and consumer protection, and we don't think that AI is protected by Section 230. Yeah. And I mean, this makes sense to me, because if you're on a social platform and a person wants to, like, exercise their speech rights and defame someone, I do think the liability should fall more on that person than on the person who created a text box, right? There are some nuances there that we could get into, but that's at least basically how I feel. If, on the other hand, to use your example, Kevin, the Microsoft example, it's like, well, if Microsoft writes half the document for you, and the document that Microsoft's technology wrote defames me, then it does seem like Microsoft might bear some responsibility for that. And in fact, we've started to see some legal cases about this. There is one in Australia, where a politician has threatened to sue because ChatGPT misrepresented something about his career. So we are going to see these things get tested, and I'm interested to see how it plays out. Yeah. So the last clip I want to play is about this question of, like, well, yes, we're all concerned about AI, and we can sort of agree and disagree about what our biggest concerns are. But I really heard from the senators at this hearing a hunger for ideas, for concrete proposals, for new rules that could help mitigate some of the risks of AI. So this clip is from Senator John Kennedy, who's a Republican from Louisiana. And he's essentially asking the witnesses, what should we do? Give us some ideas. Please tell me in plain English two or three reforms, regulations, if any, that you would implement if you were queen or king for a day. Mr. Altman, here's your shot. Thank you, Senator.
Number one, I would form a new agency that licenses any effort above a certain scale of capabilities, and can take that license away and ensure compliance with safety standards. Number two, I would create a set of safety standards focused on dangerous capability evaluations. One example that we've used in the past is looking to see if a model can self-replicate and self-exfiltrate into the wild. We can give your office a long other list of the things that we think are important there. And then third, I would require independent audits. So not just from the company or the agency, but experts who can say the model is or isn't in compliance with these stated safety thresholds and these percentages of performance on question X or Y. Can you send me that information? We will do that. Would you be qualified, if we promulgated those rules, to administer those rules? I love my current job. Was he asking him if he wants to lead a federal agency? I think so. I think that's a job that this guy is not interested in. Completely. I love everything about that clip. So I wanted to play this clip because this idea of a licensing scheme for AI creators is very controversial in the AI industry. Well, because there's this idea out there, and you saw a lot of this from other AI companies reacting to this hearing, that OpenAI, by advocating for this licensing law, is actually just trying to entrench itself. That one effect of requiring every person or every company who wants to build a large language model above a certain scale to register for a license is that you don't have as much competition if you're OpenAI. Because they're going to get the license. But some college student or hacker in his room who's building a large language model is not going to have the lawyers and the compliance departments and the people needed to secure all the necessary licenses. So there could be a regulatory entrenchment of the big AI players. Do you buy that, or is that a concern that some people in Congress have? I don't think it's a concern people in Congress have. I think something like that is a model that they know really well when it comes to licensing and testing. I mean, it sounds very FDA, consumer product safety, modeled after those kinds of agencies. And so it's familiar. But you bring up a super important point. Even though Sam Altman in this very same hearing said that he is concerned that there won't be enough competition, this is one of those things that is loud and clear to those who understand what the industry is like, but that the public does not see: this is the kind of thing that only a big company like Microsoft or Google can afford to do, because they have massive legal departments that can spend the money and put the resources into requesting licensing and making sure that all their products meet safety standards, et cetera. Sure. But just to take the flip side of that: if you're building a supercomputer that could subjugate all of humanity, we might want you to have a license for that. We might at least want to know that you're working on that, you know? And so I hope that if such a licensing regime shapes up, it's sort of in that spirit, right? Of, like, if you're building one of the world's most powerful computers, it feels like somebody in the government should know that. There are security risks associated with it, among others. Yeah. So this idea of licensing capture, or regulatory capture, I know is being discussed in circles in Silicon Valley.
I got a lot of messages during this hearing and immediately afterwards about it. People are really worried that by going to Washington, Sam Altman is basically trying to convince the government that OpenAI is the good, above-board, regulation-compliant AI creator, but basically everyone else is suspect. I think that that's a really important point. And that's the kind of detail that, unless you're pretty astute on the topic, you wouldn't know to ask about. And so that kind of betrays the knowledge gap between Washington and Silicon Valley on that piece. But also, I do want to note that there is a really important point to make about how Washington is so enamored right now with Sam Altman, because of these things that we talked about: he's making all the trips there, he's having dinners, he's being very open and spending time with people and doing these demos. And it's astounding to me, not surprising, but astounding to me, that in Washington, members of Congress are so easily wooed by this. And this happens all the time. If you're a powerful corporate executive, there's something about that position that appeals to people in Washington. It's just another position of power, one that comes not from Washington but from being a powerful person in business. And so I think that we should watch Sam closely, and he may not be getting the skepticism he deserves. I was actually surprised, listening to the hearing, that Cory Booker of New Jersey kept calling him Sam. Like, that was weird. I thought that was weird. A little too familiar. And that's a little bit Cory Booker-ish. But it did make me think, instead of, oh wow, this is so friendly, it made me think, oh wow, this is such a friendly tenor, and that could have some downsides long term. I mean, I do think it is a masterstroke on the part of Altman and OpenAI to run to Congress and say, we're building something that could be very dangerous, please regulate us before we get out of control, because it speaks so perfectly to the moment we were just in. As you pointed out earlier, Cecilia, the social media companies didn't do that, and Congress is still so mad at them. And so now you have this young man who comes along and says, we're determined to do it the right way, go ahead and pass any regulation. And, you know, I don't know if they're this cynical, but I'm certainly cynical enough to say that one reason why you can say that is because nothing might happen, right? Like, the whole model for the past five years is that they didn't pass a single bill. So if you've gone to them and you've begged for regulation and they don't deliver, who's really the bad guy? There is a lot of sophisticated diplomacy happening right here. I think you're so spot on, Casey. I like that take. One might call it cynical. You just did. And let's also be clear: Sam Altman is not tapping the brakes at all when it comes to his technology development. So he's saying, look, you know, I'm really concerned, I want to be the voice of sobriety on this technology, I'm different from the other technologists because I'm not saying everything is perfect. But he is not slowing down. And I think that's something that was lost in the hearing. I think that's right.
And not only are they not slowing down, but I think there's a good case to be made that OpenAI really did kick this off, first with the launch of DALL-E, the text-to-image generator, and then with ChatGPT, which of course kicked it into overdrive. But if you're looking for the list of companies that really changed the conversation about AI, there's really only one name at the top. Right. So, Cecilia, you've been covering tech and tech policy and regulation for a long time. Long time. And when it comes to this topic, I default to maybe skepticism, or maybe even cynicism, because we've seen so many hearings. We've heard from so many grandstanding senators about the newfangled technology that they're trying to get their minds wrapped around. They make all of these promises, and then nothing happens. There are no laws passed. There are no bills advanced. It is a total exercise in, you know, hot air and futility. Which was the original title of Hard Fork, but we changed it. So, after this hearing, are you feeling optimistic or pessimistic or something else about the likelihood that Congress will actually regulate AI in the short term? I don't know if I'm optimistic, but I do think things are a little different. I think that partly things are different because Congress feels ashamed for those very things that you just said, Kevin, that they haven't done anything. And they understand that there's risk in spending so much time and energy on being so theatrical about the doomsday of technology, as they were with social media, and then not doing anything. So they do want to do something. But making regulation is hard. It's controversial. The companies have not weighed in heavily in a negative way. We got a little glimpse of that when Christina Montgomery, the chief privacy and trust officer at IBM, differed in her opinion on what should be done. She differed with Sam Altman in that she said, I actually don't think there should be an independent agency. She said, I think the existing laws are enough. So she was arguing for a light approach to regulation. And when I heard her talk, I thought, okay, that's actually what's really going on. What's really going on is that IBM and a bunch of other companies are going to swoop in and say, actually, yeah, regulate us, but in the most light-touch way. Which is sort of what happened with social media, right? I mean, Facebook and these other companies did ask for regulation. But when it came to the actual bills that were proposed, or the agencies that were trying to rein them in, they were lobbying furiously to stop it. Yes. Yeah. So that could happen again with AI, is sort of what I'm hearing. I expect it. I expect it. And again, I hate to sound so cynical, but yeah, I've been covering this for a while. And I think there'll be a lot of political interest by these members of Congress in being a big voice, a loud voice, on AI and concerns about AI. So that's where the political theater comes in. And I think you can expect a lot more of that. Yeah. I mean, one of my feelings after listening to this hearing, and I confess, I only watched about half of it. Oh, Kevin. Kevin, can you please tell me you listened to the last half? No, I listened to the first half. You were name-checked. No. In the hearing. They mentioned you. They said the New York Times writer who used a chatbot was told to get a divorce. Oh, my God. Well, my congressional debut. There we go. Not what I thought I would be making the floor of Congress for.
But what did you think you'd be making the floor of Congress for? I don't know. Just maybe getting a Presidential Medal of Freedom for your journalism. Yes. The jokes on the podcast, they're so good. We must honor this man. I mean, one of my thoughts after listening to part of this hearing was just that I feel there's so much energy and excitement around doing something about AI. But, A, they're not really clear on what the something is, and, B, they're not actually really clear on what the AI is. And how fast it's moving makes the regulation of AI a really challenging target. I mean, the rules that you write today are going to be obsolete in two or three years, when all of the technology has changed and advanced. So I just think it's a really challenging spot for Congress, because clearly they want to do something, but it doesn't seem to me like they actually understand what the underlying issues are, or like it's even maybe possible to regulate something that is changing so quickly. Am I reading that right, or how do you feel about that? I think that's absolutely right. And I think you're seeing a faster uptake of interest in the industry and in regulating the industry. But the industry is moving faster than any other technology that I've seen. So that's what Congress is up against. And there are plenty of examples of bad regulations that have been created, and regulations that get outdated very quickly. So that's the challenge. The education gap, the knowledge gap, between members of Congress and their staff and technologists is still pretty wide. It's getting a little bit better, but it just has to be so turbocharged to catch up with what's happening right now in Silicon Valley. And finally, this is the case for an agency, right? An agency is set up in a way that it can respond faster to things. I think Senator Michael Bennet from Colorado said recently that you wouldn't want Congress to have to pass a law to approve every new drug, right? So instead, we have the FDA. And he has a bill coming out that has some AI-related stuff in it. But it is pro-agency for this reason: you want an agency with subject experts making decisions a little bit more on the fly, not needing a literal act of Congress to do anything at all. But I do think there is a case that Congress should understand this technology and know at least the basics. Cecilia, if you would just be willing to tell all of your sources on Capitol Hill to listen to the Hard Fork podcast. They already are. So, Cecilia, thank you so much for coming on. Thank you so much. Fun. Thank you, guys. So, Casey, you had a very interesting experience recently of going on a different podcast than Hard Fork. I did. And not only a different podcast, but I would say one of the greatest podcasts of all time. Yeah, it was a little like, you know, you're hanging out on the lot at, like, Warner Brothers, and Martin Scorsese taps you on the shoulder and is like, hey, you want to be in a movie? I feel like that's what happened to you. Yeah. So we're talking, of course, about This American Life, which legitimately has been my favorite podcast since I was in college, and recently I had the opportunity to work with them on a story that touches on a lot of the themes that we talk about here on Hard Fork. So this episode, I listened to it in the car after it came out. It was very, very fun and entertaining and informative.
And today on the show, we just decided we're going to play it for you, because it is a true labor of love, and I think it turned out really well, and I wanted all of our listeners to be able to hear it. So let's listen to the story, and then afterwards, let's catch up and sort of pull it into the present and talk about what's happening at Twitter now and kind of where this whole field may be headed. There's been one person in particular at Twitter that Casey's been wanting to talk to, a very senior employee at the company who, while just doing his job, ended up having to take on two of the most powerful people on the internet and in the world. Those two people: Elon Musk, and the former president of the United States, Donald Trump. Casey wanted to hear all about that, and also what it was like for the guy, what he was thinking, what he was doing, once Elon took over and the place started taking on water. Here's Casey Newton. Yoel Roth did a lot of jobs at Twitter over the years, but it was always the same kind of job. He was in the content moderation business, one of those people who decides which of your posts can stay up on the internet and which ones need to come down. And he got his first glimpse of what life as a content moderator would be like while he was in college, on a date. He's gay. So am I. I went out for drinks with somebody without knowing where he worked, and he volunteered that he actually worked for the parent company of the website Manhunt, which was one of the kind of early gay websites that was very specifically sexually focused. And even in these early days of the web, there was already a team of people who were deciding what you could and couldn't post there. They had a set of kind of convoluted rules about what types of nudity were allowed to be shown in which places. So nudity, fine, but not all nudity. So there were specifics. And he described to me a system of color-coding images, red, yellow, green, and then a team of people who were responsible for making those designations. And I'll never forget, he said, the people doing these reviews are almost entirely straight women. And I was just floored in that moment, thinking, God, there's a team of heterosexual women who have to look at the depraved things that gay men are posting on the internet. I'm so sorry. And right, the senior hole-pic specialist at Manhunt was some poor woman. That's not an exaggeration. Yeah. We hope she's doing okay. Are you out there? Go live the good life. I'm so sorry. I'm sorry for what you saw. After the date, Yoel had one thought. I was like, aha, that's my dissertation topic. Yoel was in grad school. He got his PhD, and soon after, a job at Twitter. They gave him a small desk. This was 2015. The office's most striking feature was probably a giant, life-size cardboard cutout of Justin Bieber that sat directly behind my desk. Justin Bieber obviously being a major figure in early Twitter. Maybe the most popular user, at least for some period of time. Yes. There were rumors that Twitter had entire servers just dedicated to serving Justin Bieber-related traffic. Besides Bieber, what Twitter was really known for back then was its trolls. The site was plagued by users harassing other users, particularly women. That year, I co-reported a story about how the site's then-CEO, Dick Costolo, wrote a memo saying, quote, we suck at dealing with abuse and trolls on the platform, and we've sucked at it for years. That was the backdrop for Yoel's new job.
As an intern at Twitter the previous year, he'd spent part of his time moderating content. He'd seen this video of a dog getting abused. He removed it from the site, but for years it haunted him. It was never even like the specific image. I couldn't tell you what the dog looked like or what the video was. I just remember its existence, and I remember that feeling of seeing it, and then of clicking, like, I think the button said no. More than anyone ever talks about, it's this mostly invisible job of content moderation that makes Twitter usable for the average person. It's what makes every forum on the internet usable at all. And Yoel was good at the job. He got promotion after promotion in his department, what Twitter and a lot of other tech companies now call trust and safety. It's a hard job, and it just kept getting more complicated. The way Yoel tells it, there was a wild new case to examine almost every day: foreign governments impersonating their enemies, real people organizing harassment campaigns, impossible debates over what should count as hate speech, and regular meetings over whether to put labels on tweets that didn't quite violate the company's rules but would benefit from more context, like about COVID. In 2020, the biggest case yet landed on Yoel's desk. It was a case about a user who kept causing problems. And this guy's fans were even more rabid than Justin Bieber's. It was the president of the United States, Donald Trump. This is a couple months into the pandemic. Trump had tweeted that mail-in ballots in that year's election were going to lead to widespread fraud. And just to lay my own cards on the table, I thought that was really bad, because they won't lead to widespread fraud. Anyway, Twitter's policies prohibited misleading people about the voting process the way Trump was doing. But the company had never taken action against the president's tweets before. Yoel had to decide what to do. I didn't see a basis for changing the policy, modifying it, winking at it, squinting and finding it not a violation. Like, there was no way around it. It was clearly a violation of our policy. Truthfully, there was a lot of nervousness about crossing this line for the first time, taking action on a tweet from the president of the United States. The company decided that instead of removing the president's post, it would put a label under it. A label that just said, get the facts about mail-in ballots, with a link to a page that pushed back on Trump's claims. At a certain point, when it became clear that, yes, this was going to happen, it became a question of who could push the button. At some level, we probably understand that in a moment like this, someone has to take a physical action, to type the words, get the facts about mail-in ballots, and click the button to attach the label to the post. I've talked to dozens of content moderators over the years, but I've never talked to someone who had moderated the president of the United States. When it came time to take action, only a handful of people at Twitter had the power to do it. The company had locked down access after an incident where a former contractor, on his last day working there, briefly deactivated Trump's account. Shout out to Bahtiyar Duysak, who says it was an accident. Also, Twitter had just introduced this idea of putting labels on misinformation a couple of weeks before. It was this perfect storm where it required elevated access and knowledge of this incredibly convoluted system for applying these labels.
I was the only one who knew how to do it. I got an instruction from my boss that said, all right, we're going to do this. Also, because this is how life goes, Yoel and his husband were moving houses the day all this happened. I excused myself from wrangling the dog and the movers and the relocation of stuff, and sat in the front seat of the car with my cell phone tethered to my work laptop. I was on a video call with some of the other leaders at the company who were making this decision. I remember a countdown, where I was going to push the button that would apply the label to this tweet, and at that same moment, Twitter's communications staff was going to announce the decision. It felt very important in that moment for the timing to be exactly joined up, for some reason. We counted down, I clicked the button, and then I refreshed the public view of the tweet and saw the label. The communications team said, we've got it from here. And I said, okay, I have to go back and deal with the movers now. And I hung up the call and I closed my laptop and I crossed the street back into my apartment. If they made a movie about Trump and Twitter, you can imagine how they'd shoot this scene, with the Twitter employees hunched over a console in a control room, high-fiving. But in reality, of course, it's the opposite. Most content moderators try really hard not to bring their own political beliefs into the job. In a way, the legitimacy of the whole company they work for depends on it. Shortly before that Trump tweet, Twitter had explained its reasoning for adding labels to misleading information in a blog post. Importantly, the post was signed with Yoel's name. And soon after that first label showed up on Trump's tweet, his name was everywhere. I wake up one morning, the third day that my husband and I are in our new home, to my phone exploding, because Kellyanne Conway has just talked about me on Fox News and has said that I'm responsible for the censorship of the president's account, and I'm responsible for censorship at Twitter more generally. And in that moment, everything exploded. Thank you very much. We're here today to defend free speech from one of the greatest dangers. The president held up a copy of the New York Post with me on it in the Oval Office as he announced an executive order restricting censorship by Silicon Valley companies. His name is Yoel Roth, and he's the one that said that mail-in balloting, you look, mail-in, no fraud, no fraud, really. And for weeks, discussion of me and my political opinions and my beliefs became a symbol of everything that was allegedly wrong with Silicon Valley and with the decisions that companies have made. Twitter had to hire security to protect Yoel and his husband, and it had all taken him by surprise. He'd expected the criticism, but not that he would be the target. In cases like this, people would usually come after the CEO or the company itself. But soon Yoel realized that what his harassers were doing was much more effective. If you make companies believe that their employees could be hurt for enforcing the rules, they might be more reluctant to enforce them. Twitter didn't stop, though. They kept putting labels on his tweets. And Trump, of course, lost the election, though that's probably not how he would describe what happened. And after the January 6th attack on the Capitol, he lost his Twitter account, too. Yoel did not press the button on that one, but here's a detail about that day that I love.
Yeah, there was a technical question about whether it would work, or whether Twitter would crash. Can you actually ban Donald Trump's account? It turns out banning somebody with that many followers is actually technically very complicated. When you suspend somebody, Twitter's systems have to figure out what to do with all of the people who followed them. In other words, if you follow Trump, Twitter has to remove him from your list of people you follow, which sounds very straightforward, but when you have to do that tens of millions of times immediately, we had to think about, like, if we push this button, is the site going to go down? As it turned out, the site stayed up, and Trump was banned, for a while anyway. It was such a strange moment. With the click of a mouse, Twitter had managed to do something that Congress attempted twice and failed to do: punish Donald Trump in a way that had real and immediate consequences for him. Trump headed off to Mar-a-Lago. Yoel got promoted. He was running the whole department, and that's when another mega-rich guy started to complain about all the rules on Twitter. The guy was Elon Musk. In April 2022, Musk announced he'd acquired a big stake in the company. A few days after that, he announced his intention to buy it outright. As soon as the news broke, Yoel's employees started asking what it meant for them. Elon had been tweeting a lot about free speech and his feeling that Twitter didn't have enough of it. He posted a photo of six people in dark robes with the caption, shadow ban council reviewing tweet, and, Truth Social exists because Twitter censored free speech. Also stuff like, next I'm going to buy Coca-Cola and put the cocaine back in, and, let's make Twitter maximum fun. Some employees working in trust and safety worried that maximum fun might mean Elon would dismantle their whole operation. Yoel was willing to give him a chance, though. What I told them, and what I sincerely believed, was, it's too soon to tell. People are frequently caricatured and villainized in the media, certainly I was, and that's not a reflection of who they actually are, and so don't prejudge. At the same time, Yoel knew that his more concerned employees might be right, that he was aboard a ship that might be about to sink. He knew he needed to be alert for the signs. His solution was to make a list, to write down the red lines that he would not cross, no matter what. Most days, his job was to enforce other people's rules, but with Elon coming in, he wanted to write down some rules for himself. You have to have written policies and procedures so that when the moment comes to make that decision, you just follow the procedure that you had laid out before. Your whole job was about trying to not make decisions out of impulse and emotion, but by following a playbook. That meant that before Elon took over, you actually had to give yourself a playbook. That's right. And so on a notepad by his desk at his house, he wrote down his red lines: I will not break the law. I will not lie for him. I will not undermine the integrity of an election. By the way, if you ever find yourself making a list like this, your job is insane. Then Yoel wrote down one more rule. This was like a big one. I will not take arbitrary or unilateral content moderation action. So if Elon came up to you and said, ban this person, you weren't going to do that. That was the limit. Did people on your team show you the lists that they were making, too? Or talk to you about them? They did.
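For the technically curious, here is a minimal sketch of the suspension "fan-out" problem Yoel describes above, why banning one account can mean tens of millions of database writes. The class and method names here are hypothetical, and the in-memory store is a toy; a real system at Twitter's scale would shard this graph and push the removals through rate-limited asynchronous job queues.

```python
# Toy model of the suspension fan-out problem. Hypothetical names throughout;
# this only illustrates why one suspension touches every follower edge.
from collections import defaultdict

class ToyFollowerGraph:
    def __init__(self):
        self.following = defaultdict(set)  # user -> accounts they follow
        self.followers = defaultdict(set)  # user -> accounts following them

    def follow(self, follower: str, followee: str) -> None:
        # A follow creates two edges that must be kept in sync.
        self.following[follower].add(followee)
        self.followers[followee].add(follower)

    def suspend(self, user: str, batch_size: int = 10_000) -> int:
        """Remove `user` from every follower's following list, in batches.

        With tens of millions of followers, this is tens of millions of
        writes; doing them all synchronously at once is what raises the
        "is the site going to go down?" question. A production system
        would enqueue each batch as a background job instead.
        """
        fans = list(self.followers[user])
        for start in range(0, len(fans), batch_size):
            for fan in fans[start:start + batch_size]:
                self.following[fan].discard(user)
        self.followers[user].clear()
        return len(fans)  # number of follower edges that had to be touched

# Usage: one edge here, but imagine tens of millions of them.
graph = ToyFollowerGraph()
graph.follow("you", "@potus")
print(graph.suspend("@potus"))  # -> 1
```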
Yoel's list of rules got its first test pretty quickly, on the day Elon officially took over Twitter. It was the end of October. Lawyers were finalizing paperwork, and Twitter's staff was attempting to enjoy the annual company Halloween party. The scene was surreal. Were you there for the Halloween party? I was. Were you dressed up? I was not. Lots of people did dress up, though. Employees brought their kids. There were balloons and face painting. I've talked to so many people who went to this party, and every one of them has added some bizarre new detail. Some people saw a guy dressed as a scarecrow walking around with what appeared to be a handler. They wondered if it was Musk. It turned out to be a hired performer. As the Halloween party started, I was sitting in a conference room doing some work, and we start hearing rumors that not only has the deal closed, but also the company's executives have been fired. At first, it's unconfirmed. I get texts from a couple of reporters who ask me, is it true that Vijaya has been fired? I said, no, I just saw her. She's still online in the company's Slack and Gmail. Of course not. Your sources are lying to you. And then it was true. Such an important lesson. Always trust the reporters. Pretty soon afterward, Yoel gets summoned over to the part of headquarters where Elon and his team had set up shop. He was nervous. And I thought, okay, I'm about to be fired. So I walk past a number of my employees, and I don't let on that any of this is happening, because I don't want to panic them, because they're there with their kids. And so I smile and make jokes about Halloween costumes and walk over to this other part of the office, where somebody who I gather works for Elon Musk in some capacity, but they don't introduce themselves, they just say, how do I get access to Twitter's internal content moderation systems? And I kind of pause and point-blank say, you don't. That's not going to happen. I explain that Twitter is operating under an FTC consent decree, that access to internal systems is regarded as highly sensitive, and that there are both legal and policy reasons why we simply couldn't grant access to somebody. Elon's aide explains that they're worried about an insider threat, someone who might try to sabotage the site on their way out. Yoel tells him, sure, I can help with that. And he explains some steps they can take to protect the company, and to Yoel's surprise, the aide says, okay, you're going to tell that to Elon. And then he leaves and comes back with Elon Musk. Who at this point I've, like, seen on the internet, but I had not met in person. And so Elon sits down and asks, well, let me see our tools. Our tools. He owns the company at this point. And so I show him his own account in Twitter's set of enforcement tools, and I explain to him what the basic capabilities are. And then I make a recommendation to him of what I think Twitter should do to prevent insider misuse of tools during the corporate transition. Yoel also had recommendations about the midterms and the upcoming presidential election in Brazil. And as I start to explain some of the rationale related to the Brazilian election, Elon interrupts me and says, yes, Brazil, Bolsonaro and Lula, very dangerous, we need to protect that. And I was floored. I came into that conversation expecting him to fire me.
And instead, he jumps ahead of me to say that he is sensitive to the risks of offline violence in the context of the Brazilian election and wants to make sure that we don't interrupt Twitter's content moderation capabilities. It was like a dream come true. You're thinking, maybe I'm actually aligned with this person. Yes. And so Yoel stayed. He was surprised, in a good way. On Twitter, Elon talked about the company as if it should barely have any rules at all. But in that moment, one on one, Yoel thought he might turn out to be more reasonable. Maybe spending some time inside the company would show Elon the real value of those rules, which is that without them, you lose your users, and you lose your advertisers. And Yoel felt like Elon could be sensible. One of his first requests was to restore the account of the Babylon Bee, a right-wing satire site. But Yoel explained how it had broken Twitter's rules, and Elon backed off. I found him to be funny. I found him to be reasonable. I found that he responded well to having evidence-backed recommendations put in front of him. And I, for a moment, felt that it might be possible for Twitter's trust and safety work to not just continue, but also to get better. After that, things began to move really quickly. About a week later, Elon laid off half the staff. Suddenly, Yoel was one of the highest-ranking employees from the old Twitter who was still working at the new one. And it seemed like Elon liked him. After some trolls went after Yoel for some of his old tweets, Elon tweeted that he supported him. The US midterm elections took place, mostly without incident. Same for the election in Brazil. Elon kept pushing his teams to move faster, even as he was laying them off. At first, Yoel said that Twitter still had enough content moderators to keep the site safe. But the cuts kept coming, and the work got harder and harder. Soon, Elon unveiled his first big idea for making lots of money and recouping the $44 billion he had spent to buy the company. Yoel and his team thought it was insane. The plan? To let anyone get a blue verified badge for their profile for $8 a month. The company called it Twitter Blue. The risks seemed obvious: people would just make new accounts to impersonate brands and politicians and other celebrities. Yoel and his team wrote a seven-page document outlining the risks. But the badges went on sale anyway. And almost immediately, impersonators started buying them and wreaking havoc. In maybe the most famous case, someone impersonated the drugmaker Eli Lilly and said that insulin would now be free. The real Eli Lilly's stock price dropped more than 5 percent. It was a vivid illustration of why companies like Twitter make rules in the first place. And impersonators were suddenly all over the site. And so, okay, we have to ban them. But somebody has to review them. We can't just ban everyone. And so you do that with content moderators. And we had instructions to fire more of our contract content moderation staff to cut costs. All this seems really self-evident to me, and I think it would have seemed self-evident even before you launched this. What was Elon's take on this? How did he respond to you raising these concerns? Do it anyway. And that was a breaking point for me. We reached out to Twitter for comment but didn't hear back. Reporters have sometimes gotten automated poop emojis, but I didn't even get that. Yoel had spent a long time gaming out scenarios for what might make him leave Twitter. He made that whole list.
He wouldn't break the law for Elon. He wouldn't undermine an election. But ultimately, what got to him was something he didn't foresee. It wasn't on the list. It was something more personal. He knew this bizarre plan wouldn't just make people lose trust in Twitter. They would lose trust in him. Behind Elon Musk, I was the most prominent representative of the company, period. And I became aware that when Twitter Blue turned into the predictable hot mess that it was, people would ask, why didn't the trust and safety team see this coming? Yoel, why are you so bad at your job? The day after the launch, Yoel and Elon got on the phone. Elon thought the problem could be fixed if Apple would just hand over all the credit card information of the people doing the impersonations. Yoel had to explain that Apple would never do that. He also asked Elon to slow down the rollout of Blue so that they would have time to hire and train more content moderators to look for impersonators. Elon didn't understand why that would take longer than a day. I got off that phone call and thought, I can't solve this problem. I will spend the rest of my time at this company trying to bail out a ship that might sink more slowly because I'm there bailing it out, but I don't want to spend the rest of my life bailing out a sinking ship. Yoel had made up his mind to leave. He called a couple of his employees to let them know. I knew that that day, I did not want to be walked out of Twitter after almost eight years by corporate security. I wanted to leave on my own terms. There was an all-hands going on at the time, Elon's first time addressing the company in person. During that all-hands meeting, I hit send on my resignation email, put my laptop in my bag, and walked out of the building for the last time. Did you purposefully send it when you knew he was on stage? Yes, absolutely. I knew that it would take some time for the HR team to see it and process it, for that to get to him, for him to react to it. In that time, I knew I wanted to be back at home and not be in the office. Was it a long email? It was one sentence: I am no longer able to perform the responsibilities of my job, and I resign as of today at 5 p.m. I remember feeling two things. On one hand, I felt relieved. And then I also just felt deeply sad. I just wanted to get home. So I left Twitter's garage and was driving. I was about halfway across the Bay Bridge when I think Zoe broke the news that I'd left Twitter, and my phone exploded. Wait, you didn't even get across the bridge before Zoe broke the news? God, I love her. Zoe is my coworker. I'm immensely proud of her, even if she did kind of mess up Yoel's plans. The car Yoel was driving that day was a Tesla, by the way. He was leasing it. He had been trying to return it but couldn't get anyone to respond to him. Maybe they'd all been drafted to work at Twitter. Yoel lay low for a few days. He spent some time writing and published an op-ed in the New York Times. It explained, in a very dry and principled way, why he left. That's when some rando account reshared something Yoel had tweeted back in 2010. Around that time, he'd been working on his dissertation, which called for tech companies to do more to protect minors on gay hookup sites like Grindr. But Elon replied with a tweet, quote, this explains a lot. Then he linked to Yoel's dissertation, quote, looks like Yoel is arguing in favor of children being able to access adult internet services, in his PhD thesis.
Not true, but Yoel's phone exploded with abusive messages. It made the backlash to labeling a Trump tweet look minor by comparison. Hundreds of messages per hour. Homophobic, anti-Semitic, and also violent. Just deeply, endlessly violent. And he only had to tweet once. He didn't even have to say directly that Yoel was a pedophile. He just had to wink and nod in that direction, and people took his lead. When Yoel had first used the internet, it felt like a small self-contained space, separate from what we used to call real life. But by the time Yoel quit Twitter, the distinction between online and off had collapsed. And it had collapsed in large part because of the company he worked at. Twitter. The site brought together so many of the world's most influential people and then pitted them against each other in these all-consuming daily battles. And the anger coming out of that could drive people to do things. Violent things. Pretty soon, Yoel and his husband were overwhelmed with death threats. My husband turned to me one day and said, I've seen you through a lot of being targeted and being harassed. I've never seen you look scared before. And that was the moment that we decided to leave our home. And so once again, they moved. I met with Yoel at the temporary house where he and his husband are staying while they look for a new place. After all of this, I thought you might want a different kind of job. I would have wanted a different kind of job. The internet had almost killed him. Threatened to, anyway. But still, somehow, he's optimistic about what the internet could be. In a way you almost never hear anymore, I love the internet. I really do. I think the internet's power to bring people together and help folks all over the world find connections that matter to them is magical and is one of humanity's greatest achievements. I also think the internet can be incredibly dangerous and scary. And the work of trust and safety is trying to push that back a little bit. And to make the internet more of what it can be and less of the dangers of what it could turn into. Yoel's idealism about the internet feels radical, given how destabilizing it's been, how destabilizing Twitter has been. But I know what he means. Back when he was a teenager, the internet gave Yoel a place to discover other gay people, the chance to talk to everyone in the world instantaneously. It gave him a career. It gave me all those things too. I remember life before the internet. It was a less frantic time, but it was also a lonelier one. Here's how Twitter's doing since Yoel left. Hate speech is on the rise. Advertisers have fled. Banks that funded Musk's takeover have marked their investments down by more than half. Musk himself has warned repeatedly that the site might go bankrupt. I kind of hope it does. Because what's happening at Twitter right now is teaching us a lesson it's taken us way too long to learn. People like Yoel, they're not the enemies of free speech online. They're the ones who make it possible. If you get any value out of social media at all, it's in part because of them. They clean the place up. Make it feel good to be there. They pull us back when we go too far. And they do censor us. And like, of course we hate them for it. We convince ourselves we'd do a much better job if it were us. That's what Elon thought. Look what happened. Nobody likes the guy enforcing the rules. But watch Twitter sink into the ocean, and you can't help but notice how much you miss that guy when he's gone. We'll be right back.
So Casey, congratulations on this story. Thank you. Loved it as much the second time as I did the first. And I'll listen to it again. We are recording this before they insert that story, you know that. Kevin Roose lied; you don't know how it'll air. I was up last night in my hotel room listening to your This American Life episode. Well, thank you then. I wasn't actually. Okay. I was watching Diners, Drive-Ins and Dives. See, I knew it. I knew it. Look, Guy Fieri's hard to compete with. You've got the hair, kind of. I do. It's Fieri-esque. So, Casey, let's bring this story into the present. Have you talked to Yoel Roth since the story ran, and what's he up to now? Well, we've messaged a bunch, and he is doing some stuff in the academic realm. So, Yoel is currently a technology policy fellow at the University of California at Berkeley and is a non-resident scholar at the Carnegie Endowment for International Peace. So, I think it's safe to say his interests are still very much focused on trust and safety, and he's going to continue to be a player in that world. And let's also update our Twitter conversation, because a lot has been happening at Twitter since you started working on this story. So, Twitter announced a new CEO, Linda Yaccarino, who was previously the advertising chief at NBC Universal. Elon Musk announced her appointment and also announced that he will be the CTO, the chief technology officer, of Twitter going forward. What did you make of this announcement? Well, I think the most important thing to remember about this is that the title now held by Linda Yaccarino was previously held by Elon Musk's dog, Floki. During a recent interview, when an interviewer was trying to press Elon on being the CEO, he said, oh, I'm not CEO, my dog is CEO, and nothing gets past him. Well, those are big shoes to fill, because Floki was an operational genius. Yeah. Well, I think the company's actual revenues might disagree with that statement. But, you know, the point is, when Elon came in and said, I've hired a new CEO, I was just very skeptical for a few reasons. One is, like, this is not a job that he has previously placed a lot of importance on. Two, he said he wants to continue to manage the product. And so, what he's essentially done is bring in someone to run the ad business, which he spent the past six months undermining. Right. That was the confusing part to me: he has said before that he hates advertising, that he doesn't want Twitter to be an ad-based business, that he wants to pivot to subscriptions and other revenue streams. And then he chooses, as his new CEO, someone who is steeped in the world of advertising. What did you make of that? Well, I think that tells you how well Elon Musk's subscription business is going for him. Right. If that thing were taking off, I don't think that he would feel the need to bring in an ads chief, but it's not working. And so, he's turning back to that. And, you know, I like to remind people that before he took over, Twitter was a $5 billion business, and the vast majority of that was advertising. Some significant percentage of that has now gone away; we don't know exactly how much. And so, now he's going to try to build it back. But, wow, those relationships are going to be super tough to repair, I think. And what do we know about Linda Yaccarino? Well, she was a longtime ads chief at NBC Universal, and is well known and really well liked by advertisers.
You know, interestingly, she spent a lot of time in her previous job telling them that social media was not a safe place to advertise, and that if you really wanted your brand to be safe, you should advertise on TV. So, she will now presumably be singing a different tune at Elon Musk's Twitter. Right. And do we know anything more about how Twitter's approach to content moderation is changing, or may change, in response to this sort of collapse of its advertising business? Well, I don't know what their plans for content moderation are going forward, but a lot of people noticed recently that they were doing very innocuous searches and started to see videos of animal abuse and cruelty, which, you know, unfortunately, on any social platform, bad people will just upload that anywhere, and you have to put systems in place to catch it. The fact that Twitter either didn't have those systems or those systems started to break raises a lot of questions in people's minds about how seriously they're taking this. So, you know, Linda Yaccarino is really going to have her work cut out for her, I think, in making that platform safe for advertisers. Got it. Thank you. It's my pleasure. Please never leave me for Ira Glass again. Hard Fork is produced by Rachel Cohn and Davis Land. We're edited by Jen Poyant. This episode was fact-checked by Caitlin Love, who I met in person for the first time today. Lovely. The Queen of Facts. Yeah, she did not fact-check me during our conversation. Well, she got it right. Today's show was engineered by Alyssa Moxley, original music by Dan Powell, Elisheba Ittoop, Marion Lozano, and Rowan Niemisto. Special thanks to Paula Szuchman, Ira Glass, David Kestenbaum, Christopher Swetala, Nell Gallogly, Pui-Wing Tam, Kate LoPresti, Jeffrey Miranda, Prince Harry, Barack Obama... I just thought I should keep listing famous people, but I... Ira Glass in the credits really got me in my feelings, by the way. LeBron James, Beyonce. All of whom helped with this week's episode. You can email us at hardfork@nytimes.com.