Susan Wojcicki on the Road Ahead for YouTube
Yesterday, at the World Economic Forum’s Global Technology Governance Summit, I had the opportunity to speak to YouTube CEO Susan Wojcicki. In her first interview following YouTube’s announcement of an accountability metric called Violative View Rate (VVR), Wojcicki spoke about countering misinformation, improving regulation, and what’s next for YouTube. Below is a lightly edited transcript. You can watch the full video here: https://www.weforum.org/events/global-technology-governance-summit-2021/sessions/an-insight-an-idea-with-susan-wojcicki
Nick Thompson: Hello, I'm Nicholas Thompson. I'm the CEO of The Atlantic. It is my great pleasure to be here with Susan Wojcicki. She is the CEO of YouTube. We're going to be talking about YouTube's crazy last year, about global governance, about disinformation. We're going to be talking about how Susan spent much of the pandemic with five children at home, which is as heroic as running YouTube at this moment. So hello, Susan, how are you doing?
Susan Wojcicki: Hello, how are you? Thank you.
NT: I am doing well.
SW: Good to hear. Well, first of all, I just want to say thank you so much for having me and thank you to WEF for hosting this event. And thank you to the government of Japan for hosting these Global Technology Governance Summit conversations. I appreciate it so much. So I just want to say thanks to everyone for making it happen.
NT: It is great that we get to do this even at this crazy time. So let's jump in. I just want to ask you a little bit about YouTube in the past year, because we've all been locked at home basically watching YouTube. Like, we started watching videos on how to make hand sanitizer and then videos of how to do arts and crafts so we didn't go crazy. Tell me the most surprising thing that you've learned about how people watched YouTube during the pandemic.
SW: Well, first of all, I'll just say I never thought that we would have so many hand-washing videos; one was even featured on the Google homepage. That's something I really could never have predicted. But I mean, I felt it was a huge responsibility for us, with the pandemic, to be able to be—I felt like we were such an important link for people to all kinds of information, whether people were at home and they needed to connect to religious organizations or social groups or, you know, we saw musicians who came out and did big concerts. We saw bands come and post historic coverage of concerts. It was just an important way for people to connect and learn. And, you know, one of the things probably that surprised me the most, which was really your question there, Nick, was how important we became in distributing COVID-19 information. And we immediately saw the role that we played. And we had everyone working at full capacity. So we served hundreds of billions of impressions of COVID information that came from different health organizations. And we also made sure that we had playlists. We had to implement a whole bunch of new policies, but we really saw the critical role that we played in health. And working with health organizations—we worked with over 85 different health organizations. And it was really the first time that we worked so closely in the health field with so many different organizations for something that was global in nature.
NT: That's very interesting, and in fact it leads right into the news from today. As some people who are watching may know, YouTube made an announcement maybe three or four hours ago about violative content, basically measuring the amount of content that violates YouTube's standards. And one of the standards you can violate is misinformation about COVID-19. So the question I want to ask about that is: this new report is out. There's transparency. That's wonderful. The amount of content that people view that violates your policies is quite low. It's 17 views out of ten thousand, is that correct?
SW: Yes, it's approximately somewhere between 16 and 18 per ten thousand views.
NT: So that means with my children, we probably see 16 to 18 a day. But the question I want to ask is: of the different categories of content that you screen for, where you have policies that could be violated—hate speech, nudity, terrorism—what remains the hardest category to identify? They're all difficult in different ways. You've used machine learning to knock the numbers down. What is the one that machine learning still has the hardest time with?
SW: Well, first of all, I think what we announced today was a really important milestone, because we have been asked many, many times—by governments, by press, by the advertising and creator community—about this violative rate. And we were able to show exactly how good we are at enforcing our policies. So we were able to show that we have a very high ability to find this content, and show exactly what that number was. We were also able to show that we have reduced it significantly over time. So if you look at where we were in 2017 at the same time of the year, we've reduced this by more than 70 percent. And that is due to an incredible amount of hard work with machines and also improving our policies. So not only did we remove content that violates our policies at that significant rate, but we also created a lot more policies under which content had to be removed.
And I mean, I would say the machines are good. We can find content across the board, but something like hate speech, or anything that requires a lot of context, is harder for a machine to detect. But in the end, we've been able to really fine-tune our machines so that we can find a lot of this content. And being flagged doesn't necessarily mean that it's removed. So what happens is the machines will flag it, and then it will be sent to human reviewers, who will determine whether or not it is, in fact, violative.
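[Editor's note: as a toy illustration of that two-stage flow, here is a minimal Python sketch. The class names, scores, and threshold are invented for this example; it is not YouTube's actual system.]

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Video:
    video_id: str
    violation_score: float  # hypothetical classifier confidence in [0, 1]

def moderate(videos: List[Video],
             human_review: Callable[[Video], bool],
             flag_threshold: float = 0.8) -> List[str]:
    """Two-stage moderation: machines flag likely violations, but removal
    only happens once a human reviewer confirms the flag."""
    removed = []
    for video in videos:
        if video.violation_score >= flag_threshold:  # the machine flags it
            if human_review(video):                  # a human makes the call
                removed.append(video.video_id)
    return removed
```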
NT: And I think that is one of the complexities here: this is content that violates your policies. It's not content that violates my policies or that fits some government's definition of hate speech. So a critique that someone could make is: this is just what you think is bad content; it has nothing to do with what I think is bad content. How do you respond to that?
SW: I'd say those are two different conversations. So the first one is for you and me and governments and everyone else. Everyone seems to have an opinion about what's good content, what's bad content, what should be up, what should be down. So we engage with many different groups across many different topics. And I'd say that's one conversation. But then we post very clearly and we say, this is the content that we have decided is violative on our platform. We post it in our community guidelines. And then there's a different question, which is: well, how good a job do you do at removing that content once you've identified it? And this report that just came out showed exactly where we are, which is 99.85 percent. We have a little confidence interval, which is why we have this 16-to-18 range per ten thousand views. So our goal is to break that into two different conversations. First, what should the policies be? And then, do we do a good job enforcing them once we have those policies?
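[Editor's note: for readers curious about the arithmetic behind these figures, here is a minimal Python sketch of how a rate per ten thousand views and its confidence interval might be estimated from an audit sample. YouTube's actual sampling and statistical methodology are not public; the function, sample numbers, and normal-approximation interval below are assumptions for illustration only.]

```python
import math

def violative_view_rate(violative: int, sampled_views: int, z: float = 1.96):
    """Estimate a rate per 10,000 views from an audit sample, with a
    normal-approximation 95% confidence interval. Illustrative only;
    YouTube's real sampling and statistics are not public."""
    p = violative / sampled_views
    margin = z * math.sqrt(p * (1 - p) / sampled_views)
    return (p * 10_000,
            max(0.0, p - margin) * 10_000,
            min(1.0, p + margin) * 10_000)

# Hypothetical audit: 1,700 violative views in a sample of 1,000,000,
# numbers chosen so the result lands near the quoted 16-18 per 10,000.
rate, low, high = violative_view_rate(1_700, 1_000_000)
print(f"VVR ~ {rate:.0f} per 10,000 views (95% CI roughly {low:.1f} to {high:.1f})")
```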
NT: Right. That makes a lot of sense. Let's shift to the question of good content, right? Sometimes there's all kinds of interesting content that you regulate in different ways. There's bad content, which you try to get rid of. There's borderline content, which is stuff that doesn't violate your policies, and you try to downrank that. And then there's the content that we sort of want to see. And there's also sort of this fourth category of content we're really happy we saw. So if I spend an hour on YouTube and I surf through YouTube and I follow the recommendation algorithm, and I watch a lot of sports videos and maybe I see the late-night comics, at the end of the hour I go, hm, that's fine. If at the end of the hour I can solve a Rubik's Cube, because the YouTube algorithm has pushed me in super interesting directions and figured out that I've always wanted to solve a Rubik's Cube, then I'm thrilled. How do you think about incentivizing not just the kind of run-of-the-mill sugar, not bad, but the really good stuff? Like, what's the exact inverse of the violative content?
SW: Look, I think one of the things I've learned in over 20 years of working in information at Google is the broad range of interests that people have. Information is incredibly diverse. And what a lot of people love about YouTube is they can say, I went and found this specific video that I used to watch when I lived in a faraway country 40 years ago, and I found it on YouTube. Or: I had to fix something very specific in my house, and I could do that with YouTube.
So first of all, it's really hard to say: this is content that is really great. You started talking about educational content; you implied with the Rubik's Cube that educational content carries a higher premium. I'd say educational content is incredibly important to YouTube, and almost everyone comes to YouTube to learn something. In fact, we just had an Ipsos study that found over 77 percent of people said they came to YouTube to learn something. And just anecdotally, everyone tells me how they fixed something in their house. But I think what you're bringing up is one of these challenges: what is considered good content?
We do classify, when it comes to information, authoritative content. So if you're looking for COVID information, we actually can say, look, you know, the health organization, your local health authority, the CDC or whatever country you're in or the World Health Organization—those are organizations that we can trust as opposed to some channel that just showed up that we don't have any kind of authoritative information about. So we definitely have a concept with information about authoritative sources. And we make sure that when people are looking for information that is sensitive, we show those authoritative sources. But if you're in the entertainment area or you're looking at how to fix something or how to learn something or an obscure topic, it's really hard to put some judgment about what is the best content that's out there.
NT: Right, so on authoritative content that makes sense, because you can just label it, and if something is labeled as authoritative, you can boost it.
SW: Well, not just label it. I mean, we have a more sophisticated algorithm in terms of how we identify that, and we are also working on a global component. But we definitely do raise it. And we've done a lot of work in the last year to identify sensitive areas and make sure that we're raising authoritative sources on that information.
NT: That makes sense. And by the way, when Susan speaks about fixing things: I was listening to a podcast of hers this morning where she talked about watching a YouTube video to learn how to fix her 3D printer, I believe, which I thought was a delightful story about how one can use YouTube.
The last time we spoke in public, which was at South by Southwest, you introduced a new concept very much related to this, which is that when conspiracy information is shown, you would show a panel with authoritative information and you would lead people to Wikipedia. Tell me how this has evolved over the last two years.
SW: Sure. So, yes, we met almost two and a half years ago. 2018.
NT: Oh my god, three years ago.
SW: Yeah, three years ago. And I told you that we were going to label content. And at the time that was a new idea. I don't think we had actually rolled it out, or we were just in the process of rolling it out. And that became something that we now refer to as information panels. And those information panels have become incredibly important. I'd say they're a serious workhorse in making sure that people have the right information, and something we can use to counter misinformation. So I'll give you a few examples. We certainly have used that on all the coronavirus information. So if you look at something COVID-related, we'll link to all different kinds of health information and health authorities, depending upon what country you're in. For all kinds of common conspiracies, we'll link to that. For election integrity, we use that. During COVID, we served hundreds of billions of impressions of different information panels. So we've come a long way, Nick, since we first met. And I'm hopeful that our VVR is also the start of something really important, which is more transparency, and just an enhancement of our transparency report to understand how we think about violative views.
NT: I wish you'd been able to announce the transparency report here on this program.
SW: I'll do better next time. I'll work on that.
NT: Thank you so much. It's very important to be able to bring these things live at the forum. So one of the things that to me is most interesting is the algorithm, and we wrote a long piece at WIRED—in my previous job as the editor of WIRED—about the way the algorithm had evolved. And one of the things that's so interesting is that every time you make a change to solve one problem, there's some kind of unintended consequence. There's something that you then have to catch up to. There's some way that behavior has changed; there's some new thing that is incentivized. Tell me how you think about the evolution of the algorithm and where it is right now. What are the key things you're prioritizing and trying to fix, and what are the things you're worried about?
SW: Sure. I mean, I think we've come a long way with our algorithm. Ultimately, we want to give information and suggest videos that we think our users are going to enjoy and want to see, and that are related to their interests. But there are a lot of caveats to that, too. So, first of all, as I mentioned, when we deal with information, we want to make sure that the sources we're recommending are authoritative: news, medical science, et cetera. And we have also created a category of more borderline content, where sometimes we'll see people looking at content that's lower quality and borderline, and we want to be careful about not over-recommending that. So that's content that stays on the platform but is not something that we're going to recommend. And so our algorithms have definitely evolved in terms of handling all these different content types.
I'd say the plus of that is that our users are able to see higher-quality content. We're able to make sure that they're getting information from sources that are very reliable. But I would say the con of some of these changes—because, as you pointed out, every change has some downside—is that it may be harder, in some cases, for channels that are smaller or just getting started to be visible when there is a major event, or when people are looking at something that is science- or news-related. But I would say that's a trade-off that we've made, because we've realized that it's really, really important.
So we learned this lesson the hard way. When we had the Las Vegas shooting, unfortunately, there were a lot of people who were uploading content that was not factual, that was not correct. And it's much easier to just make up content and post it from your basement than it is to actually go to the site, report, and produce high-quality journalism. That was just an example of what happens if you don't have that kind of ranking. So, sure, we want to enable citizen journalism; we want other people to be able to report, to share information, and to start new channels. But when we're dealing with a sensitive topic, we have to have that information coming from authoritative sources, so that the right and accurate information is viewed by our users first.
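[Editor's note: to make this kind of ranking concrete, here is a minimal, hypothetical Python sketch of boosting authoritative sources on sensitive topics while demoting borderline content. The field names and weights are invented; YouTube's actual algorithm is far more sophisticated and not public.]

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Candidate:
    channel: str
    relevance: float      # base relevance to the query, higher is better
    authoritative: bool   # e.g., an established news or health organization
    borderline: bool      # allowed on the platform, but not promoted

def rank(candidates: List[Candidate], sensitive_topic: bool) -> List[Candidate]:
    """Re-rank results: on sensitive topics, raise authoritative sources;
    always avoid over-recommending borderline content."""
    def score(c: Candidate) -> float:
        s = c.relevance
        if sensitive_topic and c.authoritative:
            s += 1.0      # boost trusted sources for news/health queries
        if c.borderline:
            s -= 2.0      # stays up, but gets pushed down in recommendations
        return s
    return sorted(candidates, key=score, reverse=True)
```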
NT: And that's not an easy tradeoff. I mean, your name is YouTube. The whole principle is that you, anyone, can have complete free speech and publish what you want, or at least that was the founding principle. I would imagine that this is a tradeoff that did not come lightly.
SW: I lost you for a second there, you broke up a little bit, but you're right. When YouTube first started, it was much more about entertainment. It was much more focused on creating interesting things that you saw, funny videos. Music has always been really big on YouTube, and you definitely want to be able to break the latest artists. And so that's something that we need to think about. We have so many artists who got started on YouTube, so when the next famous artist, like a Shawn Mendes or a Justin Bieber, who got started on YouTube, posts their video, we want to enable those new artists to break. But breaking artists, or discovering the newest small musician, is very different from looking for something like cancer information. There, you don't want to see someone who is just posting information for the first time; when you're dealing with cancer, you want to see it from an established medical organization. And so what we've done is really fine-tune our algorithms to make sure that we are still giving new creators the ability to be found when it comes to music or humor or something funny or so many different categories—beauty and crafts, learning, how-to, all these different areas. But when we're dealing with sensitive areas, we really need to take a different approach.
NT: All right. Let's move on to the governance question, since that is a big part of the forum today. Clearly there's a lot of conversation in the United States, but elsewhere too—you know, we've seen it in Australia. There's a lot of conversation about regulating the big social platforms. You are, I guess, lucky, or maybe unlucky, that you haven't had to be subjected to a seven-hour grilling in front of Congress. Congratulations on avoiding that. Tell me one idea that has traction for governing YouTube that you think is a terrible idea, and one idea that has traction that you think is a reasonable idea.
SW: Oh. I mean, look, first of all, I want to say that I understand where governments are coming from, and we see so many different perspectives across governments. And I'd say generally we're really aligned with the overall approach. Of course we want to keep kids safe. We want to prevent violent extremism on our platform, and we want to keep our communities safe. So with all the laws around hate speech or child safety, we're working incredibly hard to figure out how we can do everything we possibly can. I'd say the challenge comes when we get regulation that's very broad and not well defined. Something that is "harmful"—like, what is hate, what is harmful? Those are not things that are easily defined, and there are many, many different interpretations, depending upon what you're handling.
And so I'd say the challenge we have is when we have overly broad regulation that requires us to potentially remove a lot of content, which would not be good in the end for our users. And I will say there's a lot of regulation right now that's happening where people are—I mean, we had this issue with what was Article 13, now Article 17, the copyright regulation in Europe. And we were able to do a lot of work with policymakers to improve it. But that was a case where we were really, really concerned. If it had gone too far, the way it had been written, we really would not have been able to enable so many channels on YouTube. So I get really worried about any kind of regulation that causes us to potentially take down large amounts of content, which would hurt so many different creators and our small media companies. They represent a lot of diverse voices. There are [stories that] need to be told; they're creating a service with educational content; they're creating jobs. And we just came out with all our GDP and job numbers, which are really impressive. So I worry, always, when I see regulation that would potentially cause us to hurt a lot of the growth that we've seen from the Internet.
And so I'd say we're aligned when it comes to keeping communities safe. We want to do everything we can, and we want the definition of the language to be tight enough that we can actually comply in a way that is clear. And then we also have to be really careful about the unintended consequences of some of the copyright regulation, or even, like, Section 230. Like, what could go wrong that could cause us to have to remove a lot of content? That would be really devastating for the Internet and for the creative economy.
NT: Do you feel like it would be possible to reform Section 230 in a way that would still give you the ability to filter content and give you protection against the possibility that someone posts something offensive on your platform, but that would solve many of the problems that lawmakers have seen in that very antiquated piece of legislation?
SW: One of the challenges I have is that there are a lot of lawmakers who want us to remove more content, and then a lot of lawmakers who want us to leave up more content. And so it's not really clear what it is that lawmakers want to solve for in the first place, and that makes it really challenging to address. I think there are many ways to address whatever the objectives are, and we'll certainly work closely with lawmakers to try to achieve those objectives. But right now, it's not clear exactly what those objectives are. There seems to be a lot of disagreement about it. So until that's clarified, it's hard for us to figure out exactly what the right next steps are. But I would say the next steps are to continue talking about it, continue to try to define that more clearly, and come up with solutions that will keep communities safe but at the same time enable the creative economy and the jobs and the education and the huge amount of valuable media to continue to flourish and grow.
NT: And what is an example of legislation that you've seen internationally that you think of as sensible, balanced and within the proper scope?
SW: Oh, I mean, I think maybe I'll start with, like, you know, NetzDG. The first version of it had some really clear language around how we handle hate content and the need to remove it. And that is something we also agree on: we want to remove that type of content. We actually wound up first complying with NetzDG and then later expanding our hate policies. And so in many ways, it was useful that we had done a lot of that legwork for NetzDG. So that would be an example of a regulation that was useful for us and that we're aligned on. I think, you know, there's certainly more policy coming there, and we are in the process of understanding and working through that. But the first version was helpful for us. And we don't want to have hate on our platform. We want to remove it, and we want to remove it quickly. So that was something I think we were very aligned on and were able to work together on.
NT: Let me ask you a big global question. One of the most discouraging things to me about the world right now is the technological split between East and West, particularly between the United States and China. Do you see any path to rapprochement? Do you see any way that ultimately the United States and China are able to figure out the issues that divide them on tech and that YouTube actually is operating happily in China some number of years from now?
SW: Oh, I don't know. I mean, I'm not sure I'm the best person to ask about that, because Google operated in China for a really short time, so I'm not sure that I'm the right person to answer that question. What I see about YouTube is the humanitarian good that we do. I see us as a global public video library, and we have a huge amount of content from which people can learn how to do anything, whether it's a skill or a language or a musical instrument. You can research any kind of historical talk. You can see all the WEF talks here on YouTube. You can see all the TED talks. And so a lot of times I just feel sad if there's a population or a group who can't access that. And there could be many reasons for that. There could be policy reasons, but there could also be technological reasons—people who don't have access, or who aren't connected to the Internet, or data is very expensive in their country. So I see the value of being able to offer this library. And hopefully, in some ways, there will be more bridges built in the future.
NT: Do you have a set of things that, if satisfied, would tell you that it's time to go back into China, or is it so far off in the distance and so out of the question at the moment that you don't even have that punch list?
SW: It's not something I'm working on at all right now. There are so many other things that I am working on, so many areas that I'm focused on: our product, our innovation; we launched a Shorts product. I'm very focused on shopping, enabling more shopping on YouTube. I've also said that responsibility is my number one priority, and as you can see, we've made tremendous progress, but there's still a lot more work to do. So I'm very busy just making YouTube a better product.
NT: And I am very interested in seeing the violative content report, and seeing whether you can get that number down from .16 to .18 to .10.
SW: I think we will. I'll certainly say that's a goal of mine, that we continue to lower that number, and our team will continue to work on it. And measuring it and being transparent about it is always the first goal. In our report we also break down all the different ways it was flagged: was it flagged by machines, how quickly did we take it down, what was the category it was removed for? So I do think that the transparency that we have is a really big step. And what I like about this metric a lot is that it encompasses a lot of the questions that regulators had. A lot of times they would talk about virality: oh, you had a video, but it got a lot of views really quickly. All of that is encompassed in the Violative View Rate, VVR, for people to be able to understand. And all the work that we do should bring that number down.
NT: Well, I hope that you bring that number down by becoming better at finding bad videos and not by lowering your standards, which would be another way to drop that number.
SW: No, actually we're raising our standards. I mean, that's the thing to remember: we have significantly raised our standards. Just look at 2020. We had ten different policies on COVID. We had a number of policies around civics and election integrity. So we keep raising the bar, and we need to make sure that our enforcement gets even better while we're raising the bar. And that's a challenge. But we're staffed now. We have the people, the policies, the technology in place. So I do see the opportunity for us to really continue to improve that over time.
NT: I'm just saying that, as a new CEO, I know that KPIs can influence behavior in funny, funny ways. But I hear what you're saying, and that sounds like exactly the right way to do it. OK, last note: we have about 30 seconds left. Tell me something exciting—actually, let me ask you this way. Are we going to be watching YouTube more on AR or VR a few years from now?
SW: Oh, I'd say AR. First of all, there's just so much potential. And I do think there's a lot of opportunity with AR in terms of modifying video, modifying the creation of video, and how we view videos. I love VR, but it's been hard to get the headsets and the content and get that ecosystem started. And so until there's a real breakthrough where one of them becomes a lot easier or cheaper, it's going to be hard for VR. But it will happen; there will be that breakthrough, and it will happen. So in the meantime, I think there'll be a lot on AR, and AR can go a long way. I think we're going to see a lot of improvements with video, and we'll be able to improve our lives, have more tools, and have more fun with AR. So I'm optimistic about the future there.
NT: All right, wonderful. Thank you so much, Susan Wojcicki. Let's all leave and go watch some high-quality content on YouTube. Enjoy the rest of the event, come back later, and come back tomorrow. Thank you.
SW: Thank you for having me.