Transcript of webinar 'What Big Tech does to discourse, and the forgotten tech tool that can make tech less big' with Cory Doctorow, 6:00pm GMT, July 1, 2020

Cory Doctorow: Hi, everyone. Thank you for attending. It's a pleasure as always to be at the Oxford Internet Institute. It's more of a pleasure when I'm physically present at the Oxford Internet Institute. But greetings from sunny Southern California. Like all of you, I am trapped indoors by a pandemic -- or maybe some of you are so-called "essential workers" -- but nevertheless, I am where I am because of the pandemic.

And before the COVID crisis, there was another pandemic. One that we were very worried about, maybe as much as we are now, although with less intensity. That was the epistemological pandemic: the collapse in our trusted institutions, the rising conspiracism, and the fear that our institutions would be hamstrung. The important work they do for our public good -- whether that's climate change, communications infrastructure, vaccination, or public health -- has only become more salient as the pandemic has spread.

And the received wisdom is that the way we ended up with this epistemological crisis -- this crisis not just in what we believe, but in how we know whether something is something we should believe -- is that Big Tech did this by brainwashing us with algorithms. That Google set out not to make a mind-control ray, but to sell us a lot of fidget spinners, and they figured out that if they spied on us enough, and then fed the data into a machine learning algorithm, they could make anyone buy a fidget spinner. And then Steve Bannon and Cambridge Analytica came along and figured out that you could use this to make people racist, and they elected Donald Trump and Viktor Orban and the rest of the parade of horribles.

And I think that story is wrong. But that doesn't mean that Big Tech is blameless. I think that everyone who's ever claimed to have made a mind control ray -- to have come up with a way to bypass people's critical faculties and instantly convert them to a belief or a course of conduct -- either was kidding themselves, or they were kidding other people. They were charlatans. And that goes whether we're talking about the CIA and MKUltra, or pathetic pickup artists on the internet who claim that through neurolinguistic programming they can make women go home with them, or what have you. They're all either lying to themselves or lying to you, or both.

And so we should ask ourselves what it is that Big Tech is actually doing. What observable impact Big Tech is having on the world that we can directly attribute to Big Tech. And unquestionably, the thing that Big Tech does better than anything else is locate people with hard-to-find traits in the population. And I think that this facility, more than anything else, can explain the distorting impact that Big Tech has on our discourse. But as you'll hear, it's kind of a mixed bag.

This people-finding ability is obviously very important to advertisers, and not just for selling fidget spinners. I often consider the plight of the poor refrigerator marketer, who wants to sell you a product that the median consumer buys one or fewer of in their entire life, and where there are very few correlates to refrigerator purchase that you can use to target your advertising. I mean, there's a reason that we see white goods ads on the motorway, on the way out of airports. The thinking goes, "people who fly on airplanes have money, and money is a thing you need to buy a refrigerator -- maybe they'll buy a fridge too."
And this, you can see, is a very, very low-value way of getting people to buy refrigerators; the conversion rate for a giant signboard on the way out of the airport is probably something like 0.0000001%. What Big Tech can do is find you some better correlates of fridge-buying than "a person who might have money." Like maybe a person who's looked at a refrigerator review, or who's investigating kitchen remodeling, or who just looked up refrigerator repairs. Those still have a very weak correlation with buying a refrigerator. The conversion rate for those might only be 0.0001% -- still far less than 1%, but also two or three orders of magnitude better than the conversion rate for that signboard out by Heathrow or LAX or wherever you happen to be.

So this facility for finding people who have hard-to-find, hard-to-locate traits that are widely dispersed in the world has some really socially beneficial purposes, beyond whatever social benefit you can attribute to more efficient refrigerator sales. If you have a hard-to-locate heterodox trait that you could face social sanction for revealing in public (like maybe you think that gender is not a binary), and you face discipline from your employer, or dismissal, or getting kicked out of your house, or (depending on where you live) being imprisoned or even killed for that view, Big Tech can help you find other people who have those views. It can help you both target them, and them target you, so you can come together and talk. And contrary to the transparent-society story (that we will have more social acceptance only when everyone is forced to reveal their true selves to the whole world), what we actually see is that the way we make social progress -- the way we went from heroes like Alan Turing being stampeded into suicide for being gay, to official apologies to Turing for that really brutal regime -- is that people who have a secret about their true identities are able to choose the time and manner of the revelation of those secrets. They can recruit allies to their cause, and slowly build a solidarity movement in support of their fundamental rights. And this ability to locate other people who have the same views as you, and make common cause with them, and slowly build up a movement -- this is key to our social progress.

But the other thing that Big Tech makes easy to find is people who are primed to believe in conspiracies. People who are ready to believe that the earth might secretly be flat, or that vaccines might secretly be making you sick, or that 5G is secretly going to poison you on behalf of Bill Gates, or what have you. And this raises an important question. If Big Tech isn't the thing that makes you a conspiracist (it's rather the thing that allows conspiracists to locate other putative or potential conspiracists, and recruit them into the conspiracy), why are people so ready to become conspiracists today, in 2020?

And I think the answer to that is that institutions have stopped doing their jobs. We live in a very complicated technical world where there are a bunch of really hard questions that we need to answer in order just to survive the day, from food safety to aviation safety to structural safety, terrorism, education, what have you.
If you're going to be happy, and well, and prosperous, and even just survive the next week, you need to be able to understand the best evidence and conclusions on very complicated technical questions that most of us are not qualified to answer for ourselves. And even if you're qualified to answer one of those questions -- if you can go and read the journal articles about vaccines and decide that vaccines are safe, as I believe they are, and I think you should believe they are -- that doesn't qualify you to then go and read the journal articles that tell you whether or not distance education programs are working as advertised. Or whether microplastics present a health risk to you. Or any of the other myriad disciplines that we have.

And so, rather than asking each person to do their own research and come to their own conclusion about these difficult-to-navigate, complicated technical questions, we convene these neutral truth-seeking exercises: regulatory exercises, backstopped by the academy, where experts with competing claims and views gather together and have their evidence adjudicated by neutral experts, who disqualify themselves if they have an interest, and who then give us their best view and recommendations, sometimes with the power of law in the form of regulation.

And instead of doing this, increasingly, our experts just auction off the truth. A recent UK example would be when the former drugs czar of the UK, David Nutt, concluded that the anti-binge-drinking education programs (funded by the drinks industry itself) were ineffective. To test this hypothesis, he ran an experiment with two groups: one group got the anti-binge-drinking education offered by the drinks industry, and the other got an anti-binge-drinking program he made up himself. And he found that his program worked and their program didn't. And moreover, that the majority of profit to the drinks industry is attributable to dangerous levels of drinking, so they have a conflict of interest when it comes to curtailing that kind of drinking. And Nutt's evidence was just shelved. And today the UK drinks industry continues to make its own anti-binge-drinking education programs, which are very ineffective.

And so, when you have this kind of concentration in markets, when there are just a couple of firms, it's easy for them to get together and decide what the truth is going to be, to impose that truth on their regulators, and to have the regulators rubber-stamp that truth. And so you see things like doubt being deliberately sown about the climate emergency, its sources, and the potential remedies for it. Whether or not austerity will work is another source of enormous disinformation, where the best academic evidence is ignored in favor of parochial evidence that favors industries that benefit from austerity. Whether the Boeing 737 Max is safe is an issue on which regulators decided to just let Boeing self-certify, with disastrous consequences. Whether firms should be allowed to merge with their major competitors, or to snap up nascent competitors and catch-and-kill them, is an area where firms have been left to largely regulate themselves, without any oversight from competition bureaus. Or whether the gig economy is providing good, real jobs.
We actually have cooked-up jobs figures that incorporate the sub-poverty wages people earn from the gig economy, in order to declare that state policies meant to reduce unemployment are working -- when in fact the jobs don't really qualify as jobs in the sense of making people economically independent and free from fear of homelessness and starvation and what have you.

So, in other words, people believe in conspiracies because we are living in a golden age of conspiracies. People believe in conspiracies because they have been conspired against. We have this notion that the conspiracist is lazy. And maybe there are some people who just have these kinds of incoherent ideas about what's going on. But even, and especially, the most outlandish conspiracies tend to be the ones with the most sprawling and esoteric evidence for them. When you ask people about 5G, they can emit a stream of scientific-sounding reasons why they should be burning down 5G towers. Those reasons are wrong, but they're hard-won reasons. These are not people who had a 5G conspiracy theory dropped on them lightly, like a snowflake. These people dug out a whole driveway's worth of blizzard snow to get at their 5G theory. And they have memorized chapter and verse, not just about 5G conspiracies, but about other instances in which regulators have failed to keep them safe.

To be a conspiracist is to have an energetic mastery of wrong information. And sometimes that information in fact provides -- not a good evidentiary basis, but a good fact pattern -- to support skepticism of a regulator. When you talk to anti-vaxxers, they know an awful lot about, for example, the opioid epidemic: in which regulators around the world allowed a small number of extremely well-capitalized firms (that also hired many of the people out of those regulatory agencies, or had their former executives go into those regulatory agencies) to sell painkillers, on the grounds that they were safe, that the risk of addiction was low, that people didn't need to worry about overdoses. And those regulators turned a blind eye as the bodies mounted. And now in the US the death toll from the opioid epidemic exceeds the US death toll from the Vietnam War, and there's no end in sight. It's getting worse during the crisis, not better.

So, the epistemological crisis is a crisis of faith in institutions, and it is primarily caused by the institutions themselves, by institutional failures. We have, for example, a copyright directive that was passed in Europe last year, in which the expert evidence was that, on the one hand, you wouldn't be able to keep users from uploading infringing materials to the internet without using an automated filter. And on the other hand, that automated filter would catch unimaginable multiple Bodleian Libraries' worth of legitimate material, and doom it to copyright jail, from which it would probably never be released, because there just aren't enough skilled copyright adjudicators to look at every false positive and decide whether the material should be released out to the world. And the Commission -- and even organizations that I belong to (like the National Union of Journalists, of which I'm a dues-paying member) -- ignored all of this evidence, and instead created a rule that said, "you have to somehow respect the limitations in copyright for fair dealing and parody and commentary, but also stop all infringement. And you can't spy on everyone. So you shouldn't use a filter, but if a filter is the only way of doing it, then maybe you should use a filter..."
This kind of incoherent mess, as we're seeing countries around Europe implement the rules, is just going to lead to a kind of broad-spectrum, sloppy filtering that operates on a presumption of guilt and censors millions of Europeans' speech, every day, presumably for the foreseeable future, unless there's some major reform.

The other effect this had was that it created a new capex barrier to entry for the tech industry. If you want to enter the European tech industry -- float a platform to compete with the existing ones (which are almost entirely owned by large American firms) -- you now not only need to figure out how to convince users to come on board, and you not only have to figure out how to beat their network effects, you also have to have hundreds of millions of euros to spend to build and maintain these filters. I mean, YouTube's baby version of this filter, Content ID, alone cost 100 million dollars. And that's just the tip of the iceberg for the kind of filtering that's going to need to be done. And not surprisingly, Big Tech quietly changed its mind about whether it was worried about this regime, and briefed in favor of it. Both Facebook and YouTube briefed in favor of copyright filters by the end of the debate.

So the only thing worse than the pandemic is the pandemic at the moment in which public trust in institutions is at its lowest ebb. And this makes the COVID crisis far worse. The worse the coronavirus crisis gets, the less credible states and their truth-seeking exercises appear, and the worse the epistemological crisis gets as well. So we're caught in this destructive positive feedback loop, where we don't believe in our governments, so we don't follow their guidance; their guidance is itself flawed because they're subject to capture; that capture is easier to sustain because they don't have the credibility that would mobilize people to argue against the capture; lather, rinse, repeat. And you have no ability to field a response to the crisis.

So this crisis will pass someday. We will attain... Oh, someone says "issue with my mic." Hmm. You know what, I'm just gonna switch off my... my jacket, my jacket is causing an issue. Oh, I see, you think it's rustling... maybe. Level test, 1-2-3. Could someone post in the chat if my mic sounds good? Okay, great. Yes. Sometimes turning it off and on again works.

So, this crisis will pass. We'll have a vaccine, or we'll have herd immunity, or we'll have an effective therapeutic, or some combination thereof. But we will get more pandemics in the future. And our vulnerability to those pandemics -- both epistemological and virological -- will only increase as the years go on. As we're stockpiling N95s and ventilators for our next crisis, we also need to address this epistemological crisis. We need to give people a basis for trusting institutions again. We need to restore evidence-based policy by changing the way our industries are structured -- by making them more pluralistic.

People look at the photo of Trump with the tech leaders around a boardroom table in Trump Tower shortly after the election, and they say, "that's ghastly; how could this handful of men and a few women who control our whole tech industry possibly lend their credibility to Trump by meeting with him in Trump Tower?" That's a fair enough question to ask.
But before we're done asking that question, let's also ask: how is it that the whole tech industry is controlled by so few people that they fit around a single table? Game theory tells us that when the number of players in a game drops and drops, it becomes easier for them to collude against their referees. When there are only four or five players in an industry, it's easy for them to set aside their differences and come to accord on a single lobbying position that they can use to subvert our truth-seeking exercises.

And so we need to restructure our industries. We need muscular forms of anti-monopoly action that include merger scrutiny, but also prohibitions on vertical integration and limitations on how platforms can conduct their affairs -- firms should not be allowed to both offer a platform and compete with the users of that platform. It is so obvious that there's a conflict of interest when the person who runs the only store you're allowed to sell things in -- either because they dominate the market, or because they've used technical protection measures to ensure that, say, their app store is the only one that works for the platform -- is also selling things in that store. The conflict of interest is so manifest that anyone who claims it doesn't exist is clearly batting for the other side. So that's one thing we need to do. We need to address this in the legal arena.

But there's also a technological means for addressing this dominance. And it's interoperability. Interoperability writ large is kind of the root of all self-determination. When you think of the demand by striking coal miners that they be given real money instead of company scrip that they could only spend in the company store, what they were asking for was interoperable money. Money that worked everywhere, not just in one place. When you think of the demands that women made for economic independence separate from their husbands -- the right to divorce, the right to own property, and so on -- they were asking to be able to swap out some part of their life, some institution in their life, and stand up a new one, one of economic independence.

No one is an island. No one is able to simply turn their back on the things that we have and make a new thing. The French revolutionaries' attempt to make a metric calendar, with its ten-day weeks, failed. The only way that we make change is by being able to graft new things on top of the things that currently exist. By being able to say, "yes, I want to continue using this phone that I bought, but I don't want to make myself subject to the decisions of the people who run the app store for it." Or, "all of my friends are on Facebook; I don't care how good the alternative to Facebook is, because my friends are trapped on Facebook. I'll keep using Facebook unless you give me a way to talk to them without using Facebook." Unless you give me an interoperable tool, comparable to the one that Facebook made in its own early years, when it made a tool that allowed people whose friends were all on MySpace to give their MySpace login and password to a Facebook bot that would visit MySpace on their behalf, scrape their waiting messages, put them in their Facebook inbox, and then push their replies back out to MySpace. That kind of interoperability was the core of how tech was dynamic for most of its existence.
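[Editor's note: to make the mechanics of that bridge concrete, here is a minimal Python sketch of what such a bot amounts to. Every URL and field name below is invented for illustration -- the real Facebook/MySpace tool predated public APIs and scraped HTML using the user's own credentials -- but the shape is the same: the user, not the platform, supplies the login and decides where their messages go.]

```python
# Hypothetical bridge bot: log in to a legacy social network as the
# user, fetch their waiting messages, and mirror them into the new
# service's inbox. All endpoints and fields are invented for this sketch.
import requests

LEGACY_LOGIN = "https://legacy-social.example/login"       # hypothetical
LEGACY_INBOX = "https://legacy-social.example/inbox.json"  # hypothetical
NEW_IMPORT   = "https://new-social.example/api/import"     # hypothetical

def bridge_messages(username: str, password: str, new_token: str) -> int:
    """Mirror the user's legacy inbox into the new service; returns count."""
    session = requests.Session()
    session.post(LEGACY_LOGIN, data={"user": username, "pass": password})
    waiting = session.get(LEGACY_INBOX).json()  # e.g. [{"from": ..., "body": ...}]
    for msg in waiting:
        requests.post(
            NEW_IMPORT,
            headers={"Authorization": f"Bearer {new_token}"},
            json={"from": msg["from"], "body": msg["body"]},
        )
    return len(waiting)
```

[Replies would flow the other way through the same authenticated session, which is what made the bridge two-way.]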
Over and over again, we see interoperability not just as set by standards, but also as this adversarial interoperability, where firms make new products that plug into existing ones against the objections of the people who made those products. Third-party printer ink. Third-party spares for your refrigerator. Third-party parts for your phone or your laptop or your car. All of those elements are what allow new market entrants to prevail even when an existing market player dominates. When they have network effects, when they have first-mover advantage, you just elide those advantages by piggybacking on them. By saying, "all right, HP, you own the whole printer market, and you subsidize it by selling printer ink at a giant markup. Great, we'll just sell printer ink for your printers instead of our own." That's what a company called Static Controls did when they started making third-party printer cartridges for Lexmark, which had been IBM's printer division. Today, Lexmark is a division of Static Controls. Interoperability without permission from dominant actors is a key to this kind of dynamism and this kind of technological self-determination.

Now, like all of us, firms want to minimize risk. We want defined-benefit pensions, not market-based pensions, because we want the surety of knowing that when we get older, we'll be able to afford groceries and we won't have to eat cat food. Firms want to be able to control their competitors and their critics and their customers, and prescribe their activities, so that they are never allowed to do things that harm their shareholders' interests. The hedge against you keeping your phone for longer, and undermining the profitability of the manufacturer hoping to sell you a new one, is to be able to limit who can repair that phone -- so the firm that manufactured it can unilaterally declare that your phone is now ready to become e-waste, and you must buy a new one. In January 2019, Tim Cook told Apple shareholders that a major risk to Apple's profitability was that people weren't turning over their phones often enough. And that was the year after Apple defeated 20 state-level Right to Repair bills across the United States in various state legislatures. And the hedge against a downturn in coal prices is running the company store, so that even if the coal isn't as profitable, you can at least extract monopoly rents from the coal workers who've been paid in your funny money.

And tech supercharges this. Firms have always wanted to constrain customers and competitors. If you buy old Edison records, you'll see printed on the label a notice that says "this is only to be played on an Edison phonograph, under penalty of law." But neither Thomas Edison nor his patent enforcers could come and visit your sitting room and find out what record you were spinning. Today, technology can actually enforce compliance with the so-called Terms of Service -- the agreement that you allegedly make by being dumb enough to buy a product and assuming (as I think any rational person would) that the 19,000-word novella of garbage legalese that came with the product has no force of law, given that no one can understand it and no one has ever read it. That garbage novella can become the actual law of the land. Our appliances, our medical implants, our thermostats -- all of these are increasingly being turned to enforcing policy against us.
Today, some colleagues of mine from the Electronic Frontier Foundation published a new report on what we're calling "bossware": the spyware that employers insist that employees run on their laptops, while those employees have turned parts of their homes into rent-free branch offices for their employers. It's advertised as being able to capture every mouse click, every direct message, every keystroke -- information that can be used to fend off wrongful-dismissal suits, or to head off unionization attempts. That is what tech can do. It can create this unblinking eye that can stop all violations of whatever Terms of Service were unilaterally imposed by someone who enjoys an enormous power imbalance.

So every other fight we have turns on our ability to master our own technology -- to have technological self-determination -- to either reconfigure our technology ourselves or, more likely, source from third parties the tools that will allow us to reconfigure our devices. You don't need to reverse-engineer Lexmark's dodgy printer cartridge chip. You can just go to Static Controls, who've done that work for you, and buy it from them. Or maybe your cooperative will do that work for you: we see, for example, organizations that represent people with sensory and physical disabilities doing the hard work of reverse-engineering file formats to make them more accessible.

By contrast, the legal regimes assume that everyone will do that work for themselves. When the copyright directive (the first one) was passed in the EU in 2001, Article 6 of it said that it would be illegal to bypass a technical protection measure, even for an otherwise permitted use. And the way this has been implemented says that if you are blind, and you are permitted to bypass the technical protection measure in order to convert an ebook so that your screen reader can read it, in many EU member states you are expected to do that work yourself. In fact, I debated the Norwegian minister responsible for the implementation of the law in Norway when it was brought in there -- and Norway's obviously not in the EU, but they take on EU regulation -- and I said, "honestly, what you're saying is that if I'm blind, and I have an ebook, and I want to prepare it so that it's accessible by a screen reader, my remedy under your law is that I get a computer science degree, and then use that to discover a defect in Adobe's security, and then liberate the book from Adobe's wrapper, and then put it through my screen reader. And having done that, I'm not allowed to show anyone else how I did it; every other blind person has to do this for themselves?" And the minister said, "Yes, that is the way the statute works."

So once we create technical protection measures and imbue them with the force of law -- so that removing a technical protection measure, even for a lawful purpose, is unlawful -- we foreclose on the possibility that someone will liberate our technology for us; that a third party will take the action that's necessary to make the tools that we need work the way that we need them to work. So, if we are going to save ourselves, if we are going to liberate ourselves from this epistemological crisis, we must be able to have mastery of our technology. We must be able to run our own tools.
Doing that means not only ending anti-circumvention rules, and not only allowing people to violate Terms of Service, but also ending this harmful practice we've embarked on of deputizing Big Tech platforms to solve the social ills that they've created. We look at something like Facebook and we say, "with 2.6 billion users, people are able to engage in harassment, racism, hate speech, terrorism, and so on. Therefore, Facebook, your job is to police all 2.6 billion of your users in order to end that." And that means that Facebook is now forever destined to be so big that it can spare the money to moderate its own users' conduct. In the same way that when AT&T was integrated into the apparatus of the US government, it meant that AT&T could not be broken up: in 1956, when the DOJ tried to break up AT&T, the Pentagon made them stop, because they couldn't prosecute the war in Korea without them.

So, our resilience to our future crises requires that we be able to adapt our tools to fast-changing circumstances. And that right must be both restored and made permanent, even and especially when it contravenes the interests of shareholders or dominant firms. And with that, I'll turn it over to Ravi. Ravi? There he is. Hey!

Ravi Naik: Hi. Thanks, Cory, that was excellent. It was really interesting. I really enjoyed the talk... I'm sorry, right at the end there, my WiFi decided to cut out. The good and the bad of tech: great when it works, but... So, I've been lucky enough to have read your forthcoming book, which, effectively, I think you've just given us a summary of. It's a fantastic book and I really encourage everyone to have a look at it when it comes out. I think it's called "Working As Intended"?

Cory Doctorow: Yeah, "Working As Intended: Surveillance Capitalism Is Not A Rogue Capitalism." Just finalizing the contract with the publisher, so watch this space; I think it might be premature to announce who the publisher is.

Ravi Naik: OK!

Cory Doctorow: I don't want to jinx it. It was with another publisher for seven months, who then basically shut down all forthcoming projects after the pandemic started; so I don't want to jinx it again. Yeah.

Ravi Naik: Okay, well, we can talk about how great it is. I also understand that there is an EFF summary of the issues in the report, which hopefully maybe we could share after this call.

Cory Doctorow: Yeah, yeah. Our landing page for adversarial interoperability, which we are contemplating renaming... what were we going to call it? It's... sorry, "competitive compatibility" -- "ComCom" -- which is a lot more fun to say than adversarial interoperability, which is... it's hard to say.

Ravi Naik: Right. I encourage everyone to read it. It's pretty fantastic, and thanks for giving us this talk today. And it's a real pleasure to be able to host this session with you on these topics, because I come at this from a quite different perspective. I'm a human rights lawyer by background, and I've developed my career to be able to now focus almost entirely on the dynamics between Big Tech and our society, and the way society deals with the issues that Big Tech throws up. And obviously the book and the paper really start to touch on issues of human rights and the social contract we have with tech. And I would like to ask a few questions that I've prepared to do with that. But also, everyone that's listening in today, please do put your questions into the question-and-answer box; we encourage you all to feel free to weigh in. So, maybe something more high-level...
It's more of a macro view: your broader views about our current situation. We've talked a lot -- and you talk a lot in the piece -- about Shoshana Zuboff's book "Surveillance Capitalism." And you explain a lot about how you agree with some of what she said, but also have a few views that diverge from hers, specifically about her prescriptions for how we get through this. And the book that I think maybe speaks more clearly to the issues that you have -- and I totally agree with -- is Julie Cohen's book "Between Truth And Power," which you may have read.

Cory Doctorow: I have not.

Ravi Naik: I would say it is definitely worth reading. I highly recommend it to everyone listening, as well as to you, Cory. Dr. Michael Veale -- I was on a session with him earlier today, with others on the call -- pulled up a quote from that book that really encapsulates a few issues about what you're talking about. The quote is that "it interprets regulatory resistance as damage, and routes around it." How do you see that perspective on competition, particularly...

Cory Doctorow: Well, I love to hear John Gilmore paraphrased that way. That's a paraphrase of Gilmore's "the internet interprets censorship as damage, and routes around it." It's quite a hoary and well-loved and often-used phrase in technological lore. I disagree, a little. I think that dominant firms, especially, have a complicated relationship with regulation. In terms of risk limitation, regulation is both a positive and a negative, because regulation can turn into a rule that says you have to look exactly like the dominant firm to enter the market. And regulation can create stakeholders for your dominance.

Back to AT&T and the Bell System. AT&T's history with regulators starts with this firm that grew very quickly thanks to Gilded Age finance. It was basically the SoftBank story of its day. Giant amounts of capital pour into AT&T, and it's able to buy up its competitors and engage in capital expenditures that foreclose on other competitors entering the market. But it's not alone -- there are lots of other firms in the country that it's competing with. Smaller ones, particularly New Deal-era cooperatives -- farmers' cooperatives -- that are the successors to the electrification co-ops. The electrification of rural America was undertaken through the New Deal, through local, self-determined cooperatives. The descendants of those cooperatives start telephone cooperatives. Many of them are literally making barbed-wire telephones: they are connecting the telephone system to the barbed-wire fences that run around the ranches, and using the barbed wire itself as the conduit for these party lines. And AT&T starts to just clobber these companies. They won't interconnect with them so they can make long-distance calls (which in this case means calling the next city). They engage in anticompetitive conduct, and so on.

And the regulator steps in, and they impose on AT&T a bunch of interconnection duties. But, in exchange, they forestall any kind of breakup of AT&T. So they say, "okay, you have to show some forbearance in respect of these smaller competitors. But we are not going to remedy the obvious defect of allowing you to grow as big as you are -- which is that you turned into this horrible bully that has already wiped out so many of the competitors that we're trying to protect. We're not going to break you up. We're not going to limit your lines of business, and so on."
And AT&T periodically just gets too big for its britches. Through its Western Electric division, it starts to become a real risk to the electronics industry, particularly to IBM. And so the regulator steps in again and says, "okay, you can't be a computer company either." And AT&T is like, "fine, we're going to keep our labs." This is why, when AT&T invented Unix, they never commercialized it. They just let everyone else make Unix, because AT&T couldn't compete. It couldn't enter the market. I'm sure there were people within AT&T whose fiefdoms were directly impacted by this -- the person who was in charge of destroying rural telephone co-ops, or of commercializing Unix, was sad that their bonus wasn't as big as it could have been that year, because regulators were leaning on them. But I think, as a firm, AT&T looked at the fact that they were within a whisker of being broken up in 1956, and that as late as 1982 they were still intact. And the only reason for that is that they were a regulated monopoly that had been given these duties, and the quid pro quo was this protection. And they said, "well, it's a small price to pay."

Ravi Naik: That's a really interesting way to think about this, and to think particularly about the governance of this space, and the dynamic we have between ourselves and tech companies. But a lot of your focus is on the monopolies, and on structures using antitrust law -- a focus on those that can be seen at the top of the skyline, the big ones playing this game. There are a bunch of smaller companies that sit in this ecosystem, and sit in this chain. And there's a significant amount of abuse amongst the smaller companies. Abuse of personal data, abuse of the market -- particularly in what I think can generously be called the toxic data swamp that is ad tech: the companies that sit in that ecosystem just to try to take data to profile us, and then use those profiles. You have companies like Equifax and things like that in that system as well. But it's the smaller companies that cause a lot of concern, particularly in my work; you see that some of these companies abuse individuals, and it's so hard to hold those companies to account. It's that kind of dynamic.

Cory Doctorow: Yeah. I completely agree that when the ad tech side of Big Tech says, "We are the responsible actors," they're not kidding. They're playing an iterated game! They have executives whose addresses are a matter of public record. They're not like firms owned by holding companies in New Zealand, owned by Scottish trusts, owned by Cayman trusts. They're actual people whose faces are in the newspaper, and a regulator knows how to lay hands on them if they need to. And they want to be around next year and the year after. They can't just shut down and reopen again tomorrow if they get in trouble. So they're right when they say, "If you think we're bad, take a look at the bottom-feeders selling remnant ad inventory, or feeding the remnant ad inventory -- this scraping-the-barrel market." Completely right.

But then let's look at the political economy of the GDPR. You have this idea that the conduct of the ad tech industry is harmful and toxic. But rather than prohibiting it, we just create a bunch of conditions that turn into a capex wall. The compliance burden associated with operating an ad tech company is now so high that only Google and Facebook and a couple of other companies can do it. If you look at the European ad tech market in the wake of the GDPR, it's just Google and Facebook.
The entire European ad tech sector -- who were the worst actors in Europe -- has been wiped out. But Facebook and Google go on abusing our data like crazy! And why was the regulator not able to say, "this conduct is offensive and dangerous, therefore we prohibit it," as opposed to "therefore we add this compliance burden and these regulatory frameworks for consent..."? I mean, whoever consented to being followed around on the internet? Saying, "Oh, you can follow people around on the internet, but only if they give you their active, continuous, blah-blah-blah consent" is just an invitation to figure out how to trick people into doing that. As opposed to saying, "don't follow people around on the internet. That's creepy. Stop it." And the reason we couldn't say "don't follow people around, that's creepy, stop it" is that the monopoly rents extracted by Google and Facebook leave a sufficient surplus that they can lobby to maintain their position. They can hire the former Deputy PM of the UK on a rumored 4 million euros a year to brief for them in the halls of power.

And the only way, in terms of political economy, to curb this negative conduct is to discipline the firms, either by breaking them up or by making them face competition. And I'm all for regulating them in ways that do that. But the illusion that if we just leave the big firms in place, they won't abuse us -- it's just that. It's an illusion. And again -- back to AT&T. Who led the US mass domestic surveillance effort? AT&T. Without them, it never would have happened. They developed the lawful interception tools, they installed them, they developed the best practices. They're the ones who had people cycling in and out of the intelligence agencies that could do it. It was their whistleblower, Mark Klein, who walked into my office in 2005 on Shotwell Street in San Francisco, and said, "I'm a former AT&T engineer. My boss made me build a secret room in our Folsom Street switching center and put a beam splitter on AT&T's fiber backbone, so they could intercept the entire Internet's traffic." Big firms have this huge regulatory advantage, and unless we can actually strike at the conduct itself, as opposed to conditioning the conduct, those big firms will always have an advantage, and they will multiply that advantage by using the conditions to exclude potential competitors from their market and increase their monopoly rents.

Ravi Naik: Very interesting. Very interesting. I mean, I'm sure I could continue this conversation with you for hours. We've got a stream of questions coming in. I'm going to...

Cory Doctorow: I see some good ones.

Ravi Naik: ...take the host's prerogative...

Cory Doctorow: Right.

Ravi Naik: ...to just ask one more question and pick you up on something you said. So you said the only way to change this dynamic is through competition, or... antitrust measures. Let's just look at ad tech again, if you can. So, in my capacity as a human rights practitioner, we've brought the leading regulatory complaints across Europe about the way real-time bidding and ad tech operate. That was based entirely on the backbone of human rights principles, and the regulators in Europe, and particularly the UK, said, "This activity is unlawful. It cannot be sustained in the way it currently operates." Unfortunately, the regulator has not done much about it since. And I completely agree there's a problem with the way our regulators operate, and we've been thinking about ways to hold the regulator to account for not holding companies to account.
But there is a strong dynamic. Google changed the way they use real-time bidding and the ad tech taxonomies as a result of our work. And there still are huge numbers -- thousands -- of companies in Europe, working in the ad tech ecosystem, that will have to get to this level. But really it's the big companies who need to change the protocols the smaller companies operate on. So we're using human rights principles to change that market dynamic, and I'd love to hear your views on the human rights framework. Particularly, in your story -- or the story in your book -- you say that the surveillance capitalism thesis rejects data rights lawyers and activists as deckchair-rearrangers, which is a criticism I feel pretty strongly against, so can you share some of your views on that?

Cory Doctorow: Yeah, well, I was specifically referring to the sort of mind control ray hypothesis. There is this idea among leading theorists of surveillance capitalism that, because we literally have no free will when machine learning turns its all-seeing eye on us, anything that tries to use privacy frameworks, or competition frameworks, or human rights frameworks is always going to fall short, because none of that stuff works against mind control rays. All that stuff assumes that the parties involved have free will -- and we lose our free will, to a greater or lesser extent, when machine learning is turned to bending our choices. To repeat: I think that's an overstated, over-hypothesized bit of... credulousness. It's taking Google's sales brochures about how effectively it'll sell your fidget spinners as peer-reviewed literature, instead of as mere marketing puffery.

In terms of impact litigation, and using human rights frameworks and other legal frameworks as leverage points to bring down these big firms, I think that's right. I think that having a constitutional tradition -- the value of a constitutional tradition, especially one with a lot of jurisprudence (which, alas, is not characteristic of the European constitutional tradition, and so there are a lot of lacunae in how these rules should be applied) -- is bedrock. It's a thing you can stand on, so that you've got sound footing that you can use in these kinds of judo moves where you throw your much larger opponent over your shoulder, because you've got a sound footing to stand on.

One of the reasons I moved back to the US -- I'm a Canadian-British citizen who's now living in the US -- is that the extremely well-litigated Constitution, with its wealth of jurisprudence, provides a lot of leverage for smaller activist organizations. You see this in the response to Trump: he keeps having the courts turn over what he's done (to a greater or lesser extent -- some of those victories have been symbolic and some have been real). And those victories have been delivered because we know what the rules say. And we have judges -- even the most ideological of our judges -- who are in some kind of normative framework, or some kind of socialized environment, in which they don't want to be seen as disrespecting the rules themselves: the rules of the institution that they draw their personal worth from. And its rules become more important to them, at least sometimes, than their ideology. And I think we see that in the UK too.
And, by contrast, in Canada, the compromise that got us our Constitution was something called the Notwithstanding Clause, where the premier of a province can sign unconstitutional legislation, and if they add at the top of the legislation "notwithstanding the fundamental rights of Canadians to [some constitutional right]," they can then go on to violate our rights! The Premier of Ontario -- this Trumpalike named Doug Ford -- his first act on taking office was to unconstitutionally redistrict the City of Toronto to punish his political rival, the mayor of Toronto. And he lost a constitutional challenge, and he rewrote the bill to say "notwithstanding the fundamental rights of Canadians to fair elections, I'm going ahead and doing this," and he gave a press conference that said, "I'm going to do this for every bill I introduce if necessary. My constituents didn't elect me to uphold the Constitution; they elected me to enact my program." And when I became a British citizen, I had to study the Unwritten Constitution, and I concluded it was worth the paper it wasn't printed on. I think we've learned that. The jumble that is the Unwritten Constitution is hugely problematic.

That all said, where the rubber meets the road is regulation. Most of us will never deal with a court. But we have our daily lives impacted by regulatory frameworks, every day. The reason you don't drop dead of food poisoning when you buy a sandwich from Pret is regulation. And regulation in the presence of concentrated power is much less effective. Regulation always turns on questions that don't have clear answers, where you have to adjudicate competing claims from experts. And regulation relies in part on a disorganized, pluralistic industry that cannot solve the collective action problem to converge on a single answer that says, "Microplastics are good for you. Your children's bedroom should have DDT-impregnated wallpaper." You need defectors. You need substantial players in the industry who say, "spying on people is bad." And we have that in the form of, say, DuckDuckGo on the internet side.

But the problem is that the solution that regulators have come to in other domains, like the terror regulation or the copyright directive, is in conflict with privacy. You cannot satisfy the goals of the terror regulation (which is to identify and remove terrorist content within an hour of notice) without general monitoring, and without a presumption of guilt with no due process -- ex-ante removal of content before any finding of unlawfulness. And that requires that you be able to see everything your users are doing and interdict it. And so the firms that plumped for those rules -- the firms that allowed those rules to come into place -- argued against them, but when the rules were finally contoured and designed, they said, "Alright, let's have them. But please recognize that we can't satisfy the GDPR's rules while we're satisfying the terror regulation's rules." And as between those two, I think the terror regulation is going to win.

Ravi Naik: Interesting. Cory, I could talk to you for hours on this stuff. This is what I do. This idea of a gentle, civilized sort of governing of technology through norms -- it's the most fascinating thing. But I can't take up all of this time. We've got so many questions pouring in.

Cory Doctorow: I see so many of them.

Ravi Naik: Should I just read some of them out to you?

Cory Doctorow: Sure.

Ravi Naik: Or do you want to just answer them yourself?
Cory Doctorow: No, no, you go ahead. I have only been watching the scroll; I haven't been reading them. I'm not that good a multi-tasker.

Ravi Naik: I am told we can stay on for a bit longer if we want to take some of these questions. So the first one we've got here is: "With the weaponization of social media -- governments and their supporters using it to attack and harass critics, activists, human rights advocates, journalists, etc., just like we see in what's happening in the Philippines -- how could citizens fight back?" And that's from Jane Locus.

Cory Doctorow: Yeah, it's a really important question. I think it is a bit like the question that the martial arts student asks of their master, where they say, "what if I find myself in a dark street with no working mobile phone, and all the street lights are out, and I'm a little drunk, and there are four giant angry men with knives? What do I do then?" And the master's answer is, "don't be on the dark street with no working mobile phone, with the lights burned out, no one around, and the four guys with knives."

The concentration of power into this handful of platforms gives state actors a convenient place -- a tractable way -- to both monitor and interdict the conversations of the opposition figures who use those platforms. And probably the best example of that isn't the Philippines, it's Cambodia, because Cambodia illustrates not just how this can be used for rule-breaking harassment, where you have people who violate the Terms of Service by running bots that harass opposition figures, but how rule-following harassment can be an even more devastating attack on opposition figures.

So in 2013, the Cambodian dictator narrowly won an election against a first-of-its-kind, Facebook-organized opposition. And rather than making the kind of dumb-dictator move of outlawing Facebook, he co-opted it, and hired a bunch of Facebook consultants who knew Facebook's extensive moderation rules backwards and forwards. And so he created a system where his trolls would attack their opponents by saying things that were almost-but-not-quite harassment by Facebook's own lights -- and, from the perspective of the person who's being harassed, almost-but-not-quite harassment is indistinguishable from harassment. But in terms of the rulebook that Facebook follows, they're actually different things, and one doesn't get you banned, and the other one does. And then they would lure their opponents into crossing the line over into official, Facebook-defined harassment, and then report the harassment to Facebook and have these opposition figures removed. They would also use Facebook's rule-set, like the real-names rule (which is supposed to prevent harassment), and they would identify which dissidents were speaking under pseudonyms. And they would insist that those dissidents either unmask themselves -- and subject themselves to arrest and torture -- or that Facebook remove them from the discourse.

And so I think that the answer is that aggregating decision-making power into one institution -- into Facebook, or a handful: Facebook and Twitter and a few others -- itself creates an intractable problem, in which states will have this advantage in attacking their opposition figures, and the remedy has to be to weaken Facebook's grip. Now, I'm not saying "jam yesterday, jam tomorrow, no jam today." I think we can weaken Facebook's grip immediately, without having to break up Facebook, through adversarial interoperability. So, I have a person I know who runs a community on Facebook for cancer previvors.
These are people who've been identified as having a cancer gene that puts them at high risk of a potentially lethal, very fast-moving cancer, and who are in a group trying to decide what to do and how to manage this. And they were courted actively by Facebook about a decade ago, when Facebook was trying to bring in medical communities. And they discovered all kinds of problems with Facebook, including a bug that allowed anyone on Facebook to enumerate the full membership of any group on Facebook -- which Facebook did not characterize as a bug; they characterized it as a "feature request," and declined to do anything about it. And so they want to leave, but they can't, because they have a collective action problem. If 10 of them leave, the other 200 are still there, and they can't bring them along.

But you could imagine, if we just changed the rules -- so that Facebook was either required to expose its APIs so they could run an off-Facebook community that could interact with the on-Facebook community, or Facebook was stripped of the legal power to enforce its Terms of Service, anti-circumvention, and the other rules it uses to stop third parties from involuntarily federating a message board with Facebook -- you could have an off-Facebook environment in which these discussions took place, that was nevertheless able to take advantage of Facebook's ability to locate people who have this hard-to-find trait: the trait of being a cancer previvor. And I think that is the intermediate step. Ultimately, the problem isn't just that Mark Zuckerberg is uniquely unsuited to being the arbiter of 2.6 billion people's social lives. (He is uniquely unsuited to that.) The problem is that no one is suited to have that job! While we are waiting to fell the Emperor and found some Republics, we can in the interim create these zones of technological self-determination that coexist with Facebook, and that don't rely on Facebook being solved before we can do anything else.

Ravi Naik: Great answer, Cory.

Cory Doctorow: I like Matt A's question about ComCom here.

Ravi Naik: Okay. I was going to ask the question about the pandemic, but I'll let you choose one.

Cory Doctorow: Sure. Yeah. Matt A says, "Is ComCom just overturning anti-circumvention regulation, or more proactive measures? Could we legislate Facebook to proactively create an API for Mastodon?" So I think that the way to understand the relationship between ComCom -- or adversarial interoperability -- and interoperability more broadly, is that we might have some interoperability mandates. Those would be the floor. We might say, as the Warner bill does in the US Senate, that Facebook is required to expose to its competitors the same APIs that it uses for its own business units to talk to each other, through a layer of data fiduciaries -- through a layer of delegates who are regulated by an expert agency that prohibits them from entering any kind of complementary business, from commercializing the data, and so on. We actually published a paper on this, this morning, in the European context. My colleague Christoph [Schmon] and his colleague Svea [Windwehr] published a paper that I did a Twitter thread on this morning. You can find it if you look through my Twitter, or if you go to EFF.org. And that is the floor on what firms are required to do.
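[Editor's note: a sketch of what that "floor" might look like in practice, under the structure the Warner bill describes -- an incumbent obliged to expose its internal APIs to competitors via regulated data fiduciaries. Everything here (endpoints, field names, the license header) is hypothetical; the point is only that the fiduciary relays the mandated data and nothing else, since it is barred from commercializing what passes through it.]

```python
# Hypothetical mandated-interoperability gateway: a regulated data
# fiduciary fetches group posts from the incumbent's mandated API and
# passes along only a whitelisted subset of fields to an off-platform
# client. All names and endpoints are invented for this sketch.
import requests

INCUMBENT_API = "https://incumbent.example/api/v1/groups"  # hypothetical
ALLOWED_FIELDS = {"author", "body", "posted_at"}  # what the fiduciary may relay

def relay_group_posts(group_id: str, fiduciary_license: str) -> list[dict]:
    """Fetch a group's posts via the mandated API, stripping anything the
    fiduciary isn't licensed to pass on (ad profiles, friend graphs, etc.)."""
    resp = requests.get(
        f"{INCUMBENT_API}/{group_id}/posts",
        headers={"X-Fiduciary-License": fiduciary_license},  # hypothetical
    )
    return [
        {k: v for k, v in post.items() if k in ALLOWED_FIELDS}
        for post in resp.json()
    ]
```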
And so, in the days of AT&T, when it was the Bell System, the first inklings of opening it up were mandates requiring it to interconnect with third-party long-distance services like MCI, which started to build microwave-based long distance instead of wire-based long distance, and so was able to compete with AT&T effectively. And that was the least AT&T was required to do. But something else happened to AT&T along the way, which is that two important court decisions took away its power to decide who could connect things to the Bell System. Facebook had, err-- *laughs* ("Facebook"). AT&T had asserted the absolute right to decide what could be mechanically or electrically coupled to the Bell System. And they used this as a racket. Your Bell phone was this black phone that you had to pay every month to rent, that would cost you like 100 times the cost of the phone over the life of the phone. And you weren't allowed to buy it outright. And no one else was allowed to provide a phone that could connect to it.

And AT&T was so arrogant in enforcing this monopoly that they finally disgusted the courts and had the right taken away. First they went after a company called Hush-A-Phone, that made [*cups hands over mouth*] a plastic cup that fit over your phone receiver, so that when you were talking, people couldn't read your lips. And AT&T argued that the plastic cup on the end of the phone receiver endangered the integrity of the Bell System -- that it would stop them from coordinating emergency response in earthquakes. And a judge said, "you guys are idiots. We're taking away that power." And then they went after another competitor, called Carterfone, that made a radio relay so that ranch hands out on the range could answer the phone with a walkie-talkie when the phone was ringing in the farmhouse. And when AT&T lost the ability to decide who could electrically or mechanically couple devices to the Bell System, you had, on the one hand, these mandated interconnect layers, like the MCI interconnect, but then you had this open space where, provided that you weren't actually breaking a law -- like stealing long-distance service, or running a criminal enterprise -- you could connect anything you could think of to the Bell System. Well, what was the first thing of real significance that people connected to the Bell System? The modem. That got us the whole internet! Carterfone.

This is the way to understand adversarial interoperability and mandated interoperability. Mandated interoperability is the very least a firm is required to do, but firms will still be able to extract monopoly rents and use their dominant position to engage in all manner of chicanery with respect to new market entrants. On top of that, provided you're not violating some other law -- if you're evading their patents or their copyrights or their anti-circumvention or their trade secrecy or their non-compete agreements, or any of this other junk that they have used to build a thicket around their business, to turn it into a wall or a moat around their business, that's lawful -- then you have this future-proofed space in which new entrants can think of new approaches. No one had ever thought of basically singing data into a wire. That's what the modem was: we're going to sing the data in tones that computers can interpret -- you know, that "ran, ran, gedang gedang gedang" sound we all remember from the V.90/V.42bis-whatever days. No one had thought of that in the Bell System. That came from outside. And it gave us the Internet.
That idea came from outside. And it gave us the Internet. And that, I think, is the way to theorize those two different sets of rules.

Ravi Naik: Cory, we've only got five minutes left, so I'll ask the last question. I think it's a great one and good for our times, and I'll add my own interpretation to part of it. You're probably uniquely placed to answer. So the question is, "Has Big Tech had a good or bad pandemic?" And the follow-up I want to add is that tech-solutionism has become a big part of our response to the pandemic. It'd be good to get your views on both of those angles.

Cory Doctorow: Sure. Well, okay, so if you're a monopolist, the pandemic is amazing, because there's going to be so much competition forbearance and lack of scrutiny. When there's a beloved smaller firm that's failing because of the pandemic, the monopolist says, "Oh, we'll buy you up and rescue you"-- see, for example, Google buying Fitbit. Fitbit's like the only successful wearable fitness tracker in the world. And people have been feeding it data like their menstrual cycles and how much they drink. And employers required it as a condition of getting your full medical benefits. And now Google is going to be able to ingest all of that data. And even some Fitbit owners are like, "Well, if the alternative is that Fitbit goes away because of the crisis... I guess?" So they're going to have a lovely, lovely pandemic off the back of that.

And, as you say, bridging into your question about solutionism: we have decided that the completely untested idea of exposure notification is going to be the key to our exit from this. Even though everyone who has managed a graceful exit, like Iceland, says, "Oh yeah, the notification app was mostly a distraction. Maybe it helped a little at the margins." Norway made them delete it, because they couldn't figure out how to make it work.

I mean, this comes back to institutional integrity. If it turned out that exposure notification was a hugely important adjunct to contact tracing, we would still be stuffed, because the level of trust that people have in Big Tech, and in the regulators to rein in Big Tech, and in Big Tech's capacity to rein in regulators-- to not allow regulators to suborn them into conducting mass surveillance, or cracking down on protesters, or what have you-- is so low that if it turned out that this was essential, the compliance would be next to nothing. I mean, that's why they had to destroy the Norwegian data. The compliance was so low that they said, "Look. What you've done is, you've failed to capture any credible potential response to the virus that might arise from collecting data, because so few people have collected it. And nevertheless, those few people have now exposed themselves to an enormous privacy risk. So just throw it all away. Throw it all away."

There are people who've done good work on what a hardware token might look like, if it turns out this works, but it still requires that you trust their expertise. bunnie Huang wrote up his theoretical work for the European Union on what a hardware-only contact tracing unit would look like. And it has lots of really clever flourishes, like using a standard 1000 milliwatt-hour battery. And he's like: you can only turn the radio on so often with this before it runs out of battery.
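Bunnie's battery point is just arithmetic, and it's worth making concrete. Here's a back-of-the-envelope sketch: the 1000 milliwatt-hour capacity is the figure from the talk, but the sleep draw, radio power, and beacon duration below are rough assumptions of mine for illustration, not numbers from his design.

```python
# Back-of-the-envelope for the hardware-token battery budget: a fixed battery
# caps how often the radio can speak, no matter what the firmware says.
# Only BATTERY_MWH comes from the talk; the rest are assumed round numbers.

BATTERY_MWH = 1000.0    # stated battery capacity (milliwatt-hours)
SLEEP_MW = 0.005        # assumed deep-sleep draw (milliwatts)
TX_MW = 30.0            # assumed radio power while transmitting (milliwatts)
TX_SECONDS = 0.01       # assumed duration of one radio beacon (seconds)
LIFETIME_DAYS = 180     # target device lifetime

sleep_budget = SLEEP_MW * 24 * LIFETIME_DAYS       # mWh spent just sleeping
radio_budget = BATTERY_MWH - sleep_budget          # mWh left for the radio
energy_per_beacon = TX_MW * TX_SECONDS / 3600      # mWh per transmission

beacons_total = radio_budget / energy_per_beacon
print(f"~{beacons_total / LIFETIME_DAYS:,.0f} beacons/day for {LIFETIME_DAYS} days")
```

With these made-up numbers you get on the order of tens of thousands of beacons a day-- roughly one every second or two-- and anything chattier eats the battery before the six months are up. That hard physical ceiling is exactly what makes the design auditable.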
Governments might be able to pack more electronics into this if they think you're a dissident, and do different kinds of surveillance, but they can't make a battery do things that a battery can't do. So there's lots of clever stuff built into this thing. But you have to trust him. And then you have to trust the regulator to build the thing that he designed for them. And I think that our low-trust environment has hampered all of that stuff, and I think it will go on hampering it.

I mean, this is why this crisis really matters to me-- why the epistemological crisis, and the corporate concentration that underpins it, are so important right now. Because we're not at the end of our disasters. Climate change tells us that zoonotic and insect-borne epidemics and pandemics are going to come in ever-increasing waves with ever-increasing severity, while the world is literally on fire and drowning. If we don't figure out how to operate in ways that enable us to trust experts, in these highly technical and contested environments, where the stakes could not be higher-- the stakes are literally the future of our species and our planet-- we are so doomed! And it all starts with making institutions accountable, which we cannot do when the firms they're supposed to be regulating have their power concentrated into just a few hands. When everyone in tech fits around one table at Trump Tower.

And if I can finish-- I know we're over time-- I want to finish on a hopeful note. Which is that it's not just tech that's concentrated. Finance, professional wrestling, eyewear, publishing, the record industry, shipping: they're all concentrated into a few hands. And while that might sound like bad news too, the good news is that it means that people who are angry that their favorite pro wrestlers are dying because their medical insurance was taken away, now that there's only one wrestling league, have the same cause as people who are angry that everyone in Big Tech fits around one table, and the same cause as people who are angry because Luxottica (this Italian company) owns every eyewear brand and high-street eyewear vendor you've ever heard of. And before the term "ecology" was coined, there were people who cared about whales or owls or the ozone layer. But those were separate issues. The term "ecology" connected all those issues into a movement. And we are on the verge of this anti-monopolistic, pro-pluralistic movement, that has the power to really radically alter who is on our side. And that, I think, is the thing that gives me the most hope.