“…we can’t pretend they don’t exist anymore.”

James Bridle (from YouTube transcript):

But the other thing, the thing that really gets to me about this, is that I’m not sure we even really understand how we got to this point. We’ve taken all of this influence, all of these things, and munged them together in a way that no one really intended. And yet, this is also the way that we’re building the entire world.

We’re taking all of this data, a lot of it bad data, a lot of historical data full of prejudice, full of all of our worst impulses of history, and we’re building that into huge data sets and then we’re automating it. And we’re munging it together into things like credit reports, into insurance premiums, into things like predictive policing systems, into sentencing guidelines. This is the way we’re actually constructing the world today out of this data.

And I don’t know what’s worse, that we built a system that seems to be entirely optimized for the absolute worst aspects of human behavior, or that we seem to have done it by accident, without even realizing that we were doing it, because we didn’t really understand the systems that we were building, and we didn’t really understand how to do anything differently with it.

There are a couple of things that really seem to be driving this most fully on YouTube, and the first of those is advertising, which is the monetization of attention without any other real variables at work, any care for the people who are actually developing this content, the centralization of the power, the separation of those things. And I think however you feel about the use of advertising to kind of support stuff, the sight of grown men in diapers rolling around in the sand in the hope that an algorithm that they don’t really understand will give them money for it suggests that this probably isn’t the thing that we should be basing our society and culture upon, or the way in which we should be funding it.

And the other thing that’s kind of the major driver of this is automation, which is the deployment of all of this technology as soon as it arrives, without any kind of oversight, and then once it’s out there, kind of throwing up our hands and going, “Hey, it’s not us, it’s the technology.” Like, “We’re not involved in it.” That’s not really good enough, because this stuff isn’t just algorithmically governed, it’s also algorithmically policed. When YouTube first started to pay attention to this, the first thing they said they’d do about it was that they’d deploy better machine learning algorithms to moderate the content.

Well, machine learning, as any expert in it will tell you, is basically what we’ve started to call software whose workings we don’t really understand. And I think we have enough of that already. We shouldn’t be leaving this stuff up to AI to decide what’s appropriate or not, because we know what happens. It’ll start censoring other things. It’ll start censoring queer content. It’ll start censoring legitimate public speech. What’s allowed in these discourses shouldn’t be something that’s left up to unaccountable systems. It’s part of a discussion all of us should be having.

But I’d leave a reminder that the alternative isn’t very pleasant, either. YouTube also announced recently that they’re going to release a version of their kids’ app that would be entirely moderated by humans. Facebook — Zuckerberg said much the same thing at Congress, when pressed about how they were going to moderate their stuff. He said they’d have humans doing it. And what that really means is, instead of toddlers being the first people to see this stuff, you’re going to have underpaid, precarious contract workers without proper mental health support being damaged by it as well. And I think we can all do quite a lot better than that.

The thought, I think, that brings those two things together for me is agency. By agency, I mean how we know how to act in our own best interests, which is almost impossible to do in these systems that we don’t really fully understand. Inequality of power always leads to violence. And we can see inside these systems that inequality of understanding does the same thing. If there’s one thing that we can do to start to improve these systems, it’s to make them more legible to the people who use them, so that all of us have a common understanding of what’s actually going on here.

The thing I think about most with these systems, though, is that this isn’t, as I hope I’ve explained, really about YouTube. It’s about everything. These issues of accountability and agency, of opacity and complexity, of the violence and exploitation that inherently result from the concentration of power in a few hands — these are much, much larger issues. And they’re issues not just of YouTube and not just of technology in general, and they’re not even new. They’ve been with us for ages.

But we finally built this system, this global system, the internet, that’s actually showing them to us in this extraordinary way, making them undeniable. Technology has this extraordinary capacity to instantiate and continue all of our most extraordinary, often hidden desires and biases, encoding them into the world, but it also writes them down so that we can see them, so that we can’t pretend they don’t exist anymore.

We need to stop thinking about technology as a solution to all of our problems, and start thinking of it as a guide to what those problems actually are, so we can think about them properly and start to address them.
