Again, we remind policymakers that “standard technical measures” are not a silver bullet for anything

from the for-the-umpteenth-time-still-no dept

I’m starting to lose count of how many regulatory proceedings there have been over the past six months or so discussing “standard technical measures” in the context of copyright. Doing policy work in this space is like living in a zombie movie version of “Groundhog Day,” as we keep having to marshal resources to confront this terrible idea that just won’t die.

The bad idea? That there is some silver bullet technological solution that can magically solve online copyright infringement (or any policy problem, really, but for now we’ll focus on how this idea keeps coming back in the context of copyright). Because when policymakers talk about “standard technical measures,” that’s what they mean: that there must be some kind of technical magic that can be forced upon online platforms to miraculously eliminate any arguably illicit content on their systems and services.

It’s an illusion whose roots go back at least to the 1990s, when Congress wrote into the DMCA the requirement that platforms “accommodate[] and [do] not interfere with standard technical measures” if they were to be eligible for its safe harbor protections against potential liability for their users’ infringements. Even then, Congress had no idea what these technologies would look like, and so defined them loosely, as technologies of some sort “used by copyright owners to identify or protect copyrighted works” that (A) “have been developed pursuant to a broad consensus of copyright owners and service providers in an open, fair, voluntary, multi-industry standards process”; (B) “are available to any person on reasonable and nondiscriminatory terms”; and (C) “do not impose substantial costs on service providers or substantial burdens on their systems or networks.” It is a description that even today, a quarter of a century later, matches precisely zero technologies.

Because, as we pointed out in our previous comment in the previous policy study, no technology could meet all these requirements, even just on the fingerprinting front. And, as we pointed out in this comment, in this policy study, even if a tool could accurately identify copyrighted works online, no tool can possibly identify infringement. Infringement is an inherently contextual question, and there is no way to load any kind of technical tool with all the information needed to correctly infer whether a work appearing online is infringing or not. As we explained, it just won’t know (see the sketch after this list):

(a) whether there is a valid copyright in the work (because even if such a tool could receive information directly from Copyright Office records, registration is often presumptively granted, without necessarily verifying whether the work is in fact eligible for copyright at all, or whether the party making the registration is the party entitled to do so);

(b) whether, even if a valid copyright exists, it is a copyright validly asserted by the party on whose behalf the tool is being used to identify the work or works;

(c) whether any copyrighted work appearing online appears pursuant to a valid license (of which the programmer of the tool may not even be aware); or

(d) whether the work appearing online appears as a fair use, which is the most contextual analysis of all and therefore the most impossible to pre-program accurately – unless, of course, the tool is simply programmed to presume that it isn’t.
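To make the point concrete, here is a minimal, purely illustrative Python sketch of why this is so. Every name in it is hypothetical (it is not the API of any real matching system, and we make no claim about how any actual tool is built): even a perfect fingerprint match can only report resemblance to a reference work, because none of the questions above are properties of the matched content itself.

```python
# A hypothetical sketch of a fingerprint-matching pipeline, illustrating why
# a "standard technical measure" can flag resemblance but cannot determine
# infringement. All names here are invented for illustration.
from dataclasses import dataclass
from enum import Enum


class Verdict(Enum):
    NO_MATCH = "content does not match any reference work"
    MATCH_ONLY = "content matches a reference work; legality unknowable"


@dataclass
class MatchResult:
    reference_work_id: str
    similarity: float  # 0.0-1.0, from whatever fingerprinting scheme is used


def assess(match: MatchResult, threshold: float = 0.9) -> Verdict:
    if match.similarity < threshold:
        return Verdict.NO_MATCH

    # Even a confident match cannot answer any of the legal questions:
    #   (a) Is the reference work validly copyrighted at all?
    #       (Registration is presumptively granted, not proof of eligibility.)
    #   (b) Does the party running this tool actually hold that copyright?
    #   (c) Is the upload covered by a license the tool's operator
    #       doesn't know about?
    #   (d) Is the use a fair use? That analysis is contextual and cannot
    #       be pre-programmed.
    # None of these facts are recoverable from the matched bytes, so the
    # honest output stops at "match," never "infringement."
    return Verdict.MATCH_ONLY


if __name__ == "__main__":
    print(assess(MatchResult(reference_work_id="work-123", similarity=0.97)))
```

The design point is that the honest ceiling for any such tool is “match found,” and a system that converts that output into “infringement found” is simply encoding the presumption described in (d).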

Because the problem with presuming that fair use isn’t fair use, or that a non-infringing work is infringing at all, is that proponents of these tools don’t just want to be able to deploy them to say, “oh look, here’s some potentially infringing content.” They want alerts from these tools to be treated as definitive findings of infringement that force platforms to do something about them. And the only response that will satisfy these proponents is (at a minimum) removal of that content (if not also removal of the user, or even more) if platforms are to have any hope of retaining their safe harbor. And they want that removal to happen regardless of whether the material is actually infringing, without any proper adjudication of the question ever taking place.

We already see the problem of platforms being forced to treat every allegation of infringement as presumptively valid, as an uncontrollable flood of takedown notices continues to drive all sorts of expression offline that is in fact lawful. What these inherently flawed technologies would do is turn that flood into an even bigger tsunami, as platforms are forced to credit every claim the tools automatically generate whenever they find an instance of a work, regardless of how inaccurate the resulting finding of infringement may be.

And this kind of legally compelled censorship, forcing expression to be deleted without there ever being a judgment that the expression is actually unlawful, deeply offends the First Amendment, as well as copyright law itself. After all, copyright is about encouraging new creative expression (and public access to it). But forcing platforms to respond to systems like this would be tantamount to deleting that expression, and an utterly unnecessary thing for copyright law to demand, be it in its current form under the DMCA or in any of the equally dangerous new updates on offer. And it’s a problem that will only get worse as long as anyone thinks these technologies are some kind of silver bullet for any kind of problem.

Filed Under: copyright, copyright office, dmca, standard technical measures, stm
