AIs, Blade Runner, and Copyright

Vox published an article yesterday about some great work that a researcher is conducting on AI and video interpretation. To put it in terms that I can understand, he programmed an AI to watch Blade Runner and then recreate it. I know the feat is much more complicated and impressive than that, so I recommend you read the article and watch the example videos before reading on.

I won’t summarize more because I want to address the legal ramifications mentioned at the end of the article and in another one published by Techdirt. The issue at hand is how copyright law plays into this project.

While I am for the loosening of copyright restrictions and taking a broad view of fair use, I don’t think that this is as big a legal conundrum as these articles are making it out to be. When all is said and done, this is video synthesis in the same way that a fake guitar sound on a keyboard is audio synthesis. Let’s start there and analogize.

If I take a Led Zeppelin song–let’s say Kashmir because it’s still my favorite–and reprogram the whole thing using a software sequencer (i.e. synthesizer/sampler/workstation) like Reason, I could easily do a recognizable version of the song, however good or bad it turns out. This falls into the category of a cover song and would be covered (hehe) by a mechanical license.

Moving the line a little closer to the Blade Runner project: say I program an AI to “listen” to Kashmir and then do its best, again using Reason, to create a new version of the song. Still a cover, right?

One more step over–I program an AI to take Kashmir, analyze it, then generate that cover using the other resources I have programmed into it (i.e. I program it to create its own synthesizer). How is this not still a cover song covered by a mechanical license?

Is there a video version of a cover song with automatic laws for licensing? Not that I know of.

Is there some other magic because there is an extra step, the AI, between your intent and the result? At this point, I would argue no. This is like setting up a camera with a motion sensor that gets triggered as an animal goes by, not like a monkey selfie.

Should you be found guilty of copyright infringement for generating and sharing such a video? In my opinion, absolutely not. At the very least, the new video is probably protected by fair use.

But, imagine generating full length AI interpretations of movies and selling them. Under our current system of copyright law in the U.S., there is a good argument that this is infringement.

To be clear, I don’t believe that this should be the case, just that it probably is.

One final note of lawyerly arrogance–why does everyone keep quoting the researcher Terence Broad when he expressed what was essentially a legal opinion about whether or not this is infringement? He is entitled to his opinion, like anyone else, but it is not clear what the basis for it is. It’s just a nice quote about how this is new ground in the law.

Thoughts? I am open to changing my mind here, because this is super cool work.

AI Will Be Awful Because We Are

A day after it went online, Microsoft had to pull the plug on its AI chatbot, Tay. The bot began making racist comments and talking about Hitler. Of course, racism was not built into Tay, but because its responses are shaped by conversations with human users, things quickly devolved.

Microsoft is tweaking it (her?*) and will bring Tay back online soon. I imagine the digital lobotomy that they perform will result in the kind of obnoxious default refusals that most voice assistants exhibit.

“Alexa, f*!k you.”
“That’s not a very nice thing to say.”

The tradeoff of this adjustment is that Tay may no longer pick up on the subtler registers of sarcastic conversation. That is probably a good thing. I’d rather have my AI be a bit dumber than be prone to influence from the awful parts of the web.

Which brings me to the greater issue. As long as people are awful – or at least play awful people on the Internet – we increase the odds of creating robots that want to destroy us. Since we have solidly established that the awfulness isn’t going away anytime soon, we either need dumb AI or need to start preparing our tech-free bomb shelters.

* Another day, we can have a conversation about assigning a gender to our bots and how that plays out in the media.

Edit: After posting this, I listened to Motherboard’s great podcast episode, titled “Two Tales of AI.” I highly recommend.