Selling Access to Content Libraries to Train AI: A New “Window” or a Legal Minefield?

By Anna Beke-Martos*

By now it is common knowledge that genAI systems are capable of creating new content, including text, images, and yes, audiovisual content, too. However, genAI systems are only as good as the data they are trained on. In order for a genAI system to produce high-quality content, it needs to absorb huge amounts of training data. Consequently, most tech companies striving to develop genAI solutions capable of producing audiovisual content may well be interested in striking deals with production/distribution companies. For such enterprises, the libraries of larger production companies are a proverbial gold mine.

While many in the motion picture industry remain critical of such agreements, citing fears of mid-to-long-term job losses, potentially diminishing content quality, and a general decline in the entire sector’s value, some producers may be tempted to regard these deals as a new window of opportunity, a new potential revenue stream. Considering the insatiable hunger of AI developers for training data, it may indeed seem like an unmissable opportunity for any producer to rekindle long-forgotten catalogs and make a little (or not so little) extra profit. The vast amount of material that has remained on the cutting room floor may appear particularly enticing to exploit in this manner. After all, is it not a sound financial decision to create value by selling access to otherwise arguably worthless content to tech companies that are willing to pay for it?

The ethical, social, and long-term financial repercussions of such deals have all been addressed on various platforms. However, the biggest question mark must be placed next to the most fundamental legal question: Do producers truly hold the rights necessary to allow them to exploit their libraries in this manner? Is it legally possible that such a new revenue stream can be opened up at practically no additional cost? And are production companies really protected by the contracts they entered into five, 10, sometimes 40 years ago? Opinions vary, depending on the language of each contract and on which country’s laws would apply to any potential dispute.

In the U.S., there have already been several lawsuits in which authors of texts or images claimed that the training of AI systems on their writings or pictures constituted copyright infringement. Yet in those cases the defendants were tech companies, which had no contractual relationships with the authors who had sued them. Consequently, their defense relied on the “fair use” doctrine rather than on contractually acquired rights. Furthermore, the question of just how much these rights might be worth remained unanswered, not least because the tech companies’ direct commercial advantage from the (alleged) infringing use remained nebulous at best and completely speculative at worst.

If, however, a production company were to license its library for the purpose of training an AI system, the legal situation would be rather different. First, the direct commercial value would be much easier to calculate. Second, the tech company acquiring this access would most likely demand a contractual warranty that the production company has cleared the rights to the content. Consequently, it would be up to the production company to defend any claims brought by screenwriters, directors, actors, composers, designers, etc., who may argue that since AI-related rights did not exist at the time they made their contracts with the production companies, the production companies could not have, and indeed did not, acquire the right to exploit their contributions for the purpose of training AI systems. And, the creatives’ line of argument would continue, if the production companies have not acquired these rights, they cannot exploit the content by granting tech companies access to it.

Despite this risk, most U.S. production companies appear satisfied that if their contracts granted them all rights to exploit the content in “current and future windows,” this can be interpreted to include the right to allow access for the purpose of training AIs. Whether this assumption is correct is debatable.

First, a “window” in the film and TV industry refers to a time period. However, the use of content for the purpose of AI training is not a question of when the content is used, but rather a new method of use. The copyright laws of many countries (especially in Europe) require contracts between authors and users to specify the territory, duration, method, and quantity of the use the author authorizes. Where the contract does not specify one or more of these factors, they are to be interpreted narrowly, so that the author retains the broadest scope of rights. It is therefore questionable whether a contract granting a production company the right to use the content “for current and future windows” can indeed be interpreted to include the right to license the content for the purpose of training an AI system.

Second, many countries’ laws protect authors by prohibiting contractual provisions whereby authors license their works for future methods of use that are unknown at the time the contract is made. However, if the novelty of a method of use lies merely in its enhanced efficiency, such uses may nonetheless be licensed.

Third, in most countries, the general principle is that contracts must be interpreted in accordance with the ordinary meaning of their terms, in light of how the contracting parties most likely understood them at the time the contract was made. Therefore, in case of a dispute, much would depend on the language of each individual contract and the circumstances in which it was made.

Although all of the above factors indicate that production companies should act with extreme caution when examining historic contracts to see whether they may be interpreted to include these new rights, it is worth adding that lawmakers in Europe have a strong desire to support the proliferation of AI tools. As part of this policy decision, they have required all E.U. member states to allow a fair use exception in their copyright legislation for text- and data-mining. However, there are several restrictions, one of them being that the content must be “lawfully accessible.” So while the policy in Europe may, in principle, support the data hunger of AI developers, if production companies wish to strike deals, they may need to return to the negotiating table with the creatives or, at the very least, establish a scheme of remuneration for all of those people whose creative ideas and expressions will be sold to tech machines... the same ones who may one day replace them.

*Anna Beke-Martos is a Hungarian attorney. The thoughts expressed in the above article are the author’s personal views and do not constitute legal advice.
Any parallels between any actual cases and the hypotheticals discussed above are incidental.