Videoage International October 2023

(Continued from Page 14)

…photographs of unsuspecting social media users and depict them in embarrassing situations by just typing a description of the intended outcome into a chatbot.

What is particularly alarming about AI-generated images is not just that they can be edited so easily (which is no different from what people can already do with tools such as Photoshop), but that it can be done instantaneously and on a large scale.

The danger also extends to AI-generated audio. For example, AI can generate a human voice that mimics a specific individual for fraudulent purposes. In one widely reported case, criminals used AI to impersonate a CEO's voice and obtain a fraudulent transfer of a large sum of money.

Generative AI tools can also increase the effectiveness and speed of misinformation campaigns. Millions of bots that appear human can be deployed instantly to push specific agendas, with customized messages tailored to users' profiles via tweets, posts, online conversations, emails, and more: a machine gun of deception and propaganda. Even greater damage can be done by using the technology to influence elections and cause political instability through tools such as AI-generated social media posts or deepfake campaigns.

In a world where fake texts, photos, sounds, and videos can be mass-produced so easily, quickly, and cheaply, people will not only be exposed more frequently to highly realistic fake information, but will also be less inclined to trust any type of news, even legitimate news. This trend will likely result in an even more polarized society and pose serious risks to the foundations of our democracy.

Because the generation of content is always based on existing text, sounds, or images, there are serious concerns about potential copyright infringement in the material an AI model generates, either because it is based on content owned by others or because it duplicates other AI-generated content. Since most models are unable to cite the sources used to create their output, situations in which copyright owners can be identified as such are highly unlikely. But assuming they can be, what are the legal implications of generating content based on copyrighted inputs? Will the model's outputs be granted copyright protection as well? If so, who exactly will own those rights?

The lack of citation is an issue not only from a copyright perspective; it is also concerning because it leaves users unable to analyze the source of the information being generated. If users see only a customized summary, they have no opportunity to form their own interpretation of what the model has manipulated.

*Matteo Di Michele is a global technology and operations executive with 20 years of international experience across a variety of functions such as law, supply chain management, environmental sustainability, and artificial intelligence. He is also the author of Artificial Intelligence: Ethics, Risks, and Opportunities, currently number two on Amazon's Hot New Releases chart.