In music, plagiarism is widespread but often hard to identify. Despite the availability of specialized analysis programs, a recent experiment demonstrated that human listeners detect music plagiarism more reliably than algorithms: while test subjects correctly identified 83 percent of the plagiarized pieces presented to them, algorithms achieved only 75 percent accuracy. The scientists therefore recommend keeping humans involved in legal plagiarism proceedings while supplementing their judgment with algorithms.
The music industry frequently witnesses accusations of plagiarism, leading to legal battles. A recent example involves the song “Blurred Lines” by Pharrell Williams and Robin Thicke, who were ordered to pay nearly seven million euros in compensation for allegedly copying Marvin Gaye’s “Got to Give It Up.”
Challenges in Legal Proceedings
However, such plagiarism trials pose challenges. According to Yuchen Yuan from Japan’s Keio University and colleagues, the increasing frequency of legal disputes not only stifles musical creativity but also necessitates significant public funding to resolve these conflicts.
Moreover, human judgment can lead to erroneous conclusions, given the subtle boundary between mere similarity and genuine copying. This raises the question: would it be more cost-effective and impartial to have specialized algorithms rule on music plagiarism instead of relying on music producers and courts? After all, these systems excel at analyzing and comparing vast amounts of data.
Human vs. Machine
To assess the suitability of algorithms like PMI and Musly as judges in the music industry, Yuan and her team pitted them against 51 human listeners. Both algorithms and humans evaluated 40 pairs of songs involved in plagiarism disputes between 1915 and 2018. The task was to determine whether Song B had genuinely copied Song A or if the plagiarism allegations were unfounded.
While algorithms can automatically detect similar or even identical melody snippets, human participants relied mainly on their hearing and intuition—similar to jurors in a court. They were instructed to focus on whether “unique aspects of the plaintiff that are not common to the genre or music in general” were significantly present in the defendant’s song.
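The study does not disclose the internals of PMI or Musly, but the basic idea of automatically scoring how closely two melody snippets match can be illustrated with a minimal sketch. Here, melodies are represented as hypothetical sequences of MIDI pitch numbers, and Python's standard-library `SequenceMatcher` computes the proportion of matching notes; real systems use far richer features (rhythm, intervals, audio similarity).

```python
from difflib import SequenceMatcher

def melody_similarity(melody_a, melody_b):
    """Return the fraction of matching notes between two pitch sequences (0.0 to 1.0)."""
    return SequenceMatcher(None, melody_a, melody_b).ratio()

# Hypothetical melody snippets as MIDI pitch numbers (not taken from any real case).
song_a = [60, 62, 64, 65, 67, 65, 64, 62]
song_b = [60, 62, 64, 65, 67, 67, 64, 62]  # differs from song_a in one note

score = melody_similarity(song_a, song_b)
print(f"similarity: {score:.3f}")
```

A system like this would flag a pair once the score exceeds some threshold, which is precisely why it can find near-identical snippets but, unlike a human juror, cannot weigh whether the shared material is generic to the genre.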
Human Listeners Prevail
Surprisingly, gut feelings can outperform ones and zeros. Human listeners’ decisions aligned with the court’s rulings in 83 percent of cases (33 out of 40 songs), while algorithms achieved only a 75 percent accuracy rate (30 out of 40 songs).
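The reported percentages follow directly from the case counts; the rounding can be checked in one line each:

```python
# Agreement with the court rulings, as reported in the study.
human_accuracy = 33 / 40  # 0.825, reported as 83 percent
algo_accuracy = 30 / 40   # 0.75, i.e. 75 percent

print(human_accuracy, algo_accuracy)
```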
However, this result assumes the correctness of all 40 court decisions, some of which are highly controversial among fans and experts. For instance, the ruling that “Blurred Lines” is a plagiarism of “Got to Give It Up” is disputed. In this specific case, neither the study participants nor the algorithms strongly supported the court’s decision, according to Patrick Savage from the University of Auckland, a colleague of Yuan.
Additionally, the similarity between two songs alone does not necessarily constitute plagiarism. If the accused artist can prove that they could not have known the allegedly plagiarized song at the time of their composition, no copyright infringement occurred. Since algorithms can only detect similarities between songs, they overlook such nuances in their judgment.
Algorithms Remain Useful for the Music Industry
“It’s fair to say that algorithms won’t be taking over any time soon,” summarizes Savage. However, even if AI systems and other computer programs do not assume the role of judges immediately, various other musical applications could benefit from them. For instance, Spotify is already experimenting with a plagiarism risk detector that could help artists automatically identify unintentional similarities with existing works before releasing new songs.
Furthermore, in legal plagiarism proceedings, songs could undergo initial machine scrutiny before jurors and judges form opinions. Their final decisions could then be based not only on intuition but also on objective data.
Source: Transactions of the International Society for Music Information Retrieval, 2023; doi: 10.5334/tismir.151