Imagine being able to generate your research paper in the time it takes to brew your morning coffee. ☕️
Well, thanks to the rise of generative AI such as ChatGPT, this scenario isn't just a hypothetical future. It's practically the present. ✨
But here's the twist — it's not just about what AI can write, but also about who legally owns that writing. Behind the dazzling promise of generative AI lies a legal labyrinth still to be navigated.
Lucky for you, we found someone to throw the gavel at it. 👨🏻⚖️
To navigate this dynamic landscape, we turned to former lawyer and communication consultant Tom Hendrick, who offers an illuminating perspective on the future of AI and scientific writing.
Join us as we delve into the nuanced world of copyright law and explore the intersection of generative AI and scientific publishing.
Before we jump in, Tom reminded us that the answer to any good legal question is ‘it depends’. He even joked, “Take a drink every time I say the answer is it depends”. 🥂
So, as you delve into this blog, we suggest you read every answer with this preface. The content below is intended for informational purposes only and does not constitute legal advice. If you have any specific legal concerns, please consult with a professional legal advisor.
AI and Copyright
First up, let’s define copyright.
Put simply, copyright refers to the legal rights the creator has over an original piece of “copy” or work. 📝 It allows creators to have exclusive control over the use and distribution of their work for an allotted period of time.
But as we venture into the age of AI, the definition of what constitutes an "original piece of work" is evolving.
So what is copyrightable? Is it the idea or is it the words?
Tom Hendrick explained that copyright covers the specific text rather than the idea.
“I think most courts would agree that it's the words.”
But here’s where things get more complicated. The intricacies of copyright law extend beyond simple definitions. Tom elaborated that "the specific text that you've copyrighted has very short little T-rex arms that extend a little bit beyond the text."
So, while your copyright safeguards your distinct expression and indirectly protects the ideas, the reach is restricted — akin to T-rex's infamous tiny arms. 🦖
Thus, as we navigate the fascinating era of AI-generated text, it's vital to understand the limited yet decisive reach of your copyright and that of others.
Who Owns the Copyright for AI-Generated Text?
Well, US courts have ruled that the products of generative AI, whether images, text, or music, cannot be copyrighted. This means AI-generated outputs fall automatically into the public domain and are not protected by copyright. The court's decision was based on the idea that copyright is tied to human creativity, and since AI is not human, its output cannot attract copyright. ⚖️
However, as AI combines existing information to create new content, it may inadvertently mimic an original source, resulting in a “Frankenstein work”. This could risk plagiarism and potential copyright issues.
This is why it is so important to verify AI-generated content for both accuracy and originality.
A cautionary tale of a lawyer who trusted AI too much!
We know that ChatGPT and similar generative AI platforms are often “confidently wrong”, meaning they provide factually incorrect information with misleading confidence. 😬
But what are the consequences?
Well, Tom Hendrick shared a story about lawyer Steven A. Schwartz of the firm Levidow, Levidow & Oberman, who relied too heavily on AI for legal advice.
Schwartz prepared a legal brief for his client's personal injury case, which he then presented confidently in court. The kicker, however, is that the brief was generated using ChatGPT! And to rub salt in the wound, he didn’t check the information properly.
ChatGPT had completely fabricated six different legal cases, creating false quotes and attributing them to REAL judges. The consequence when he was found out? Schwartz was forced to write apology letters to each of the judges he had falsely cited, including the incorrect statements he attributed to them, as well as pay a $5,000 penalty. 💵
This story serves as a caution against placing too much trust in AI without applying critical thinking and verification. 🚧
How to avoid legal consequences when using AI
We are learning quickly that AI isn’t always reliable. ⚠️
Just like you wouldn't leave a lab experiment unattended, AI usage needs careful monitoring and thoughtful guidelines to ensure it is enhancing scientific communication and not undermining it.
The importance of verifying AI-generated information is undeniable. 🔍
The same critical thinking you would apply to a human's work, you should apply to AI-generated content. 💭 Fact check the content, quotes and any sources or references. AI can be a useful tool for speeding up the writing process, but it's a tool that requires human supervision.
Is Modifying AI-Generated Content Enough to Make It Yours?
Paraphrasing has long been considered a way to make content your own.
But when it comes to AI-generated text, Tom notes that modifying the text is likely not sufficient to claim copyright.
Moreover, if you or AI paraphrase copyrighted work, you may still be in danger of what's called “passing off”! 😬 Passing off is a legal term for misrepresenting a product or service as that of another party. It often occurs when a business uses a name, logo, or other distinguishing features that closely resemble those of a well-known brand or organisation.
For example, if someone were to write a book called "The Pretty Good Gatsby" that is substantially similar to F. Scott Fitzgerald's "The Great Gatsby," they could face a passing off claim. This is because the use of a similar title could mislead or deceive the public into thinking that the two books are associated with or endorsed by the original author or publisher.
Academia and education
How will AI affect the education system and academia?
Tom suggests that AI is likely going to reshape our education system. 🎓 In addition to potential copyright infringement issues, there is also academic dishonesty to consider.
In the academic setting, submitting AI-generated content as original work could be seen as academic dishonesty, akin to plagiarism. Even if the text is not a direct copy from another source, it isn't the student's original work or ideas.
Tom also mentions that another critical aspect to consider is the main objective of our teaching. 🤔
On one hand, if the goal is to teach critical thinking, a skill applied after the text has been generated, then the rise of AI could be celebrated as a tool that improves productivity and efficiency, much like the arrival of the typewriter.
However, there's also a risk of sabotaging essential skills like discernment, innovation, creativity, and attention to detail, much like how penmanship declined markedly after the introduction of the typewriter. If over-relied upon, AI may lead to professionals skipping essential steps in their work.
Can I publish my manuscript if I have used ChatGPT or other generative AI?
Using AI to generate scientific manuscripts is a topic of controversy, with varying rules across different scientific journals. A number of scientific journals, including Science, have banned AI-generated text entirely. 🛑
Others, such as Springer Nature, have set specific rules, allowing AI for tasks like proofreading and language enhancement provided its use is disclosed, while some journals have no limitations at all.
Make sure you check your target journal’s publication requirements before submitting.
Should AI regulations be standardised across all scientific journals?
Tom Hendrick acknowledged the potential benefits of uniformity across journals 📚 in terms of administrative ease and ethical use of AI; similar to the advantages of a universal language.
However, he also pointed out that such uniformity could infringe on personal or organisational freedoms. Journal requirements are already highly diverse, with major differences in formatting, manuscript length, referencing style, submission process, and so on.
"Maybe it will just become part of a more complex landscape. Some journals go one way, some journals go the other way.”
Journals will likely want to maintain their rights and freedoms. As such, we may continue to see a diverse landscape in scientific publishing, where some journals strictly require human-generated content while others allow AI assistance.
But this brings us to a new conundrum: distinguishing between AI-written and human-written text. 😬💬
AI-Written vs. Human-Written Text
As AI continues to evolve, it becomes harder and harder to tell the difference between AI-generated and human-written text.
This ambiguity presents a set of unique challenges. How do we avoid mislabeling human work as AI-generated, and vice versa? And what are the implications of such mislabels? 🏷️
Let's delve into these specifics.
What legal ramifications might arise from the inability to distinguish between the two?
Tom discussed that the legal ramifications of not being able to distinguish AI-generated from human-generated text are complex, and that we as a society are still navigating this terrain. For example, if you receive a document, and you can't tell if it's AI-generated or human-written, it complicates matters like copyright.
Essentially, if you can't tell the difference, you're relying on honesty, trusting someone’s word when they say, "I wrote this, not ChatGPT."
We know that AI-generated text, images, and music fall into the public domain and cannot be copyrighted. So being unable to differentiate between AI and human text complicates matters significantly, impacting not just scientific publishing and academia but also extending into numerous other fields and settings. 🔬🎨📄
Is “watermarking” AI-generated content the solution?
The idea of watermarking AI-generated content is an interesting one. 🔖
Adobe Firefly, for example, already watermarks all its outputs as AI-generated. However, this is not a legal requirement; nothing in the law currently mandates it. Watermarking could be seen as a mark of good faith or good business ethics, ensuring that all outputs carry an indelible watermark.
This could help in cases where AI-generated work is used dishonestly, as it leaves a breadcrumb trail that could potentially be traced back. It's not foolproof, but watermarking could help differentiate between human-generated and AI-generated content. Ultimately, though, it still rests on a declaration of honesty.
How do you think these approaches will change in the future?
As the use of AI in scientific publishing continues to evolve, societal attitudes towards it may also shift. The challenge lies in striking the right balance: harnessing the benefits AI offers and managing potential risks and ethical considerations. ⚖️
We can expect that the legal practices and policies currently in place will continue to change to meet the demands of AI!
Want to learn more about the ever-evolving landscape of science communication?
Science and the way we communicate continues to evolve each and every day, even more so with the advent of generative AI.
To learn more about the latest in communication strategies, head over to our FREE mobile science communication magazine: ✨ SWIPE SciComm ✨. From cutting-edge articles to expert interviews, SWIPE SciComm offers a treasure trove of insights for researchers and science communicators alike. 👨🏼🔬
Don't miss out — subscribe today and stay ahead of the curve!
A bit about Tom Hendrick
Tom Hendrick is a former lawyer with over 10 years’ experience. He left his legal career behind to follow his dream and become a communication consultant at TalentAcademy.
As a professional public speaker and trainer, Tom applies evidence-based theories and frameworks to coach and guide individuals on how to communicate most effectively and deliver impactful presentations. ✨