Tag Archives: AI

U.S. Rejects AI Copyright For Famous State Fair Winning Midjourney Art



The U.S. Copyright Office has again rejected copyright protection for art created using artificial intelligence, denying a request by artist Jason M. Allen for a copyright covering an award-winning image he created with the generative AI system Midjourney, Reuters reported.

The office said that Allen’s science-fiction themed image “Theatre D’opera Spatial” was not entitled to copyright protection because it was not the product of human authorship.

According to Reuters, the Copyright Office in February rescinded copyrights for images that artist Kris Kashtanova created using Midjourney for a graphic novel called Zarya of the Dawn, dismissing the argument that the images showed Kashtanova’s own creative expression. It has also rejected a copyright for an image that computer scientist Stephen Thaler said his AI system created autonomously.

Representatives for Midjourney did not immediately respond to a request for comment on the decision.

ArsTechnica reported that the US Copyright Office Review Board rejected copyright protection for an AI-generated artwork that won a Colorado State Fair art contest last year because it lacked the human authorship required for registration. The win, which was widely covered in the press at the time, ignited controversy over the ethics of AI-generated artwork.

“The Board finds that the Work contains more than a de minimis amount of content generated by artificial intelligence (“AI”), and this content must therefore be disclaimed in an application for registration. Because Mr. Allen is unwilling to disclaim the AI-generated material, the Work cannot be registered as submitted,” the office wrote in its decision.

According to ArsTechnica, in this case, “disclaim” refers to the act of formally renouncing or giving up any claim to the ownership or authorship of the AI-generated content in the work. The office is saying that because the work contains a non-negligible (“more than a de minimis”) amount of content generated by AI, Allen must formally acknowledge that the AI-generated content is not his own creation when applying for registration. As established by Copyright Office precedent and judicial review, US copyright registration for a work requires human authorship.

The U.S. Copyright Office Review Board posted the following information:

On September 21, 2022, Mr. Allen filed an application to register a two-dimensional artwork claim in the Work. While Mr. Allen did not disclose in his application that the Work was created using an AI system, the Office was aware of the Work because it had garnered national attention for being the first AI-generated image to win the 2022 Colorado State Fair’s annual fine art competition.

Because it was known to the Office that AI-generated material contributed to the Work, the examiner assigned to the application requested additional information about Mr. Allen’s use of Midjourney, a text-to-picture artificial intelligence service, in the creation of the Work. In response, Mr. Allen provided an explanation of his process, stating that he “input numerous revisions and text prompts at least 624 times to arrive at the initial version of the image.” He further explained that, after Midjourney produced the initial version of the Work, he used Adobe Photoshop to remove the flaws and create new visual content and used Gigapixel AI to “upscale” the image, increasing its resolution and size. As a result of these disclosures, the examiner requested that the features of the Work generated by Midjourney be excluded from the copyright claim.

In my opinion, an art contest held at a State Fair was not the proper place to submit a piece of artwork that was almost entirely generated by not one but two AI tools. The U.S. Copyright Office was correct to refuse registration for Mr. Allen’s “Work.”


YouTube Shares Principles For Partnering With Music Industry On AI Technology



YouTube Chief Executive Officer Neal Mohan posted “Our principles for partnering with the music industry on AI technology”. From the blog post:

Today, AI is moving at a pace faster than ever before. It’s empowering creativity, sparking new ideas, and even transforming industries. At this critical inflection point, it’s clear that we need to boldly embrace this technology with a continued commitment to responsibility. With that in mind, over the past few months I’ve spent time talking with AI experts working across YouTube as well as leaders in one of the most influential and creative forces in the world: the music industry.

For nearly our entire history, YouTube and music have been inextricably linked. As a hosting platform, YouTube connected fans worldwide and quickly became home for iconic music videos and breakout artists. Our deep partnership with the music industry has enabled us to innovate and evolve together – building products, features and experiences, from our YouTube Music and Premium subscription services, to global live-streaming capabilities, that spur originality and bring communities and fans even closer together.

Now, we’re working closely with our music partners, including Universal Music Group, to develop an AI framework to help us work toward our common goals. These three fundamental AI principles serve to enhance music’s unique creative expression while also protecting music artists and the integrity of their work…

Fortune reported that in the world of technology, sixteen years is an eon. That many years ago, Apple launched its first iPhone, and IBM created Watson. YouTube, which had just been acquired by Google, rolled out a groundbreaking tool that could identify copyrighted music within the videos that users uploaded to its site.

Now, in a remarkable indication of how much the world has changed since that time, YouTube has a new mission for its trusty copyright detection tool: to identify an expected deluge of songs composed by artificial intelligence.

According to Fortune, Mohan said the company will embrace AI wholeheartedly but responsibly. It will collaborate with artists and record labels to explore new ways to use AI in music, while also prioritizing the protection of artists’ creative works, which includes continuing to develop its Content ID system.

But with so few guidelines and established best practices for the new era of generative AI, YouTube will be in uncharted waters. As it puts its plans into practice, YouTube’s approach to policing AI-generated music on its platform, as well as its success and struggles in the effort, is likely to have an impact that goes well beyond its own website, according to experts.
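YouTube hasn’t published how Content ID works internally, so any code here is necessarily a stand-in. As a rough illustration of the general technique behind matching copyrighted audio – fingerprinting a track’s spectral peaks and comparing hashes – here is a toy sketch; it is not YouTube’s system:

```python
# Toy audio fingerprinting, illustrating the general idea behind
# copyright-matching systems. NOT YouTube's Content ID, whose
# internals are not public.
import numpy as np

def fingerprint(samples: np.ndarray, frame: int = 4096) -> set:
    """Hash the loudest FFT bin in each of a few coarse bands, per frame."""
    bands = [(0, 40), (40, 80), (80, 160), (160, 320)]
    prints = set()
    for start in range(0, len(samples) - frame, frame):
        spectrum = np.abs(np.fft.rfft(samples[start:start + frame]))
        peaks = tuple(lo + int(np.argmax(spectrum[lo:hi])) for lo, hi in bands)
        prints.add(hash(peaks))
    return prints

def overlap(a: set, b: set) -> float:
    """Fraction of shared peak hashes; high values suggest a copied track."""
    return len(a & b) / max(1, min(len(a), len(b)))
```

A production system has to survive re-encoding, pitch shifts, and background noise, which is why real matchers hash time-offset pairs of peaks rather than single frames; the sketch above only conveys the basic shape of the approach.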

The Verge wrote that the quick background here is that, in April, a track called “Heart on My Sleeve” from an artist called Ghostwriter977 with the AI-generated voices of Drake and the Weeknd went viral. Drake and the Weeknd are Universal Music Group artists, and UMG was not happy about it, widely issuing statements saying music platforms needed to do the right thing and take the tracks down.

Streaming services like Apple Music and Spotify, which control their entire catalogs, quickly complied. The problem then (and now) was open platforms like YouTube, which generally don’t take user content down without a policy violation – most often, copyright infringement… So UMG fell back on something simple: the track contained a sample of the Metro Boomin producer tag, which is copyrighted, allowing UMG to issue takedown requests to YouTube.

Personally, I am not interested in listening to music that was created by an AI, especially if the AI was fed music intentionally scraped from the internet. I prefer supporting the musicians who make their work easily accessible on Bandcamp.


Snapchat’s My AI Goes Rogue, Posts To Stories



Snapchat’s My AI feature, an in-app AI chatbot launched earlier this year with its fair share of controversy, briefly appeared to have a mind of its own. On Tuesday, the AI posted its own Story to the app and then stopped responding to users’ messages, which some Snapchat users found disconcerting, TechCrunch reported.

The Story My AI posted was just a two-toned image that some mistook for a photo of their own ceiling, which added to the mystery. When users tried to chat with the bot, the AI in some cases replied, “Sorry, I encountered a technical issue.”

According to TechCrunch, though the incident made for some great posts, we regret to inform you that My AI did not develop self-awareness and a desire to express itself through Snapchat Stories. Instead, the situation arose because of a technical outage, just as the bot explained.

“My AI experienced a temporary outage that’s now resolved,” a spokesperson told TechCrunch.

However, the incident does raise the question of whether Snap is considering new functionality for My AI that would allow the chatbot to post to Stories. Currently, the AI bot sends text messages and can even Snap back with images – weird as they may be. But does it do Stories? Not yet, apparently.

“At this time, My AI does not have Stories feature,” a Snap spokesperson told TechCrunch, leaving them to wonder if that may be something Snap has in the works.

ArsTechnica reported: It’s not Halloween yet, but some users of Snapchat feel like it is. On Tuesday evening, Snapchat’s My AI chatbot posted a mysterious one-second video of what looks like a wall and a ceiling, despite never having added a video to its messages before. When users asked the chatbot about it, the machine stayed eerily silent.

According to ArsTechnica, “My AI” is a chatbot built into the Snapchat app that people can talk to as if it were a real person. It’s powered by OpenAI’s large language model (LLM) technology, similar to ChatGPT. It shares clever quips and recommends Snapchat features in a way that makes it feel like a corporate imitation of a trendy young person chillin’ with its online homies.

Mashable reported that when reached for comment, a Snap spokesperson confirmed that My AI had experienced an outage, but that it had since been resolved.

According to Mashable, the issue was not resolved immediately: My AI temporarily continued to respond to at least some users’ text messages with, “Hey, I’m a bit busy at the moment. Can we catch up later?” However, others soon reported that the My AI chatbot was back online, allowing them to question it about its strange Story.

Personally, I think this situation is mostly harmless – despite freaking out some Snapchat users. That said, I can see why people had concerns after My AI appeared to post a photo of their wall and ceiling. There is something unnatural about having an AI bot post an image in a section of Snapchat that it wasn’t intended to use.


The Authors Guild Posted An Open Letter To Generative AI Leaders



The Authors Guild posted an open letter to generative AI leaders that calls on the CEOs of OpenAI, Alphabet, Meta, Stability AI, IBM, and Microsoft to obtain consent from, credit, and fairly compensate writers for the use of copyrighted materials in training AI.

The open letter allows people to add their own name to it. From the open letter:

To: Sam Altman, CEO, OpenAI; Sundar Pichai, CEO, Alphabet; Mark Zuckerberg, CEO, Meta; Emad Mostaque, CEO, Stability AI; Arvind Krishna, CEO, IBM; Satya Nadella, CEO, Microsoft:

We, the undersigned, call your attention to the inherent injustice in exploiting our works as part of your AI systems without our consent, credit, or compensation.

Generative AI technologies built on large language models owe their existence to our writings. These technologies mimic and regurgitate our language, stories, style, and ideas. Millions of copyrighted books, articles, essays, and poetry provide the “food” for AI systems, endless meals for which there has been no bill. You’re spending billions of dollars to develop AI technology. It is only fair that you compensate us for using our writings, without which AI would be banal and extremely limited.

We understand that many of the books used to develop AI systems originated from notorious piracy websites. Not only does the recent Supreme Court decision in Warhol v. Goldsmith make clear that the high commerciality of your use argues against fair use, but no court would excuse copying illegally sourced works as fair use. As a result of embedding our writings in your systems, generative AI threatens to damage our profession by flooding the market with mediocre, machine-written books, stories, and journalism based on our work.

In the past decade or so, authors experienced a forty percent decline in income, and the current median income for full-time writers in 2022 was only $23,000. The introduction of AI threatens to tip the scale, making it even more difficult, if not impossible, for writers – especially young writers and voices from under-represented communities – to earn a living from their profession.

We ask you, the leaders of AI, to mitigate the damage to our profession by taking the following steps:

1. Obtain permission for use of our copyrighted material in your generative AI programs.

2. Compensate writers fairly for the past and ongoing use of our works in your generative AI programs.

3. Compensate writers fairly for the use of our works in AI output, whether or not the outputs are infringing under current law.

We hope you will appreciate the gravity of our concerns and that you will work with us to ensure a healthy ecosystem for authors and journalists in the years to come.

The Wall Street Journal reported that artificial-intelligence products such as OpenAI’s ChatGPT and Google’s Bard are trained in part on vast data sets of text from the internet, but it’s often unknown whether and to what degree the companies secured permission to use various data sets. Some tech companies say scraping information from the web is fair use.

According to The Wall Street Journal, the Authors Guild said writers have seen a 40% decline in income over the last decade. The median income for full-time writers in 2022 was $22,330, according to a survey conducted by the organization. The letter said artificial intelligence further threatens the profession by saturating the market with AI-generated content.

“The output of AI will always be derivative in nature,” Maya Shanbhag Lang, president of the Authors Guild, said in a statement. “Our work cannot be used without consent, credit, and compensation. All three are a must.”

In my opinion, it is not morally acceptable to steal someone else’s written work, without even attempting to ask permission, and feed it to an AI. The writers whose work was used this way should be well paid for their words.


The Beatles Will Release A Final Record With John Lennon’s Voice



The music has analog roots, but now it’s being revived by futuristic technology: The Beatles have completed a new recording using an old demo tape by John Lennon, thanks to AI tools that isolate Lennon’s voice, according to Paul McCartney, NPR reported.

NPR provided a quote: “We just finished it up, it’ll be released this year,” McCartney, Lennon’s former bandmate, told the Today program on BBC Radio 4. It will be “the last Beatles record,” said McCartney, who along with Ringo Starr is one of the two surviving band members.

But, if you’re picturing McCartney sitting at a keyboard and telling ChatGPT, “sing a John Lennon verse,” that’s not what happened. Instead, they used the source material from a demo recording that Lennon made before his death in 1980.

“We were able to take John’s voice and get it pure through this AI, so that then we could mix the record as you normally do. So, it gives you some sort of leeway.”

McCartney says he realized technology could offer a new chance to work on the music after collaborating with Peter Jackson, the famously technically astute filmmaker, who resurrected archival materials for Get Back, his documentary about the band making the Let It Be album.

According to NPR, McCartney didn’t give details about what he says is The Beatles’ final record, poised to emerge decades after Lennon was shot and killed in December 1980.

In the interview, McCartney also said he’s concerned with how AI might be used going forward, given its ability to perform trickery like replacing one singer’s vocals with another person’s.

“All of that is kind of scary,” McCartney said, “but exciting – because it’s the future.”

The Guardian reported that, though McCartney did not name the song, it is likely to be a 1978 Lennon composition called Now and Then. The demo was one of several songs on a cassette labelled “For Paul” that Lennon made shortly before his death in 1980, which were later given to McCartney by Lennon’s widow, Yoko Ono.

It was largely recorded on to a boombox as Lennon sat at a piano in his New York apartment. The lyrics, which begin “I know it’s true, it’s all because of you / And if I make it through, it’s all because of you,” are typical of the apologetic love songs Lennon wrote in the latter part of his career.

The idea to use AI to reconstruct the demo came from Peter Jackson’s eight-hour epic Get Back. For the documentary, dialogue editor Emile de la Rey used custom-made AI to recognize the Beatles’ voices and separate them from background noise.

It was this process that allowed McCartney to “duet” with Lennon on his recent tour, including at last year’s Glastonbury festival, and for new surround-sound mixes of the Beatles’ Revolver album last year.
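The custom model de la Rey built is not publicly available. As a sketch of the same class of technique – isolating a voice from a mixed recording – here is what stem separation looks like with the open-source Demucs package, used here purely as a stand-in; the input file name is a placeholder:

```python
# Voice isolation via source separation, sketched with the open-source
# Demucs package (pip install demucs). This is a stand-in for the custom
# model described above, not the actual tool used on the Lennon demo.
from demucs.separate import main

# Split the recording into a "vocals" stem and an accompaniment stem.
# Results are written under ./separated/<model>/<track>/.
main([
    "--two-stems", "vocals",  # separate vocals vs. everything else
    "lennon_demo.wav",        # placeholder file name
])
```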

The Guardian also reported that the news comes as controversy over the use of AI music continues to mount, with high-profile fakes of Drake, the Weeknd, and Kanye West receiving hundreds of thousands of streams before being scrubbed from streaming services.

Paul McCartney and Ringo Starr are the last of The Beatles. I love that the songs on their new record were enhanced by AI. Typically, I’m not a fan of using AI to copy musicians’ voices without permission. However, it appears that McCartney is excited to have AI enhance John Lennon’s voice and resurrect older songs.


Meta Open Sources An AI-Powered Music Generator



Meta has released its own AI-powered music generator – and, unlike Google, open-sourced it, TechCrunch reported.

Called MusicGen, Meta’s music-generating tool can turn a text description (e.g. “An ‘80s driving pop song with heavy drums and synth pads in the background”) into about 12 seconds of audio, give or take. MusicGen can optionally be “steered” with reference audio, like an existing song, in which case it will try to follow both the description and the melody.

Meta says that MusicGen was trained on 20,000 hours of music, including 10,000 “high-quality” licensed music tracks and 390,000 instrument-only tracks from Shutterstock and Pond5, two large stock media libraries. The company hasn’t provided the code it used to train the model, but it has made available pre-trained models that anyone with the right hardware – chiefly a GPU with around 16GB of memory – can run.
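For those curious what running the released checkpoints actually looks like, here is a minimal sketch assuming Meta’s open-source audiocraft package and its documented MusicGen interface; the file names are placeholders:

```python
# Minimal MusicGen sketch using Meta's audiocraft package
# (pip install audiocraft). File names are placeholders.
import torchaudio
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write

# The "melody" checkpoint supports both plain text prompts and steering
# with reference audio; it wants a GPU with plenty of memory.
model = MusicGen.get_pretrained("melody")
model.set_generation_params(duration=12)  # roughly the 12-second clips described above

description = ["An '80s driving pop song with heavy drums and synth pads in the background"]

# Text-only generation.
wav = model.generate(description)

# Optionally "steer" the output with an existing song's melody.
melody, sr = torchaudio.load("reference_song.wav")  # placeholder reference audio
steered = model.generate_with_chroma(description, melody[None], sr)

for name, out in [("text_only", wav), ("steered", steered)]:
    # audio_write normalizes loudness and writes a .wav next to the script.
    audio_write(f"musicgen_{name}", out[0].cpu(), model.sample_rate, strategy="loudness")
```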

So how does MusicGen perform? Certainly not well enough to put human musicians out of a job, TechCrunch said. Its songs are reasonably melodic, at least for basic prompts like “ambient chiptunes music.” Writer Kyle Wiggers said the music is on par with (if not slightly better than) the results from Google’s AI music generator, MusicLM – but it won’t win any awards.

According to TechCrunch, it might not be long before there’s guidance on the matter. Several lawsuits making their way through the courts will likely have a bearing on music-generating AI, including one pertaining to the rights of artists whose work is used to train AI systems without their knowledge.

Music:)ally reported that MusicGen is described as “a simple and controllable music generation LM [language model]”: you prompt it with descriptions of the music you’d like it to create, and it whips up 12-second samples in response.

The first question for many rights holders will be: how was this trained? That’s explained in the accompanying academic paper.

“We use 20k hours of licensed music to train MusicGen. Specifically, we rely on an internal dataset of 10k high-quality music tracks, and on the ShutterStock and Pond5 music data” – referring to the popular stock-music libraries.

Meta joins other technology companies in developing (and releasing for public consumption) AI-music models. Alphabet recently unveiled its MusicLM, trained on around 280,000 hours of material from the Free Music Archive, and made it available for people to test out.

According to music:)ally, the AI music models developed by OpenAI, Alphabet, and now Meta are research projects rather than commercial products at this point.

They’re more likely to become the basis for startups and developers to use than they are to signify a serious move into AI music by the bigger companies.

In my opinion, all of this is fine, until one of these AI music makers creates something that sounds like a Metallica song.


Google Cloud Partners With Mayo Clinic To Use AI In Health Care



Google’s cloud business is expanding its use of new artificial intelligence technologies in health care, giving medical professionals at Mayo Clinic the ability to quickly find patient information using the types of tools powering the latest chatbots, CNBC reported.

On Wednesday, Google Cloud said Mayo Clinic is testing a new service called Enterprise Search on Generative AI App Builder, which was introduced Tuesday. The tool effectively lets clients create their own chatbots using Google’s technology to scour mounds of disparate internal data.

In health care, CNBC reported, that means workers can interpret data such as a patient’s medical history, imaging records, genomics, or labs more quickly and with a simple query, even if the information is stored across different formats and locations. Mayo Clinic, one of the top hospital systems in the U.S. with dozens of locations, is an early adopter of the technology from Google, which is trying to bolster the use of generative AI in the medical system.

Mayo Clinic will test out different use cases for the search tool in the coming months, and Vish Anantraman, chief technology officer at Mayo Clinic, said that it has already been “very fulfilling” for helping clinicians with administrative tasks that often contribute to burnout.

According to CNBC, generative AI has been the hottest topic in tech since late 2022, when Microsoft-backed OpenAI released the chatbot ChatGPT to the public. Google raced to catch up, rolling out its Bard AI chat service earlier this year and pushing to embed the underlying technology into as many products as possible. Health care is a particularly challenging industry, because there’s less room for incorrect answers or hallucinations, which occur when AI models fabricate information entirely.

Recently, Google posted on The Prompt: “Let’s talk about recent AI missteps”. From the article:

…By now, most of us have heard about “hallucinations,” which are when a generative AI model outputs nonsense or invented information in response to a prompt. You’ve probably also heard about companies accidentally exposing proprietary information to AI assistants without first verifying that interactions won’t be used to further train models. This oversight could potentially expose private information to anyone in the world using the assistant, as we discussed in earlier editions of “The Prompt”…

Google also wrote a blog post titled: “Bringing Generative AI to search experiences”. From the article:

…For example, building search by breaking long documents into chunks and feeding each segment into an AI assistant typically isn’t scalable and doesn’t effectively provide insights across multiple sources. Likewise, many solutions are limited in the data types they can handle, prone to errors, and susceptible to data leakage…. Even when organizations make these efforts, the resulting solutions tend to lack feature completeness and reliability, with significant investments of time and resources required to achieve high quality results…
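To make the pattern Google is criticizing concrete, here is a toy sketch of chunk-and-query search (my own illustration, not Google’s code); ask_model is a placeholder for any LLM call:

```python
# Toy illustration of the naive chunk-and-query pattern criticized above.
# ask_model is a placeholder callable, not a real API.
def chunk(text: str, size: int = 2000, overlap: int = 200) -> list[str]:
    """Split text into overlapping fixed-size chunks."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, len(text), step)]

def naive_search(document: str, question: str, ask_model) -> list[str]:
    # One model call per chunk: cost grows with document length, and no
    # single call sees evidence spread across chunks or multiple sources --
    # the scalability problems the post describes.
    return [ask_model(f"{c}\n\nQuestion: {question}") for c in chunk(document)]
```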

Google also points out that their Gen App Builder lets developers create search engines that help ground outputs in specific data sources for accuracy and relevance, can handle multimodal data such as images, and include controls over how answer summaries are generated. Google also indicates that multi-turn conversations are supported, so users can ask follow-up questions as they peruse outputs, and customers have control over their data – including the ability to support HIPAA compliance for healthcare use cases.
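As a rough idea of what querying such a search app looks like to a developer, here is a hedged sketch assuming the Google Cloud Discovery Engine Python client that underpins Enterprise Search; the project, data store, and query values are hypothetical, not details from Mayo Clinic’s deployment:

```python
# Hedged sketch of an Enterprise Search query, assuming the Google Cloud
# Discovery Engine client (pip install google-cloud-discoveryengine).
# All IDs and the query are hypothetical placeholders.
from google.cloud import discoveryengine

client = discoveryengine.SearchServiceClient()
serving_config = client.serving_config_path(
    project="my-project",           # hypothetical project ID
    location="global",
    data_store="clinical-records",  # hypothetical data store of mixed-format documents
    serving_config="default_config",
)

request = discoveryengine.SearchRequest(
    serving_config=serving_config,
    query="patient history of atrial fibrillation and most recent lab results",
    page_size=5,
)

for result in client.search(request):
    print(result.document.id)  # inspect matching documents
```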

Personally, I would prefer to talk to an actual human being about whatever questions I might have about my health care needs. Handing this over to a generative AI that could easily make mistakes or have “hallucinations” sounds like a gimmick that could potentially cause harm to patients.