Category Archives: AI

TikTok Debuts New Tools And Technology To Label AI Content

As more creators turn to AI for their artistic expression, there’s also a broader push for transparency around when AI was involved in content creation, TechCrunch reported. To address this concern, TikTok announced today it will launch a new tool that will allow creators to label their AI-generated content and will begin testing other ways to label AI-generated content automatically.

According to TechCrunch, the company says it felt the need to introduce AI labeling because AI content can potentially confuse or mislead viewers. Of course, TikTok had already updated its policy to address synthetic media, which requires people to label AI content that contains realistic images, audio, or video, like deepfakes, to help viewers contextualize the video and prevent the spread of misleading info.

However, TechCrunch reported, outside of the extreme case of using AI to intentionally mislead users, some AI-generated content can blur the line between seeming real or fake. In this gray area, end users generally appreciate more transparency so they know whether the content they’re viewing has been heavily edited or created with AI.

Billboard reported TikTok announced new tools to help creators label content that was generated by artificial intelligence. In addition, the company said on Tuesday that it plans to “start testing ways to label AI-generated content automatically.”

“AI enables incredible creative opportunities, but can potentially confuse or mislead viewers if they’re not aware content was generated or edited with AI,” the company wrote. “Labeling content helps address this, by making clear to viewers when content is significantly altered or modified by AI technology.”

According to Billboard, in July, President Biden’s administration announced that seven leading AI companies made voluntary commitments “to help move toward safe, secure, and transparent development of AI technology.”

One key point: “The companies commit to developing robust technical mechanisms to ensure that users know when content is AI generated, such as a watermarking system. This action enables creativity with AI to flourish but reduces the dangers of fraud and deception.”

Engadget reported TikTok is rolling out a toolset that lets users label posts that have been created or enhanced by artificial intelligence. This move comes after the social media giant added a number of filters for video uploads that made heavy use of AI, and an image generator to help create unique backgrounds.

According to Engadget, the filters are being renamed to make it clearer which ones rely on generative AI to further assist with labeling. Moving forward, these filters will have “AI” in the name somewhere.

The new labels aren’t exclusive to TikTok-approved filters, Engadget reported. You can apply the label to any content that’s been completely generated or significantly edited with AI, no matter where the content was sourced from.

In my opinion, it is a good idea for TikTok to enforce the labeling of AI content posted to its platform. The labels should be clear enough to make it easy for viewers to understand that what they are seeing was created by, or enhanced with, AI.

U.S. Rejects AI Copyright For Famous State Fair Winning Midjourney Art

The U.S. Copyright Office has again rejected copyright protection for art created using artificial intelligence, denying a request by artist Jason M. Allen for a copyright covering an award-winning image he created with the generative AI system Midjourney, Reuters reported.

The office said that Allen’s science-fiction themed image “Theatre D’opera Spatial” was not entitled to copyright protection because it was not the product of human authorship.

According to Reuters, the Copyright Office in February rescinded copyrights for images that artist Kris Kashtanova created using Midjourney for a graphic novel called Zarya of the Dawn, dismissing the argument that the images showed Kashtanova’s own creative expression. It has also rejected a copyright for an image that computer scientist Stephen Thaler said his AI system created autonomously.

Representatives for Midjourney did not immediately respond to a request for comment on the decision.

ArsTechnica reported that the US Copyright Office Review Board rejected copyright protection for an AI-generated artwork that won a Colorado State Fair art contest last year because it lacks human authorship required for registration. The win, which was widely covered in the press at the time, ignited controversy over the ethics of AI-generated artwork.

“The Board finds that the Work contains more than a de minimis amount of content generated by artificial intelligence (“AI”), and this content must therefore be disclaimed in an application for registration. Because Mr. Allen is unwilling to disclaim the AI-generated material, the Work cannot be registered as submitted,” the office wrote in its decision.

According to ArsTechnica, in this case, “disclaim” refers to the act of formally renouncing or giving up any claim to the ownership or authorship of the AI-generated content in the work. The office is saying that because the work contains a non-negligible (“more than a de minimis”) amount of content generated by AI, Allen must formally acknowledge that the AI-generated content is not his own creation when applying for registration. As established by Copyright Office precedent and judicial review, US copyright registration for a work requires human authorship.

The U.S. Copyright Office Review Board posted the following information:

On September 21, 2022, Mr. Allen filed an application to register a two-dimensional artwork claim in the Work. While Mr. Allen did not disclose in his application that the Work was created using an AI system, the Office was aware of the Work because it had garnered national attention for being the first AI-generated image to win the 2022 Colorado State Fair’s annual fine art competition.

Because it was known to the Office that AI-generated material contributed to the Work, the examiner assigned to the application requested additional information about Mr. Allen’s use of Midjourney, a text-to-picture artificial intelligence service, in the creation of the Work. In response, Mr. Allen provided an explanation of his process, stating that he “input numerous revisions and text prompts at least 624 times to arrive at the initial version of the image.” He further explained that, after Midjourney produced the initial version of the Work, he used Adobe Photoshop to remove the flaws and create new visual content and used Gigapixel AI to “upscale” the image, increasing its resolution and size. As a result of these disclosures, the examiner requested that the features of the Work generated by Midjourney be excluded from the copyright claim.

In my opinion, an art contest held at a State Fair was not the proper place to submit a piece of artwork that was nearly entirely generated by not one, but two AI content-generation tools. The U.S. Copyright Office was correct to deny copyright registration for Mr. Allen’s “Work.”

The Authors Guild Posted An Open Letter To Generative AI Leaders

The Authors Guild posted an open letter to generative AI leaders that calls on the CEOs of OpenAI, Alphabet, Meta, Stability AI, IBM, and Microsoft to obtain consent from, credit, and fairly compensate writers for the use of copyrighted materials in training AI.

The open letter allows people to add their own name to it. From the open letter:

To: Sam Altman, CEO, OpenAI; Sundar Pichai, CEO, Alphabet; Mark Zuckerberg, CEO, Meta; Emad Mostaque, CEO, Stability AI; Arvind Krishna, CEO, IBM; Satya Nadella, CEO, Microsoft:

We, the undersigned, call your attention to the inherent injustice in exploiting our works as part of your AI systems without our consent, credit, or compensation.

Generative AI technologies built on large language models owe their existence to our writings. These technologies mimic and regurgitate our language, stories, style, and ideas. Millions of copyrighted books, articles, essays, and poetry provide the “food” for AI systems, endless meals for which there has been no bill. You’re spending billions of dollars to develop AI technology. It is only fair that you compensate us for using our writings, without which AI would be banal and extremely limited.

We understand that many of the books used to develop AI systems originated from notorious piracy websites. Not only does the recent Supreme Court decision in Warhol v. Goldsmith make clear that the high commerciality of your use argues against fair use, but no court would excuse copying illegally sourced works as fair use. As a result of embedding our writings in your systems, generative AI threatens to damage our profession by flooding the market with mediocre, machine-written books, stories, and journalism based on our work.

In the past decade or so, authors experienced a forty percent decline in income, and the current median income for full-time writers in 2022 was only $23,000. The introduction of AI threatens to tip the scale to make it even more difficult, if not impossible, for writers – especially young writers and voices from under-represented communities – to earn a living from their profession.

We ask you, the leaders of AI, to mitigate the damage to our profession by taking the following steps:

1. Obtain permission for use of our copyrighted material in your generative AI programs.

2. Compensate writers fairly for the past and ongoing use of our works in your generative AI programs.

3. Compensate writers fairly for the use of our works in AI output, whether or not the outputs are infringing under current law.

We hope you will appreciate the gravity of our concerns and that you will work with us to ensure a healthy ecosystem for authors and journalists in the years to come.

The Wall Street Journal reported that artificial-intelligence products such as OpenAI’s ChatGPT and Google’s Bard are trained in part on vast data sets of text from the internet, but it’s often unknown whether and to what degree the companies secured permission to use various data sets. Some tech companies say scraping information from the web is fair use.

According to The Wall Street Journal, the Authors Guild said writers have seen a 40% decline in income over the last decade. The median income for full-time writers in 2022 was $22,330, according to a survey conducted by the organization. The letter said artificial intelligence further threatens the profession by saturating the market with AI-generated content.

“The output of AI will always be derivative in nature,” Maya Shanbhag Lang, president of the Authors Guild, said in a statement. “Our work cannot be used without consent, credit, and compensation. All three are a must.”

In my opinion, it is not morally acceptable to steal someone else’s written work, without even attempting to ask for permission, and feed that content to an AI. The writers whose work was included in the training data should be well paid for their words.

The Beatles Will Release A Final Record With John Lennon’s Voice

The music has analog roots, but now it’s being revived by futuristic technology: The Beatles have completed a new recording using an old demo tape by John Lennon, thanks to AI tools that isolate Lennon’s voice, according to Paul McCartney, NPR reported.

NPR provided a quote: “We just finished it up, it’ll be released this year,” McCartney, Lennon’s former bandmate, told the Today program on BBC Radio 4. It will be “the last Beatles record,” said McCartney, who along with Ringo Starr is one of the two surviving band members.

But, if you’re picturing McCartney sitting at a keyboard and telling ChatGPT, “sing a John Lennon verse,” that’s not what happened. Instead, they used the source material from a demo recording that Lennon made before his death in 1980.

McCartney explained: “We were able to take John’s voice and get it pure through this AI, so that then we could mix the record as you normally do. So, it gives you some sort of leeway.”

McCartney says he realized technology could offer a new chance to work on the music after working with Peter Jackson, the famously technically astute filmmaker, to resurrect archival materials for Get Back, his documentary about the band making the Let It Be album.

According to NPR, McCartney didn’t give details about what he says is The Beatles’ final record, poised to emerge decades after Lennon was shot and killed in December 1980.

In the interview, McCartney also said he’s concerned with how AI might be used going forward, given its ability to perform trickery like replacing one singer’s vocals with another’s.

“All of that is kind of scary,” McCartney said, “but exciting – because it’s the future.”

The Guardian reported that though McCartney did not name the song, it is likely to be a 1978 Lennon composition called Now and Then. The demo was one of several songs on a cassette labelled “For Paul” that Lennon made shortly before his death in 1980; the tape was later given to McCartney by Lennon’s widow, Yoko Ono.

It was largely recorded onto a boombox as Lennon sat at a piano in his New York apartment. The lyrics, which begin “I know it’s true, it’s all because of you / And if I make it through, it’s all because of you,” are typical of the apologetic love songs Lennon wrote in the latter part of his career.

The idea to use AI to reconstruct the demo came from Peter Jackson’s eight-hour epic Get Back. For the documentary, dialogue editor Emile de la Rey used custom-made AI to recognize the Beatles’ voices and separate them from background noise.

It was this process that allowed McCartney to “duet” with Lennon on his recent tour, including at last year’s Glastonbury festival, and for new surround-sound mixes of the Beatles’ Revolver album last year.

The Guardian also reported that the news comes as controversy over the use of AI music continues to mount, with high-profile fakes of Drake, the Weeknd, and Kanye West receiving hundreds of thousands of streams before being scrubbed from streaming services.

Paul McCartney and Ringo Starr are the last of The Beatles. I love that some of the songs on their new album were enhanced by AI. Typically, I’m not a fan of using AI to copy musicians’ voices without permission. However, it appears that McCartney is excited to have AI enhance John Lennon’s voice to resurrect older songs.

Where’s the Bot?

Wendy’s is automating its drive-through service using an artificial-intelligence chatbot powered by natural language software developed by Google and trained to understand the myriad ways customers order off the menu, The Wall Street Journal reported.

The Dublin, Ohio-based fast-food chain’s chatbot will be officially rolled out in June at a company-owned restaurant in Columbus, Ohio, Wendy’s said. The goal is to streamline the ordering process and prevent long lines in the drive-through lanes from turning customers away, said Wendy’s Chief Executive Todd Penegor.

According to the Wall Street Journal, Wendy’s didn’t disclose the cost of the initiative beyond saying the company has been working with Google in areas like data analytics, machine learning and cloud tools since 2021.

“It will be very controversial,” Mr. Penegor said about the new artificial intelligence-powered chatbots. “You won’t know you’re talking to anybody but an employee,” he said.

To do that, Wendy’s software engineers have been working with Google to build and fine-tune a generative AI application on top of Google’s own large language model, or LLM – a vast algorithmic software tool loaded with words, phrases and popular expressions in different dialects and accents and designed to recognize and mimic the syntax and semantics of human speech.

Gizmodo reported: AI chatbots have come for journalism, and now they are coming for our burgers. Wendy’s is reportedly gearing up to unveil a chatbot-powered drive-thru experience next month, with help from a partnership with Google.

“Google Cloud’s generative AI technology creates a huge opportunity for us to deliver a truly differentiated, faster and frictionless experience for our customers, and allows our employees to continue focusing on making great food and building relationships with fans that keep them coming back time and again,” said Wendy’s CEO Todd Penegor in a statement emailed to Gizmodo.

According to Gizmodo, Wendy’s competitor McDonald’s has already been experimenting with an AI drive-thru, with mixed results. Videos posted to TikTok illustrated just how woefully ill-prepared automation is at taking fast food orders, and how woefully unprepared humans are to deal with it.

McDonald’s began testing AI drive-thrus in June 2021 at 10 locations in Chicago. McDonald’s CEO Chris Kempczinski reportedly said the AI system had 85% order accuracy. However, according to Restaurant Dive in June 2022, the company was seeing accuracy percentages in the low 80s, when it was really hoping for 95% accuracy before a wider rollout.

The Register, whose article title began with “Show us the sauce code…”, also reported that Wendy’s and Google have together built a chatbot for taking drive-thru orders, using large language models and generative AI.

According to The Register, the system works by converting spoken fast-food orders to text that can be processed by Google’s large language model. A generative component added to the system is designed to make the chatbot interact with people in a more natural and conversational manner, so that it’s less rigid and robotic.

The completed model was trained to recognize specific phrases or acronyms customers typically use when ordering, such as “JBC” describing Wendy’s junior bacon cheeseburger, “Frosties” milkshakes, or its combination meal “biggie bags.” Unsurprisingly, The Register reported, the chatbot, like human workers, will gladly offer to upsize meals or add more items to an order since it has been programmed to try and persuade hungry patrons to spend more cash.
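The shorthand recognition The Register describes can be pictured as a simple normalization step before the order text reaches the language model. The menu terms below come from the article; the dictionary, function name, and overall approach are my own hypothetical sketch, not Wendy’s or Google’s actual code.

```python
import re

# Menu shorthand reported by The Register; the mapping itself is a
# hypothetical illustration, not Wendy's or Google's actual code.
ALIASES = {
    "jbc": "junior bacon cheeseburger",
    "biggie bag": "Biggie Bag combination meal",
}

def normalize_order(text: str) -> str:
    """Expand known shorthand (case-insensitively, on word boundaries)
    before the transcribed order is handed to a language model."""
    for short, full in ALIASES.items():
        pattern = r"\b" + re.escape(short) + r"\b"
        text = re.sub(pattern, full, text, flags=re.IGNORECASE)
    return text

print(normalize_order("One JBC and a biggie bag, please"))
# prints: One junior bacon cheeseburger and a Biggie Bag combination meal, please
```

In a real deployment this kind of mapping would more likely live inside the model’s prompt or fine-tuning data, but the sketch shows why recognizing terms like “JBC” matters: the speech-to-text output has to be resolved into unambiguous menu items before an order can be assembled.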

The Register also reported that Wendy’s will try out its AI-powered drive-thru service in June at a restaurant in Columbus, Ohio. Up to 80 percent of orders are reportedly placed by customers at the burger slinger’s drive-thru lanes, an increase of 30 percent since the COVID-19 pandemic.

Personally, I’m of two minds about this. On the one hand, if the AI turns out to be really good at what it does, it could make the drive-thru lines move faster. People wouldn’t have to wait as long, and Wendy’s would get more money.

On the other hand, I have concerns that the AI will eventually be used in every Wendy’s. That could result in fewer job opportunities for real-life human workers.

Embracing AI: Amazement Meets Anxiety

Over the past several weeks, I have been heavily exploring and using ChatGPT to develop marketing ideas, write customer surveys, and even help brainstorm new product ideas. The results have been astounding and enlightening.

One task I completed saved me at least 8 hours of work: creating a 48-question survey based on some external data in just under 30 minutes. It has made me wonder how the economics of this will play out.

I feel we stand at the dawn of a new era in technology. Artificial intelligence (AI) is advancing at an incredible pace, and I can only imagine what the landscape will look like in 2-3 years. The transformative potential of AI tools like ChatGPT is undeniable.

These tools have shown remarkable capabilities in tasks ranging from content creation to generating marketing ideas to drafting a marketing survey, making my life easier in countless ways.

AI will cause a change in the labor market. It cannot be ignored, and those who ignore it will be left behind.

I will explore the reasons for excitement and apprehension while offering insights on balancing AI innovation and preserving human livelihoods.

My amazement surrounding AI arises from its potential to revolutionize virtually every aspect of our lives. ChatGPT, a prime example, has demonstrated unprecedented prowess in natural language processing (NLP), understanding complex language structures and generating coherent, contextually relevant responses. It does not criticize my vocabulary, either.

As a result, it can assist with tasks like customer support, content generation, and even complex problem-solving in a way that was once considered the exclusive domain of humans.

The fear, however, stems from the potential consequences of AI replacing human labor. Many jobs, especially those involving repetitive tasks or data analysis, could be rendered obsolete by AI systems that can perform these tasks with unparalleled efficiency and accuracy.

This shift will lead to widespread change in employment as people struggle to adapt to an economy where their skills are no longer in demand.

It’s essential to recognize that AI is not a zero-sum game. History has shown that introducing new technologies often creates new opportunities and jobs, even as it displaces some existing roles. Just as past disruptive technologies caused significant changes, AI will force companies to shift and adapt.

Society needs to develop a proactive approach, with protective legislation as well as investment in education and retraining programs that prepare workers for the new opportunities AI will create. Businesses and educational institutions must work together to ensure a smooth transition for workers whose jobs are at risk, providing them with the skills and resources needed to thrive in an AI-driven world.

There are many aspects of human intelligence that AI cannot replicate today, such as empathy, creativity, and interpersonal skills, though that could change going forward, as the digital world does not have much heart built in.

One thing is for sure: I feel as if I have gained a personal assistant. I can tell it what I want, and thus far it has produced the desired results, turning mundane tasks that would have been significant time sinks into quick work and allowing me to recover that time for other projects.

Photo by Jan Antonin Kolar on Unsplash

Discord Restored Its Privacy Policies After Pushback

TechRadar posted an update from Discord in which the company backtracks about its previously announced changes. From the update:

UPDATE: Discord has updated the Privacy Policy that will take effect on March 27, 2023, adding back the statements that were removed and adding the following statement: “We may build features that help users engage with voice and video content, like create or send short recordings.”

A Discord spokesperson contacted TechRadar to provide the following statement: “Discord is committed to protecting the privacy and data of our users. There has not been a change in Discord’s position on how we store or record the contents of video or voice channels. We recognize that when we recently issued adjusted language in our Privacy Policy, we inadvertently caused confusion among our users. To be clear, nothing has changed and we have reinserted the language back into our Privacy Policy, along with some additional clarifying information.”

“The recently announced AI features use OpenAI technology. That said, OpenAI may not use Discord user data to train its general models. Like other Discord products, these features can only store and use information as described in our Privacy Policy, and they do not record, store, or use any voice or video call content from users.”

“We respect the intellectual property of others, and expect everyone who uses Discord to do the same. We have a thorough Copyright and Intellectual Property policy, and we take these concerns seriously.”

In addition, TechRadar reported, the spokesperson asserts that if Discord’s policy “ever changes, we will disclose that to our users in advance of any implementation.”

Previously, Discord appeared to have updated some of the information in its “Information you provide to us” section. Originally, a portion of the “Content you create” section said (in part): “We generally do not store the contents of video or voice calls or channels. If we were to change that in the future (for example, to facilitate content moderation), we would disclose that to you in advance. We also don’t store streaming content when you share your screen, but we do retain the thumbnail cover image for a short period of time.”

Sometime later, Discord changed the “Content you create” section to: “This includes any content that you upload to the service. For example, you may write messages or posts (including drafts), send voice messages, create custom emojis, create short recordings of GoLive activity, or upload and share files through the services. This also includes your profile information and the information you provide when you create servers.”

It was that change that caused many people to have concerns that their content would be used by Discord’s AI bots. I honestly considered removing my art from Discord. It is good that Discord clarified things a little bit – for example, stating that “OpenAI may not use Discord user data to train its general models.”

That said, when a company pulls shenanigans like Discord did – I find it difficult to trust them with my artwork. If you feel that way as well, one thing you can do is get on Discord and look for “Privacy & Safety”. It opens to a section where you can turn off Discord’s ability to use your data, and to track screen reader usage.