Category Archives: AI

US Proposes “Know Your Customer” Cloud Computing Requirements



Reuters reported that the Biden administration is proposing to require U.S. cloud companies to determine whether foreign entities are accessing U.S. data centers to train AI models, U.S. Commerce Secretary Gina Raimondo said on Friday.

“We can’t have non-state actors or China or folks who we don’t want accessing our cloud to train their models,” Raimondo said in an interview with Reuters. “We use export controls on chips,” she noted. “Those chips are in American cloud data centers so we also have to think about closing down that avenue for potential malicious activity.”

The Biden administration is taking a series of measures to prevent China from using U.S. technology for artificial intelligence, as the burgeoning sector raises security concerns.

The proposed “know your customer” regulation was released Friday for public inspection and will be published Monday. “It is a big deal,” Raimondo said.

The proposal would require U.S. cloud computing companies to verify the identity of foreign persons who sign up for or maintain accounts that utilize U.S. cloud computing, through a “know-your-customer” program or “Customer Identification Program.” It would also set minimum standards for identifying foreign users and would require cloud computing firms to certify compliance annually.
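To make the compliance mechanics concrete, here is a minimal sketch in Python of the kind of record-keeping and annual certification check the proposal seems to describe. The field names and the 365-day window are illustrative assumptions on my part, not language from the proposed rule.

from dataclasses import dataclass
from datetime import date

@dataclass
class CloudAccount:
    customer_name: str
    country: str
    identity_verified: bool  # checked against the rule's minimum ID standards
    last_certified: date     # provider's most recent compliance certification

def certification_current(account: CloudAccount, today: date) -> bool:
    # The proposal would require providers to certify compliance annually.
    return (today - account.last_certified).days <= 365

acct = CloudAccount("Example Labs", "SG", identity_verified=True,
                    last_certified=date(2024, 1, 26))
print(acct.identity_verified and certification_current(acct, date(2024, 6, 1)))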

PCMag reported that now, when OpenAI and other tech companies start new AI projects, they’ll need to inform the US Government of their plans.

According to PCMag, the Biden administration has plans to use the Defense Production Act to require tech companies to let the government know when they train an AI model using a significant amount of computing power, Wired reported. The companies will also be required to provide information about the safety testing being done on the models they create.

The rulemaking for the requirement could happen as soon as next week.

Wired noted that the requirement will give the US Government unprecedented insight into sensitive projects going on inside OpenAI, Google, Amazon, and other tech companies. The US government, for instance, will be the first to know when OpenAI begins work on GPT-5.

In addition to notifying the government of their projects, companies will also be required to share when a foreign company uses their technology to train a large language model.

Mashable reported that OpenAI, Google, and other AI companies will soon have to inform the government about developing foundational models, thanks to the Defense Production Act. According to Wired, US Secretary of Commerce Gina Raimondo shared new details about this impending requirement at an event held by Stanford University’s Hoover Institution last Friday.

“We’re using the Defense Production Act… to do a survey requiring companies to share with us every time they train a new large language model, and share with us the results – the safety data – so we can review it,” said Raimondo.

According to Mashable, the new rules are part of President Biden’s sweeping AI executive order announced last October. Amongst the broad set of mandates, the order requires companies developing any foundation model “that poses a serious risk to national security, national economic security, or national public health and safety,” to notify the federal government and share the results of its testing.

Foundational models are models like OpenAI’s GPT-4 and Google’s Gemini that power generative AI chatbots. However, GPT-4 is likely below the threshold of computing power that requires government oversight.
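For a sense of how a compute-based reporting threshold works, here is a toy check in Python. The 1e26 figure reflects the reporting threshold widely attributed to the executive order, and the GPT-4 number is a third-party estimate; treat both as assumptions for illustration, since neither has an official published value here.

REPORTING_THRESHOLD_OPS = 1e26  # threshold widely attributed to the order

def must_report(training_ops: float) -> bool:
    # Would a training run of this size trigger the notification requirement?
    return training_ops >= REPORTING_THRESHOLD_OPS

gpt4_estimated_ops = 2e25  # outside estimate; OpenAI has not published this
print(must_report(gpt4_estimated_ops))  # False: likely below the threshold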

Overall, I think it’s a good idea for the U.S. government to institute some control over who can access AI systems. Doing so might help put an end to “deep fakes” that spread misinformation by depicting things a celebrity never did or words a President never said.


US, Britain, And Other Countries Ink Agreement To Make AI “Secure By Design”



The United States, Britain and more than a dozen other countries on Sunday unveiled what a senior U.S. official described as the first detailed international agreement on how to keep artificial intelligence safe from rogue actors, pushing for companies to create AI systems that are “secure by design,” Reuters reported.

In a 20-page document unveiled Sunday, the 18 countries agreed that companies designing and using AI need to develop and deploy it in a way that keeps customers and the wider public safe from misuse.

According to Reuters, the agreement is non-binding and carries mostly general recommendations such as monitoring AI systems for abuse, protecting data from tampering and vetting software suppliers.

Still, the director of the U.S. Cybersecurity and Infrastructure Security Agency, Jen Easterly, said it was important that so many countries put their names to the idea that AI systems needed to put safety first.

The Hill reported that the United States, along with 17 other countries, unveiled an international agreement that aims to keep artificial intelligence (AI) systems safe from rogue actors and urges providers to follow “secure by design principles.”

According to The Hill, the 20-page document, jointly published Sunday by the U.S. Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency and the United Kingdom’s National Cyber Security Centre, provides a set of guidelines to ensure AI systems are built to “function as intended” without leaking sensitive data to unauthorized users.

Other countries featured in the agreement include Australia, Canada, Chile, the Czech Republic, Estonia, Germany, Israel, Italy, Japan, Nigeria, Poland, and Singapore.

Last month, the Biden administration issued a sweeping executive order focused on managing the risks of AI. The order includes new safety standards and worker protection principles, and directs federal agencies to accelerate the development of techniques for training AI systems while preserving the privacy of training data.

iPhoneInCanada reported on the guidelines for artificial intelligence systems. The guidelines are divided into four key areas reflecting the stages of the AI system development life cycle. It’s pretty broad, without anything specific:

Secure Design: This section focuses on the design stage, covering risk understanding, threat modeling and considerations for system and model design.

Secure Development: Guidelines for the development stage include supply chain security, documentation, and management of assets and technical debt.

Secure Deployment: This stage involves protecting infrastructure and models, developing incident management processes, and ensuring responsible release.

Secure Operation and Maintenance: Post-deployment, this section provides guidance on logging and monitoring, update management, and information sharing.

In my opinion, it makes sense for there to be specific guidelines on how AI is used. The guidelines could be adopted by various countries, and should include protections for users so that sensitive data is not leaked to other users.


Sam Altman, Greg Brockman, and OpenAI Staff Will Join Microsoft



As the tech world watches Microsoft suck in top execs and AI engineering talent from OpenAI, the generative AI giant in which it already holds a minority stake worth several billion dollars, one question to consider is what, if anything, competition regulators can do about the visible flight of AI expertise and value into Microsoft’s commercial empire, TechCrunch reported.

According to TechCrunch, efforts by the OpenAI board to reinstate CEO Sam Altman immediately after ejecting him were reported over the weekend to have failed, with Altman opting to join Microsoft, along with president Greg Brockman and several leading AI engineers, as CEO of a new AI research division the company is spinning up. This suggests the back-up plan is to recreate OpenAI in-house at Microsoft.

TechCrunch reported that a mass exodus of OpenAI staff to Microsoft also looks entirely possible – with hundreds of staffers signing a letter saying they “may” quit unless the startup’s board resigns and reappoints Altman and Brockman, along with two new independent directors.

CNBC reported that OpenAI is bringing in former Twitch CEO Emmett Shear to run the artificial intelligence company, two days after the sudden ouster of Sam Altman.

The hiring of Shear was the latest turn in a chaotic weekend at one of the most high-profile startups on the planet. OpenAI’s board announced late Friday that it was removing Altman and replacing him on an interim basis with technology chief Mira Murati. The post said that Altman “was not consistently candid in his communications with the board.” Altman will now be joining Microsoft to lead a new advanced AI research team.

According to CNBC, Shear confirmed overnight that he had accepted the role as interim CEO of OpenAI, which he described as a “once-in-a-lifetime opportunity.”

“I took this job because I believe that OpenAI is one of the most important companies currently in existence. When the board shared the situation and asked me to take the role, I did not make the decision lightly. Ultimately, I felt that I had a duty to help if I could,” he said.

Altman, meanwhile, is set to join Microsoft where he will lead a new advanced AI research team, Microsoft CEO Satya Nadella said late Sunday.

“We’re extremely excited to share the news that Sam Altman and Greg Brockman, together with colleagues, will be joining Microsoft… We look forward to moving quickly to provide them with the resources they need for their success,” Nadella said in a post on X, formerly Twitter.

The Verge reported that Microsoft typically only assigns a CEO title to leaders of big businesses inside the software giant. Xbox chief Phil Spencer was recently named Microsoft Gaming CEO, and Microsoft has used or continued CEO positions for a number of acquisitions, including GitHub, LinkedIn, Mojang, and Activision Blizzard. This could hint at bigger plans for Microsoft’s new advanced AI research team.

Personally, I find it amazing just how fast Sam Altman went from being fired to becoming an important part of Microsoft. I wonder how many of the OpenAI team, who want both Altman and Brockman back, will make the move to Microsoft.


Meta Reportedly Won’t Make Its Advertising Tools Available To Political Marketers



Facebook is no stranger to moderating and mitigating misinformation on its platform, having long employed machine learning and artificial intelligence systems to help supplement its human-led moderation efforts, Engadget reported.

According to Engadget, at the start of October, the company extended its machine learning expertise to its advertising efforts with an experimental set of generative AI tools that can perform tasks like generating backgrounds, adjusting images, and creating captions for an advertiser’s video content.

Reuters reported Monday that Meta will specifically not make those tools available to political marketers ahead of what is expected to be a brutal and divisive national election cycle. 

Meta’s decision to bar political advertisers from its generative AI tools is in line with much of the social media ecosystem, though, as Reuters is quick to point out, the company “has not yet publicly disclosed the decision in any updates to its advertising standards.” Engadget reported that TikTok and Snap both ban political ads on their networks, and Google employs a “keyword blacklist” to prevent its generative AI advertising tools from straying into political speech.

Facebook, along with other leading Silicon Valley AI companies, agreed in July to voluntary commitments set out by the White House enacting technical and policy safeguards in the development of their future generative AI systems. According to Engadget, those include expanding adversarial machine learning (aka red-teaming) efforts to root out bad model behavior, sharing trust and safety information both within the industry and with the government, as well as developing a digital watermarking scheme to authenticate official content and make clear that it is not AI-generated.

Fortune reported that a month ago, Meta unveiled a set of generative AI tools for advertisers. “We believe these features will unlock a new era of creativity that maximizes productivity, personalization and performance for all advertisers,” enthused monetization infrastructure VP Matt Steiner at the time. 

According to Fortune, the social giant is now banning the tools’ use in making ads related to “housing, employment, or credit, social issues, elections, or politics, or related to health, pharmaceuticals or financial services.” This is so Meta can work on building “the right safeguards for the use of generative AI in ads that relate to potentially sensitive topics in regulated industries.”

Google, of course, has also offered advertisers a set of generative AI tools. And like Meta, it’s trying to avoid their use by propagandists – per Reuters, a list of “political keywords” will be banned as prompts, and election-related ads will have to disclose “synthetic content that inauthentically depicts real or realistic-looking people or events.”
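As a rough illustration of how a “political keywords” blocklist might screen prompts, here is a minimal sketch in Python. The keyword list and matching rule are assumptions for illustration only; neither Google nor Meta has published its actual implementation.

import re

# Hypothetical blocklist; the real list of "political keywords" is not public.
BLOCKED_KEYWORDS = {"election", "ballot", "candidate", "campaign"}

def is_prompt_allowed(prompt: str) -> bool:
    # Reject the prompt if any word in it matches a blocked keyword.
    tokens = set(re.findall(r"[a-z']+", prompt.lower()))
    return tokens.isdisjoint(BLOCKED_KEYWORDS)

print(is_prompt_allowed("generate a cozy background for a coffee ad"))  # True
print(is_prompt_allowed("make an ad backing our election candidate"))  # False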

Fortune wrote: Good luck enforcing that in the massive election year of 2024, if the enormous progress made by image generators in the last 23 months is anything to go by.

In my opinion, the use of AI on social media can be a dangerous thing, especially if it is used in political ads. This is likely why TikTok and Snap ban political ads on their networks. There is too much potential for an AI-created political ad to be misleading.


TikTok Debuts New Tools And Technology To Label AI Content



As more creators turn to AI for their artistic expression, there’s also a broader push for transparency around when AI was involved in content creation, TechCrunch reported. To address this concern, TikTok announced today it will launch a new tool that will allow creators to label their AI-generated content and will begin testing other ways to label AI-generated content automatically.

According to TechCrunch, the company says it felt the need to introduce AI labeling because AI content can potentially confuse or mislead viewers. Of course, TikTok had already updated its policy to address synthetic media, which requires people to label AI content that contains realistic images, audio, or video, like deepfakes, to help viewers contextualize the video and prevent the spread of misleading info.

However, TechCrunch reported, outside of the extreme case of using AI to intentionally mislead users, some AI-generated content can toe the line between seeming real or fake. In this gray area, more transparency is generally appreciated by end users so they know whether or not the content they’re viewing has been heavily edited or created with AI.

Billboard reported TikTok announced new tools to help creators label content that was generated by artificial intelligence. In addition, the company said on Tuesday that it plans to “start testing ways to label AI-generated content automatically.”

“AI enables incredible creative opportunities, but can potentially confuse or mislead viewers if they’re not aware content was generated or edited with AI,” the company wrote. “Labeling content helps address this, by making clear to viewers when content is significantly altered or modified by AI technology.”

According to Billboard, in July, President Biden’s administration announced that seven leading AI companies made voluntary commitments “to help move toward safe, secure, and transparent development of AI technology.”

One key point: “The companies commit to developing robust technical mechanisms to ensure that users know when content is AI generated, such as a watermarking system. This action enables creativity with AI to flourish but reduces the dangers of fraud and deception.”
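To show the watermarking idea in the simplest possible terms, here is a toy Python sketch (using the Pillow and NumPy libraries) that hides a short “AI-GENERATED” tag in the least-significant bits of an image’s pixels. This is only a teaching example of the general concept; real provenance systems use metadata standards and statistically robust watermarks designed to survive compression and editing.

from PIL import Image
import numpy as np

TAG = "AI-GENERATED"

def embed_tag(in_path: str, out_path: str) -> None:
    # Overwrite the least-significant bit of the first pixels with the tag.
    pixels = np.array(Image.open(in_path).convert("RGB"))
    bits = np.unpackbits(np.frombuffer(TAG.encode(), dtype=np.uint8))
    flat = pixels.reshape(-1)
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits
    Image.fromarray(flat.reshape(pixels.shape)).save(out_path, "PNG")  # lossless

def read_tag(in_path: str) -> str:
    # Recover the tag by collecting those least-significant bits.
    flat = np.array(Image.open(in_path).convert("RGB")).reshape(-1)
    raw = np.packbits(flat[:len(TAG.encode()) * 8] & 1)
    return raw.tobytes().decode(errors="replace")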

Engadget reported TikTok is rolling out a toolset that lets users label posts that have been created or enhanced by artificial intelligence. This move comes after the social media giant added a number of filters for video uploads that made heavy use of AI, and an image generator to help create unique backgrounds.

According to Engadget, the filters are being renamed to make it clearer which ones rely on generative AI to further assist with labeling. Moving forward, these filters will have “AI” in the name somewhere.

The new labels aren’t exclusive to TikTok-approved filters, Engadget reported. You can slap the label on any content that’s been completely generated or significantly edited with AI, no matter where the content has been sourced from.
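Here is a minimal sketch of the kind of decision logic a platform could use to attach such a label, covering both the creator’s self-label and the renamed “AI” filters. The field names and rules are assumptions for illustration; TikTok has not published how its labeling actually works.

from dataclasses import dataclass, field

@dataclass
class Post:
    caption: str
    filters_used: list = field(default_factory=list)
    creator_labeled_ai: bool = False  # creator used the new self-labeling tool

def needs_ai_label(post: Post) -> bool:
    # Label if the creator opted in, or if any effect is an AI-named filter.
    used_ai_filter = any("AI" in name for name in post.filters_used)
    return post.creator_labeled_ai or used_ai_filter

post = Post(caption="sunset timelapse", filters_used=["AI Greenscreen"])
print(needs_ai_label(post))  # True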

In my opinion, it is a good idea for TikTok to enforce the labeling of AI content posted to its platform. The labels should be clear enough to make it easy for viewers to understand that what they are seeing has been created by, or enhanced with, AI.


U.S. Rejects AI Copyright For Famous State Fair Winning Midjourney Art



The U.S. Copyright Office has again rejected copyright protection for art created using artificial intelligence, denying a request by artist Jason M. Allen for a copyright covering an award-winning image he created with the generative AI system Midjourney, Reuters reported.

The office said that Allen’s science-fiction themed image “Theatre D’opera Spatial” was not entitled to copyright protection because it was not the product of human authorship.

According to Reuters, the Copyright Office in February rescinded copyrights for images that artist Kris Kashtanova created using Midjourney for a graphic novel called Zarya of the Dawn, dismissing the argument that the images showed Kashtanova’s own creative expression. It has also rejected a copyright for an image that computer scientist Stephen Thaler said his AI system created autonomously.

Representatives for Midjourney did not immediately respond to a request for comment on the decision.

ArsTechnica reported that the US Copyright Office Review Board rejected copyright protection for an AI-generated artwork that won a Colorado State Fair art contest last year because it lacks the human authorship required for registration. The win, which was widely covered in the press at the time, ignited controversy over the ethics of AI-generated artwork.

“The Board finds that the Work contains more than a de minimis amount of content generated by artificial intelligence (“AI”), and this content must therefore be disclaimed in an application for registration. Because Mr. Allen is unwilling to disclaim the AI-generated material, the Work cannot be registered as submitted,” the office wrote in its decision.

According to ArsTechnica, in this case, “disclaim” refers to the act of formally renouncing or giving up any claim to the ownership or authorship of the AI-generated content in the work. The office is saying that because the work contains a non-negligible (“more than a de minimis”) amount of content generated by AI, Allen must formally acknowledge that the AI-generated content is not his own creation when applying for registration. As established by Copyright Office precedent and judicial review, US copyright registration for a work requires human authorship.

The U.S. Copyright Review Board posted the following information:

On September 21, 2022, Mr. Allen filed an application to register a two-dimensional artwork claim in the Work. While Mr. Allen did not disclose in his application that the Work was created using an AI system, the Office was aware of the Work because it had garnered national attention for being the first AI-generated image to win the 2022 Colorado State Fair’s annual fine art competition.

Because it was known to the Office that AI-generated material contributed to the Work, the examiner assigned to the application requested additional information about Mr. Allen’s use of Midjourney, a text-to-picture artificial intelligence service, in the creation of the Work. In response, Mr. Allen provided an explanation of his process, stating that he “input numerous revisions and text prompts at least 624 times to arrive at the initial version of the image.” He further explained that, after Midjourney produced the initial version of the Work, he used Adobe Photoshop to remove the flaws and create new visual content and used Gigapixel AI to “upscale” the image, increasing its resolution and size. As a result of these disclosures, the examiner requested that the features of the Work generated by Midjourney be excluded from the copyright claim.

In my opinion, an art contest held at a State Fair was not the proper place to submit a piece of artwork that was nearly entirely generated by not one, but two AI content-generation tools. The U.S. Copyright Office was correct to deny copyright registration for Mr. Allen’s “Work.”


The Authors Guild Posted An Open Letter To Generative AI Leaders



The Authors Guild posted an open letter to generative AI leaders that calls on the CEOs of OpenAI, Alphabet, Meta, Stability AI, IBM, and Microsoft to obtain consent from, credit, and fairly compensate writers for the use of copyrighted materials in training AI.

The open letter allows people to add their own name to it. From the open letter:

To: Sam Altman, CEO, OpenAI; Sundar Pichai, CEO, Alphabet; Mark Zuckerberg, CEO, Meta; Emad Mostaque, CEO, Stability AI; Arvind Krishna, CEO, IBM; Satya Nadella, CEO, Microsoft:

We, the undersigned, call your attention to the inherent injustice in exploiting our works as part of your AI systems without our consent, credit, or compensation.

Generative AI technologies built on large language models owe their existence to our writings. These technologies mimic and regurgitate our language, stories, style, and ideas. Millions of copyrighted books, articles, essays, and poetry provide the “food” for AI systems, endless meals for which there has been no bill. You’re spending billions of dollars to develop AI technology. It is only fair that you compensate us for using our writings, without which AI would be banal and extremely limited.

We understand that many of the books used to develop AI systems originated from notorious piracy websites. Not only does the recent Supreme Court decision in Warhol v. Goldsmith make clear that the high commerciality of your use argues against fair use, but no court would excuse copying illegally sourced works as fair use. As a result of embedding our writings in your systems, generative AI threatens to damage our profession by flooding the market with mediocre, machine-written books, stories, and journalism based on our work.

In the past decade or so, authors experienced a forty percent decline in income, and the current median income for full-time writers in 2022 was only $23,000. The introduction of AI threatens to tip the scale, making it even more difficult, if not impossible, for writers – especially young writers and voices from under-represented communities – to earn a living from their profession.

We ask you, the leaders of AI, to mitigate the damage to our profession by taking the following steps:

1. Obtain permission for use of our copyrighted material in your generative AI programs.

2. Compensate writers fairly for the past and ongoing use of our works in your generative AI programs.

3. Compensate writers fairly for the use of our works in AI output, whether or not the outputs are infringing under current law.

We hope you will appreciate the gravity of our concerns and that you will work with us to ensure a healthy ecosystem for authors and journalists in the years to come.

The Wall Street Journal reported that artificial-intelligence products such as OpenAI’s ChatGPT and Google’s Bard are trained in part on vast data sets of text from the internet, but it’s often unknown whether and to what degree the companies secured permission to use various data sets. Some tech companies say scraping information from the web is fair use.

According to The Wall Street Journal, the Authors Guild said writers have seen a 40% decline in income over the last decade. The median income for full-time writers in 2022 was $22,330, according to a survey conducted by the organization. The letter said artificial intelligence further threatens the profession by saturating the market with AI-generated content.

“The output of AI will always be derivative in nature,” Maya Shanbhag Lang, president of the Authors Guild, said in a statement. “Our work cannot be used without consent, credit, and compensation. All three are a must.”

In my opinion, it is not morally acceptable to take someone else’s written work, without even attempting to ask for permission, and feed that content to an AI. The writers whose work was included in the training data should be well paid for their words.