Tag Archives: artificial intelligence

Every US Federal Agency Must Hire A Chief AI Officer

All US federal agencies will now be required to have a senior leader overseeing all AI systems they use, as the government wants to ensure that AI use in the public service remains safe, The Verge reported.

According to The Verge, Vice President Kamala Harris announced the new Office of Management and Budget (OMB) guidance in a briefing with reporters and said that agencies must also establish AI governance boards to coordinate how AI is used within the agency. Agencies will also have to submit an annual report to the OMB listing all AI systems they use, any risks associated with these, and how they plan on mitigating these risks.

“We have directed all federal agencies to designate a chief AI officer with the experience, expertise, and authority to oversee all AI technologies used by that agency, and this is to make sure that AI is used responsibly, understanding that we must have senior leaders across our government who are specifically tasked with overseeing AI adoption and use,” Harris told reporters.

The chief AI officer does not necessarily have to be a political appointee, though it depends on the federal agency’s structure. Governance boards must be created by the summer.

Ars Technica reported that the White House has announced the “first government-wide policy to mitigate risks of artificial intelligence (AI) and harness its benefits.” To coordinate these efforts, every federal agency must appoint a chief AI officer with “significant expertise in AI.”

Some agencies have already appointed chief AI officers, but any agency that has not must appoint a senior official over the next 60 days. If an official already appointed as a chief AI officer does not have the necessary authority to coordinate AI use in the agency, they must be granted additional authority or else a new chief AI officer must be named.

Ideal candidates, the OMB policy said, might include chief information officers, chief data officers, or chief technology officers.

As chief AI officers, appointees will serve as senior advisors on AI initiatives, monitoring and inventorying all agency uses of AI. They must conduct risk assessments to consider whether any AI uses are impacting “safety, security, civil rights, civil liberties, privacy, democratic values, human rights, equal opportunities, worker well-being, access to critical resources and services, agency trust and credibility, and market competition,” the OMB said.

Engadget reported that Vice President Kamala Harris said, “I believe that all leaders of the government, civil society, and the private sector have a moral, ethical and societal duty to make sure that artificial intelligence is adopted and advanced in a way that protects the public from harm while ensuring everyone is able to enjoy its benefits.”

In my opinion, it sounds like the use of AI within the federal government is going to be closely scrutinized. Ideally, the chief AI officers should be people who really know what they are doing.

US, Britain, And Other Countries Ink Agreement To Make AI “Secure By Design”

The United States, Britain and more than a dozen other countries on Sunday unveiled what a senior U.S. official described as the first detailed international agreement on how to keep artificial intelligence safe from rogue actors, pushing for companies to create AI systems that are “secure by design,” Reuters reported.

In a 20-page document unveiled Sunday, the 18 countries agreed that companies designing and using AI need to develop and deploy it in a way that keeps customers and the wider public safe from misuse.

According to Reuters, the agreement is non-binding and carries mostly general recommendations such as monitoring AI systems for abuse, protecting data from tampering and vetting software suppliers.

Still, the director of the U.S. Cybersecurity and Infrastructure Security Agency, Jen Easterly, said it was important that so many countries put their names to the idea that AI systems needed to put safety first.

The Hill reported that the United States, along with 17 other countries, unveiled an international agreement that aims to keep artificial intelligence (AI) systems safe from rogue actors and urges providers to follow “secure by design principles.”

According to The Hill, the 20-page document, jointly published Sunday by the U.S. Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency and the United Kingdom’s National Security Centre, provides a set of guidelines to ensure AI systems are built to “function as intended” without leaking sensitive data to unauthorized users.

Other countries featured in the agreement include Australia, Canada, Chile, the Czech Republic, Estonia, Germany, Israel, Italy, Japan, Nigeria, Poland, and Singapore.

Last month, the Biden administration issued a sweeping executive order focused on managing the risks of AI. The order includes new safety standards and worker protection principles, and directs federal agencies to accelerate the development of techniques for training AI systems while preserving the privacy of the training data.

iPhoneInCanada reported on the guidelines for artificial intelligence systems. The guidelines are divided into four key areas reflecting the stages of the AI system development life cycle, and they are fairly broad rather than specific:

Secure Design: This section focuses on the design stage, covering risk understanding, threat modeling and considerations for system and model design.

Secure Development: Guidelines for the development stage include supply chain security, documentation, and management of assets and technical debt.

Secure Deployment: This stage involves protecting infrastructure and models, developing incident management processes, and ensuring responsible release.

Secure Operation and Maintenance: Post-deployment, this section provides guidance on logging and monitoring, update management, and information sharing.

In my opinion, it makes sense for there to be specific guidelines on how AI is used. The guidelines could be adopted by various countries, and they should include protections for users, such as ensuring that sensitive data is not leaked to other users.

TikTok Debuts New Tools And Technology To Label AI Content

As more creators turn to AI for their artistic expression, there’s also a broader push for transparency around when AI was involved in content creation, TechCrunch reported. To address this concern, TikTok announced today it will launch a new tool that will allow creators to label their AI-generated content and will begin testing other ways to label AI-generated content automatically.

According to TechCrunch, the company says it felt the need to introduce AI labeling because AI content can potentially confuse or mislead viewers. Of course, TikTok had already updated its policy to address synthetic media, which requires people to label AI content that contains realistic images, audio, or video, like deepfakes, to help viewers contextualize the video and prevent the spread of misleading info.

However, TechCrunch reported, outside of the extreme case of using AI to intentionally mislead users, some AI-generated content can toe the line between seeming real or fake. In this gray area, more transparency is generally appreciated by end users so they know whether or not the content they’re viewing has been heavily edited or created with AI.

Billboard reported TikTok announced new tools to help creators label content that was generated by artificial intelligence. In addition, the company said on Tuesday that it plans to “start testing ways to label AI-generated content automatically.”

“AI enables incredible creative opportunities, but can potentially confuse or mislead viewers if they’re not aware content was generated or edited with AI,” the company wrote. “Labeling content helps address this, by making clear to viewers when content is significantly altered or modified by AI technology.”

According to Billboard, in July, President Biden’s administration announced that seven leading AI companies made voluntary commitments “to help move toward safe, secure, and transparent development of AI technology.”

One key point: “The companies commit to developing robust technical mechanisms to ensure that users know when content is AI generated, such as a watermarking system. This action enables creativity with AI to flourish but reduces the dangers of fraud and deception.”

Engadget reported TikTok is rolling out a toolset that lets users label posts that have been created or enhanced by artificial intelligence. This move comes after the social media giant added a number of filters for video uploads that made heavy use of AI, and an image generator to help create unique backgrounds.

According to Engadget, the filters are being renamed to make it clearer which ones rely on generative AI to further assist with labeling. Moving forward, these filters will have “AI” in the name somewhere.

The new labels aren’t exclusive to TikTok-approved filters, Engadget reported. You can apply the label to any content that’s been completely generated or significantly edited with AI, no matter where the content was sourced from.

In my opinion, it is a good idea for TikTok to enforce the labeling of AI content posted to its platform. The labels should be clear enough to make it easy for viewers to understand that what they are seeing has been created by, or enhanced with, AI.

Elon Musk Created His Own Artificial Intelligence Company

Elon Musk has created a new artificial intelligence company called X.AI Corp. that is incorporated in Nevada, according to a state filing, The Wall Street Journal reported.

According to The Wall Street Journal, Mr. Musk is the only listed director of the company, and Jared Birchall, the director of Mr. Musk’s family office, is its secretary, according to the filing made last month. X.AI has authorized the sale of 100 million shares for the privately held company.

The business invokes the name of what Mr. Musk has described as his effort to create an everything app called X. Twitter, also owned by Mr. Musk, recently changed its name to X Corp. The social-media company was also incorporated in Nevada instead of its previous domicile in Delaware, according to a legal filing last week. X Corp. has a parent company named X Holdings Corp.

The Wall Street Journal also reported that compared with Delaware, Nevada’s laws grant more discretion and protection to a company’s management and officers, according to legal experts.

As part of his AI ambitions, Mr. Musk has spent the past few months recruiting researchers with the goal of creating a rival effort to OpenAI, the artificial intelligence company that launched the viral chatbot ChatGPT in November, according to researchers familiar with the outreach. OpenAI has set off a fever pitch of investor interest in the sector.

The Wall Street Journal also reported: Mr. Musk has a long association with the X name. His former online banking startup that later became PayPal after a merger with another firm was called X.com. And he refers to one of his children as X.

The Hill reported Twitter owner Elon Musk founded a new artificial intelligence company named X.AI, according to a Nevada business filing from last month.

According to The Hill, Musk has been publicly skeptical of artificial intelligence in the past and has even called for a complete pause in AI development, citing “risks to society” he says the technology poses.

“Recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control,” a group of tech experts, including Musk, said in an open letter calling for the development pause last month.

The Hill also reported that Musk was a co-founder of OpenAI, one of the leading artificial intelligence firms, but left the company in 2018 after a reported internal power struggle.

He has reportedly sought to build a rival to OpenAI, recruiting artificial intelligence engineers for a new venture for months.

Does the world really need yet another artificial intelligence chatbot? Personally, I don’t think so. However, Elon Musk clearly thinks that the world needs his X.AI chatbot. I cannot help but wonder how long it will be before Elon Musk abandons that project.