Tag Archives: AI

AMD Buys AI Equipment Maker For Nearly $5 Billion



Advanced Micro Devices agreed to pay nearly $5 billion for ZT Systems, a designer of data-center equipment for cloud computing and artificial intelligence, bolstering the chip maker’s challenge to Nvidia’s dominance in AI computation, The Wall Street Journal reported.

The deal, among AMD’s largest, is part of a push to offer a broader menu of chips, software and system designs to big data-center customers such as Microsoft and Facebook owner Meta Platforms, promising better performance through tight linkages between those products.

Secaucus, N.J.-based ZT Systems, which isn’t publicly traded, was founded in 1994. It designs and makes servers, server racks and other infrastructure that house and connect chips in the giant data centers that power artificial-intelligence systems such as ChatGPT.

AMD posted a press release titled: “AMD to Significantly Expand Data Center AI Systems Capabilities with Acquisition of Hyperscale Solutions Provider ZT Systems”

Strategic acquisition to provide AMD with industry-leading systems expertise to accelerate deployment of optimized rack-scale solutions addressing $400 billion data center AI accelerator opportunity in 2027 —

ZT Systems, a leading provider of AI and general purpose compute infrastructure for the world’s largest hyperscale providers, brings extensive AI systems expertise that complements AMD silicon and software capabilities.

Addition of world-class design and customer enablement teams to accelerate deployment of AMD AI rack-scale systems with cloud and enterprise customers.

AMD to seek strategic partner to acquire ZT Systems’ industry-leading manufacturing business.

Transaction expected to be accretive on a non-GAAP basis by the end of 2025…

Reuters reported AMD said on Monday it plans to acquire server maker ZT Systems for $4.9 billion as the company seeks to expand its portfolio of artificial intelligence chips and hardware and battle Nvidia.

AMD plans to pay for 75% of the ZT Systems acquisition with cash (roughly $3.7 billion of the $4.9 billion price) and the remainder in stock. The company had $5.34 billion in cash and short-term investments as of the second quarter.

The computing requirements of AI have pushed tech companies to string together thousands of chips in clusters to achieve the necessary data-crunching horsepower. At that scale, the design of whole server systems has become increasingly important, which is why AMD is acquiring ZT Systems.

The addition of ZT Systems engineers will allow AMD to more quickly test and roll out its latest AI graphics processing units (GPUs) at the scale cloud computing giants such as Microsoft require, AMD CEO Lisa Su said in an interview with Reuters.

In my opinion, it looks like AMD is ready to test whether it can overtake Nvidia. It will be interesting to see if it succeeds.


Mark Zuckerberg Argues That ‘Open Source AI’ Is The Path Forward



Meta posted “Expanding our open source large language models responsibly”. From the Meta blog:

Takeaways:

  • Meta is committed to openly accessible AI. Read Mark Zuckerberg’s letter detailing why open source is good for developers, good for Meta, and good for the world.
  • Open source has multiple benefits: It helps ensure that more people around the world can access the opportunities that AI provides, guards against concentrating power in the hands of a small few, and deploys technology more equitably. And we believe it will lead to safer AI outcomes across society. That’s why we continue to advocate for making open access to AI the industry standard.
  • We’re bringing open intelligence to all by introducing the Llama 3.1 collection of models, which expand context length to 128K, add support across eight languages, and include Llama 3.1 405B — the first frontier-level open source AI model.
  • As we improve the capabilities of our models, we’re also scaling our evaluations, red teaming, and mitigations, including for catastrophic risks.
  • We’re bolstering our system-level safety approach with new security and safety tools, which include Llama Guard 3 (an input and output multilingual moderation tool), Prompt Guard (a tool to protect against prompt injections), and CyberSecEval 3 (evaluations that help AI model and product developers understand and reduce generative AI cybersecurity risk). We’re also continuing to work with a global set of partners to create industry-wide standards that benefit the open source community.
  • We prioritize responsible AI development, and want to empower others to do the same. As part of our responsible release efforts, we’re giving developers new tools and resources to implement the best practices in our Responsible Use Guide.

Ars Technica reported: In the AI world, there’s a buzz in the air about a new AI language model released Tuesday by Meta: Llama 3.1 405B. The reason? It’s potentially the first time anyone can download a GPT-4-class large language model (LLM) for free and run it on their own hardware.

You’ll still need some beefy hardware: Meta says it can run on a “single server node,” which isn’t desktop PC-grade equipment. But it’s a provocative shot across the bow of “closed” AI model vendors such as OpenAI and Anthropic.

“Llama 3.1 405B is the first openly available model that rivals the top AI models when it comes to state-of-the-art capabilities in general knowledge, steerability, math, tool use, and multilingual translation,” says Meta. Company CEO Mark Zuckerberg calls 405B “the first frontier-level open source AI model.”
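To make the “download it and run it on your own hardware” point concrete, here is a minimal sketch using the Hugging Face transformers library. Everything below is my illustration rather than anything from Meta’s announcement: the model ID and settings are assumptions, and since the 405B model far exceeds desktop hardware, the sketch loads the much smaller 8B sibling instead.

```python
# A minimal sketch (an illustration, not Meta's documentation) of running
# an open-weights Llama 3.1 model locally with Hugging Face transformers.
# The 405B model needs server-class hardware, so this loads the 8B
# sibling, which fits on a single high-end GPU. Note: the meta-llama
# models are gated, so you must accept Meta's license on the Hugging
# Face Hub and authenticate before the weights will download.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Meta-Llama-3.1-8B-Instruct",
    device_map="auto",       # place weights on available GPU(s)
    torch_dtype="bfloat16",  # half precision to cut memory use
)

result = generator(
    "In one sentence, what is an open-weights language model?",
    max_new_tokens=64,
)
print(result[0]["generated_text"])
```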

The Register reported: First teased alongside the launch of its smaller eight- and 70-billion parameter siblings earlier this spring, Meta’s Llama 3.1 405B was trained on more than 15 trillion tokens — think of these as fragments of words, phrases, figures and punctuation — using 16,000 Nvidia H100 GPUs.

According to The Register, in total, the Facebook giant says training the 405-billion-parameter model required the equivalent of 30.84 million GPU hours and produced the equivalent of 11,390 tons of CO2 emissions.
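As a rough back-of-the-envelope check (my arithmetic, not The Register’s), those two figures imply about eighty days of wall-clock training, assuming all 16,000 GPUs ran concurrently for the whole job:

```python
# Implied wall-clock training time from the reported figures, assuming
# all 16,000 H100 GPUs ran concurrently for the entire training run.
gpu_hours = 30_840_000  # total GPU-hours reported for Llama 3.1 405B
num_gpus = 16_000       # H100 GPUs reportedly used

wall_clock_hours = gpu_hours / num_gpus  # 1,927.5 hours
wall_clock_days = wall_clock_hours / 24  # about 80.3 days

print(f"{wall_clock_hours:,.1f} hours, or about {wall_clock_days:.1f} days")
```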

In my opinion, large corporations should not be using up resources that humans need in order to feed an AI. Training at this scale consumes a huge amount of water and adds CO2 emissions to the air.


Apple, Nvidia, Anthropic Used Thousands Of Swiped YouTube Videos To Train AI



Tech companies are turning to controversial tactics to feed their data-hungry artificial intelligence models, vacuuming up books, websites, photos, and social media posts often unbeknownst to the creators, WIRED reported.

AI companies are generally secretive about their sources of training data, but an investigation by Proof News found some of the wealthiest AI companies in the world have used material from thousands of YouTube videos to train AI. Companies did so despite YouTube’s rules against harvesting materials from the platform without permission.

One investigation found that subtitles from 173,536 YouTube videos, siphoned from more than 48,000 channels, were used by Silicon Valley heavyweights, including Anthropic, Nvidia, Apple, and Salesforce.

The dataset, called YouTube Subtitles, contains video transcripts from educational and online learning channels like Khan Academy, MIT, and Harvard. The Wall Street Journal, NPR, and the BBC also had their videos used to train AI, as did The Late Show with Stephen Colbert, Last Week Tonight With John Oliver, and Jimmy Kimmel Live.

9to5Mac reported that a number of tech giants, including Apple, trained AI models on YouTube videos without the creators’ consent, according to a new report.

They did this by using subtitle files downloaded by a third party from more than 170,000 videos. Creators affected include tech reviewer Marques Brownlee (MKBHD), MrBeast, PewDiePie, Stephen Colbert, John Oliver, and Jimmy Kimmel.

The subtitle files are effectively transcripts of the video content.

The downloads were reportedly performed by a nonprofit called EleutherAI, which says it helps developers train AI models. While the aim appears to have been to provide training materials to small developers and academics, the dataset has also been used by several tech giants, including Apple.

According to 9to5Mac, it’s important to emphasize here that Apple didn’t download the data itself; this was instead performed by EleutherAI. It is this organization that appears to have broken YouTube’s terms and conditions.

The Verge reported that, as part of its investigation, Proof News also released an interactive lookup tool. You can use its search engine feature to see if your content, or your favorite YouTuber’s, appears in the dataset.

The subtitles dataset is part of a larger collection of material from the nonprofit EleutherAI called The Pile, an open-source collection that also contains datasets of books, Wikipedia articles, and more. Last year, an analysis of one dataset called Books3 revealed which authors’ work had been used to train AI systems, and the dataset has been cited in lawsuits by authors against the companies that used it to train AI.

In my opinion, scraping content creators’ work – even if it’s only the subtitles of a YouTube video – should be illegal. Work made by humans should not be fed to AI systems without the creators’ consent.


‘Little Tech’ Brings A Big Flex To Sacramento



One of Silicon Valley’s heaviest hitters is wading into the fight over California’s AI regulations, Politico reported.

Y Combinator — the venture capital firm that brought us Airbnb, Dropbox, and DoorDash — today issued its opening salvo against a bill by state Sen. Scott Wiener that would require large AI models to undergo safety testing.

Wiener, a San Francisco Democrat whose district includes YC, says he’s proposing reasonable precautions for a powerful technology. But the tech leaders at Y Combinator disagree, and are joining a chorus of other companies and groups that say the bill will stifle California’s emerging marquee industry.

“This bill, as it stands, could gravely harm California’s ability to retain its AI talent and remain the location of choice for AI companies,” read the letter, which was signed by more than 140 AI startup founders.

It’s the first time the startup incubator, led by prominent SF tech denizen Garry Tan, has publicly weighed in on the bill. They argue it could hurt the many fledgling companies Y Combinator supports — about half of which are now AI-related.

Adam Thierer posted “Coalition Letter on California SB-1047, ‘The Safe and Secure Innovation for Frontier Artificial Intelligence Systems Act’” on R Street:

Dear Senator Wiener and members of the California State Legislature,

We, the undersigned organizations and individuals, are writing to express our serious concerns about SB 1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Systems Act. We believe that the bill, as currently written, would have severe unintended consequences that could stifle innovation, harm California’s economy, and undermine America’s global leadership on AI.

Our main concerns with SB 1047 are as follows: 

The application of the precautionary principle, codified as a “limited duty exemption,” would require developers to guarantee that their models cannot be misused for various harmful purposes, even before training begins. Given the general-purpose nature of AI technology, this is an unreasonable and impractical standard that could expose developers to criminal and civil liability for actions beyond their control.

The bill’s compliance requirements, including implementing safety guidance from multiple sources and paying fees to fund the Frontier Model Division, would be expensive and time-consuming for many AI companies. This could drive businesses out of California and discourage new startups from forming. Given California’s current budget deficit and the state’s reliance upon capital gains taxation, even a marginal shift of AI startups to other states could be deleterious to the state government’s fiscal position…

Y Combinator also posted a separate letter to Senator Wiener and two lawmakers who sit on key committees. Here is a small piece from that letter:

Liability and regulation that is unusual in its burdens: The responsibility for the misuse of LLMs should rest with those who abuse these tools, not with the developers who create them. Developers cannot predict all possible applications of their models, and holding them liable for unintended misuse could stifle innovation and discourage investment in AI research. Furthermore, creating a penalty of perjury would mean that AI software developers could go to jail simply for failing to anticipate misuse of their software – a standard of product liability no other product in the world suffers from.

In my opinion, it appears that Y Combinator has concerns about California’s proposed rules regarding safety in AI. I’m not sure why the firm is so upset about the state requiring safety protocols for AI.

 


Meta Pauses AI Models Launch In Europe Due To Irish Request



Meta Platforms will not launch its Meta AI models in Europe for now after the Irish privacy regulator told it to delay its plan to harness data from Facebook and Instagram users, the U.S. social media company said on Friday, Reuters reported.

The move by Meta came after complaints and a call by advocacy group NOYB to data protection authorities in Austria, Belgium, France, Germany, Greece, Italy, Ireland, the Netherlands, Norway, Poland and Spain to act against the company.

At issue is Meta’s plan to use personal data to train its artificial intelligence (AI) models without seeking consent, although the company has said it would use publicly available and licensed online information.

Meta on Friday said the Irish privacy watchdog had asked it to delay training its large language models (LLMs) using public content shared by Facebook and Instagram adult users.

“We’re disappointed by the request from the Irish Data Protection Commission (DPC), our lead regulator, on behalf of the European DPAs … particularly since we incorporated regulatory feedback and the European DPAs have been informed since March,” the company said in an updated blogpost.

The Irish Data Protection Commission wrote:

The DPC’s Engagement with Meta On AI

The DPC welcomes the decision by Meta to pause its plans to train its large language model using public content shared by adults on Facebook and Instagram across the EU/EEA. This decision followed intensive engagement between the DPC and Meta. The DPC, in co-operation with its fellow EU data protection authorities, will continue to engage with Meta on the issue.

The Verge reported Meta is putting plans for its AI assistant on hold in Europe after receiving objections from Ireland’s privacy regulator, the company announced on Friday.

In a blog post, Meta said the Irish Data Protection Commission (DPC) asked the company to delay training its large language models on content that had been publicly posted to Facebook and Instagram profiles.

Meta said it is “disappointed” by the request, “particularly since we incorporated regulatory feedback and the European [Data Protection Authorities] have been informed since March,” per the Irish Independent. Meta had recently begun notifying European users that it would collect their data and offered an opt-out option in an attempt to comply with European privacy laws.

According to The Verge, Meta said it will “continue to work collaboratively with the DPC.” But its blog post says that Google and OpenAI have “already used data from Europeans to train AI” and claims that if regulators don’t let it use users’ information to train its models, Meta can only deliver an inferior product.

“Put simply, without including local information we’d only be able to offer people a second-rate experience. This means we aren’t able to launch Meta AI in Europe at the moment.”

In my opinion, I don’t think it should be legal for companies (like Meta and others) to scrape data off of social media platforms and feed it to their AI. It will never be OK to scrape other people’s posts – unless Meta pays a significant amount of money to the users it is stealing from.


Apple Intelligence Is The Company’s New Generative AI Offering



On Monday, at WWDC 2024, Apple unveiled its long-awaited, ecosystem-wide push into generative AI. As earlier rumors suggested, the new feature is called Apple Intelligence (AI, get it?). The company promised the feature will be built with safety at its core, along with highly personalized experiences, TechCrunch reported.

According to TechCrunch, the company has been pushing the feature as integral to all of its various operating system offerings, including iOS, macOS, and the latest, visionOS.

The system is built on large language and intelligence models. Much of that processing is done locally, according to the company, utilizing the latest generation of Apple silicon. “Many of these models run entirely on device,” SVP Craig Federighi claimed during the event.

That said, these consumer systems still have limitations. As such, some of the heavy lifting needs to be done off-device, in the cloud. Apple is adding Private Cloud Compute to the offering. The back end uses servers that run on Apple chips, in a bid to increase privacy for this highly personal data.

Apple introduced Apple Intelligence. Here is part of the press release:

Apple today introduced Apple Intelligence, the personal intelligence system for iPhone, iPad, and Mac that combines the power of generative models with personal context to deliver intelligence that’s incredibly useful and relevant. 

Apple Intelligence is deeply integrated into iOS 18, iPadOS 18, and macOS Sequoia. It harnesses the power of Apple silicon to understand and create language and images, take action across apps, and draw from personal context to simplify and accelerate everyday tasks. With Private Cloud Compute, Apple sets a new standard of privacy in AI, with the ability to flex and scale computational capacity between on-device processing and larger, server-based models that run on dedicated Apple silicon servers.

“We’re thrilled to introduce a new chapter in Apple innovation. Apple Intelligence will transform what users can do with our products — and what our products can do for our users,” said Tim Cook, Apple’s CEO. “Our unique approach combines generative AI with a user’s personal context to deliver truly helpful intelligence. And it can access that information in a completely private and secure way to help users do the things that matter most to them. This is AI as only Apple can deliver it, and we can’t wait for users to experience what it can do.”

Engadget reported that Apple Intelligence will be powered both by Apple’s homegrown tech and by a partnership with OpenAI, the maker of ChatGPT, Apple announced.

One of Apple’s biggest AI upgrades is coming to Siri. The company’s built-in voice assistant will now be powered by large language models, the tech that underlies all modern-day generative AI. Siri, which has languished over the years, may become more useful now that it can interact more closely with Apple’s operating systems and apps.

Apple Intelligence will also use AI to record, transcribe, and summarize your phone calls, rivaling third-party transcription services like Otter. All participants are automatically notified when you start recording, and a transcript of the conversation’s main points is automatically generated at the end.

In my opinion, I’m not thrilled about any of the AI-generated additions that have suddenly popped up. I’m hoping that Apple will allow me to turn off the AI-generated features.


OpenAI, WSJ Owner News Corp Strike Content Deal Valued At $250 Million



Wall Street Journal owner News Corp struck a major content-licensing pact with generative artificial-intelligence company OpenAI, aiming to cash in on a technology that promises to have a profound impact on the news-publishing industry, The Wall Street Journal reported.

The deal could be worth more than $250 million over five years, including compensation in the form of cash and credits for use of OpenAI technology, according to people familiar with the situation. The deal lets OpenAI use content from News Corp’s consumer-facing news publications, including archives, to answer users’ queries and train its technology.

“The pact acknowledges that there is a premium for premium journalism,” News Corp Chief Executive Robert Thomson said in a memo to employees Wednesday. “The digital age has been characterized by the dominance of distributors, often at the expense of creators, and many media companies have been swept away by a remorseless technological tide. The onus is now on us to make the most of this providential opportunity.”

The rise of generative AI tools such as OpenAI’s humanlike chatbot ChatGPT is poised to transform the publishing business. AI companies are hungry for publishers’ content, which can help them refine their models and create new products such as AI-powered search.

CNBC reported that, as part of the deal, OpenAI will be able to display content from News Corp-owned outlets within its ChatGPT chatbot in response to user questions. The startup will also use the content to “enhance its products,” which likely means training its artificial intelligence models.

News Corp. will also “share journalistic expertise to help ensure the highest journalism standards are present across OpenAI’s offering” as part of the deal, according to a release.

“We believe a historic agreement will set new standards for veracity, for virtue, and for value in the digital age,” Robert Thomson, CEO of News Corp, said Wednesday in a release. “We are delighted to have found principled partners in Sam Altman and his trusty, talented team who understand the commercial and social significance of journalists and journalism.”

The Hollywood Reporter wrote that OpenAI has cut another major media licensing deal. The artificial intelligence firm has inked a deal with News Corp that will bring content from its stable of media outlets to ChatGPT and other OpenAI products.

“Through this partnership, OpenAI has permission to display content from News Corp mastheads in response to user questions and to enhance its products, with the ultimate objective of providing people the ability to make informed choices based on reliable information and news sources,” the companies said in the announcement.

The News Corp. properties The Wall Street Journal, Barron’s, MarketWatch, Investor’s Business Daily, FN, and New York Post; The Times, The Sunday Times, and The Sun; The Australian, news.com.au, The Daily Telegraph, The Courier Mail, The Advertiser, and Herald Sun are all part of the deal, terms of which were not disclosed.

In my opinion, it seems like many corporations have decided that AI-generated content is the way to go. My concern is that large corporations will decide that OpenAI is better for their needs, and will begin laying off human employees.