Tag Archives: OpenAI

Microsoft And Apple Drop OpenAI Seats Amid Antitrust Scrutiny



Microsoft has given up its seat as an observer on the board of OpenAI while Apple will not take up a similar position, amid growing scrutiny by global regulators of Big Tech’s investments in AI start-ups, Financial Times reported.

Microsoft, which has invested $13bn in the maker of the generative AI chatbot ChatGPT, said in a letter to OpenAI that its withdrawal from its board role would be “effective immediately”.

Apple had also been expected to take an observer role on OpenAI’s board as part of a deal to integrate ChatGPT into the iPhone maker’s devices, but would not do so, according to a person with direct knowledge of the matter. Apple declined to comment.

OpenAI would instead host regular meetings with partners such as Microsoft and Apple and investors Thrive Capital and Khosla Ventures — part of “a new approach to informing and engaging key strategic partners” under Sarah Friar, the former Nextdoor boss who was hired as its first chief financial officer last month, an OpenAI spokesperson said.

The move comes as antitrust authorities in the EU and US examine the partnership between Microsoft and OpenAI as part of broader concerns about competition in the rapidly growing sector.

CNBC reported that Microsoft said it would give up its observer seat on the OpenAI board amid regulatory scrutiny into generative artificial intelligence in Europe and the U.S.

Microsoft’s deputy general counsel, Keith Dolliver, wrote a letter to OpenAI late Tuesday, saying that the position had provided insights into the board’s activities without compromising its independence.

But the letter, seen by CNBC, added that the seat was no longer needed as Microsoft had “witnessed significant progress from the newly formed board.” CNBC reached out to Microsoft and OpenAI for comment.

The European Commission previously said Microsoft could face an antitrust investigation, as it looked at the markets for virtual worlds and generative artificial intelligence.

The commission, which is the executive arm of the EU, said in January that it is “looking into some of the agreements that have been concluded between large digital market players and generative AI developers and providers” and singled out the Microsoft-OpenAI tie-up as a particular deal that it will be studying.

9to5Mac reported: Just eight days after it was revealed that Apple Fellow Phil Schiller would join the OpenAI board as an observer, it’s now being reported that this won’t happen.

Instead, OpenAI will simply commit to regular meetings with Schiller and other partners and investors…

The change of plan appears to relate to antitrust concerns. Regulators in both the U.S. and Europe are already investigating Microsoft’s investment in OpenAI, and it was possible that Apple could have opened itself up to a similar investigation by taking a seat on the board, even without voting powers.

In my opinion, OpenAI needs to rethink whether it really wants board members from larger corporations involved in what OpenAI does. Microsoft and Apple, for their part, appear not to want seats on the board.


Ilya Sutskever, OpenAI’s Former Chief Scientist, Launches New AI Company



Ilya Sutskever, one of OpenAI’s co-founders, has launched a new company, Safe Superintelligence Inc. (SSI), just one month after formally leaving OpenAI, TechCrunch reported.

Sutskever, who was OpenAI’s longtime chief scientist, founded SSI with former Y Combinator partner Daniel Gross and ex-OpenAI engineer Daniel Levy.

At OpenAI, Sutskever was integral to the company’s efforts to improve AI safety with the rise of “superintelligent” AI systems, an area he worked on alongside Jan Leike, who co-led OpenAI’s Superalignment team. Yet both Sutskever and Leike left the company dramatically in May after falling out with leadership at OpenAI over how to approach AI safety. Leike now heads a team at rival AI shop Anthropic.

Sutskever has been shining a light on the thornier aspects of AI safety for a long time now. In a blog post published in 2023, Sutskever, writing with Leike, predicted that AI with intelligence exceeding that of humans could arrive within the decade — and that when it does, it won’t necessarily be benevolent, necessitating research into ways to control and restrict it.

Ilya Sutskever, Daniel Gross, and Daniel Levy posted the following on June 19, 2024:

Safe Superintelligence Inc.

Building safe superintelligence (SSI) is the most important technical problem of our time.

We have started the world’s first straight-shot SSI lab, with one goal and one product: a safe superintelligence.

It is called Safe Superintelligence Inc.

SSI is our mission, our name, and our entire product roadmap, because it is our sole focus. Our team, investors, and business model are all aligned to achieve SSI.

We approach safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs. We plan to advance capabilities as fast as possible while making sure our safety always remains ahead.

This way, we can scale in peace.

Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures.

We are an American company with offices in Palo Alto and Tel Aviv, where we have deep roots and the ability to recruit top technical talent.

We are assembling a lean, cracked team of the world’s best engineers and researchers dedicated to focusing on SSI and nothing else.

If that’s you, we offer an opportunity to do your life’s work and help solve the most important technical challenge of our age.

Now is the time. Join us.

The Verge reported: Last year, Sutskever led the push to oust OpenAI CEO Sam Altman. Sutskever left OpenAI in May and hinted at the start of a new project. Shortly after Sutskever’s departure, AI researcher Jan Leike announced his resignation from OpenAI, citing safety processes that have “taken a backseat to shiny products.” Gretchen Krueger, a policy researcher at OpenAI, also mentioned safety concerns when announcing her departure.

In my opinion, it sounds like some of the people who were working at OpenAI have decided to create an entirely new company. Whether that decision was made out of frustration, or because they wanted to branch out on their own, it appears they are looking for other people to join them.


OpenAI Has A New Safety Team Led By Sam Altman



OpenAI is forming a new safety team, and it’s led by CEO Sam Altman, along with board members Adam D’Angelo and Nicole Seligman. The committee will make recommendations on “critical safety and security decisions for OpenAI projects and operations” — a concern several key AI researchers shared when leaving the company this month, The Verge reported.

For its first task, the new team will “evaluate and further develop OpenAI’s processes and safeguards.” It will then present its findings to OpenAI’s board, which all three of the safety team’s leaders have a seat on. The board will then decide how to implement the safety team’s recommendations.

Its formation follows the departure of OpenAI co-founder and chief scientist Ilya Sutskever, who supported the board’s attempted coup to dethrone Altman last year. He also co-led OpenAI’s Superalignment team, which was created to “steer and control AI systems much smarter than us.”

The OpenAI Board posted the following:

Today, the OpenAI Board formed a Safety and Security Committee led by director Bret Taylor (Chair), Adam D’Angelo, Nicole Seligman, and Sam Altman (CEO). This committee will be responsible for making recommendations to the full Board on critical safety and security decisions for OpenAI projects and operations.

OpenAI has recently begun training its next frontier model and we anticipate the resulting systems to bring us to the next level of capabilities on our path to AGI. While we are proud to build and release models that are industry-leading on both capabilities and safety, we welcome a robust debate at this important moment.

A first task of the Safety and Security Committee will be to evaluate and further develop OpenAI’s processes and safeguards over the next 90 days. At the conclusion of the 90 days, the Safety and Security Committee will share their recommendations with the full Board. Following the Board’s review, OpenAI will publicly share an update on adopted recommendations in a manner that is consistent with safety and security….

OpenAI says it’s training the next frontier model, according to a press release on Tuesday, and anticipates it will bring the startup one step closer to artificial intelligence systems that are generally smarter than humans. The company also announced a new Safety and Security Committee to guide critical safety and security decisions, led by CEO Sam Altman and other OpenAI board members, Gizmodo reported.

“While we are proud to build and release models that are industry-leading on both capabilities and safety, we welcome a robust debate at this important moment,” OpenAI said in a press release.

The announcement follows a tumultuous month for OpenAI, in which a group led by Ilya Sutskever and Jan Leike that researched AI risks existential to humanity was disbanded. Former OpenAI board members Helen Toner and Tasha McCauley wrote in The Economist on Sunday that these developments and others following the return of Altman “bode ill for the OpenAI experiment in self-governance.”

In my opinion, companies like OpenAI appear to be pushing boundaries to see what their AI can do. This, unfortunately, includes the company taking Scarlett Johansson’s voice without her permission.


OpenAI, WSJ Owner News Corp Strike Content Deal Valued At $250 Million



Wall Street Journal owner News Corp struck a major content-licensing pact with generative artificial-intelligence company OpenAI, aiming to cash in on a technology that promises to have a profound impact on the news-publishing industry, The Wall Street Journal reported.

The deal could be worth more than $250 million over five years, including compensation in the form of cash and credits for use of OpenAI technology, according to people familiar with the situation. The deal lets OpenAI use content from News Corp’s consumer-facing news publications, including archives, to answer users’ queries and train its technology.

“The pact acknowledges that there is a premium for premium journalism,” News Corp Chief Executive Robert Thomson said in a memo to employees Wednesday. “The digital age has been characterized by the dominance of distributors, often at the expense of creators, and many media companies have been swept away by a remorseless technological tide. The onus is now on us to make the most of this providential opportunity.”

The rise of generative AI tools such as OpenAI’s humanlike chatbot ChatGPT is poised to transform the publishing business. AI companies are hungry for publishers’ content, which can help them refine their models and create new products such as AI-powered search.

CNBC reported that, as part of the deal, OpenAI will be able to display content from News Corp-owned outlets within its ChatGPT chatbot in response to user questions. The startup will also use the content to “enhance its products,” which likely means training its artificial intelligence models.

News Corp. will also “share journalistic expertise to help ensure the highest journalism standards are present across OpenAI’s offering” as part of the deal, according to a release.

“We believe a historic agreement will set new standards for veracity, for virtue, and for value in the digital age,” Robert Thomson, CEO of News Corp, said Wednesday in a release. “We are delighted to have found principled partners in Sam Altman and his trusty, talented team who understand the commercial and social significance of journalists and journalism.”

The Hollywood Reporter wrote OpenAI has cut another major media licensing deal. The artificial intelligence firm has inked a deal with News Corp that will bring content from its stable of media outlets to ChatGPT and other OpenAI products.

“Through this partnership, OpenAI has permission to display content from News Corp mastheads in response to user questions and to enhance its products, with the ultimate objective of providing people the ability to make informed choices based on reliable information and news sources,” the companies said in the announcement.

The News Corp. properties The Wall Street Journal, Barron’s, MarketWatch, Investor’s Business Daily, FN, and New York Post; The Times, The Sunday Times, and The Sun; The Australian, news.com.au, The Daily Telegraph, The Courier Mail, The Advertiser, and Herald Sun are all part of the deal, terms of which were not disclosed.

In my opinion, it seems like many corporations have decided that AI-generated content is the best way to go. My concern is that large corporations will decide that OpenAI is better for their needs, and will begin layoffs of human employees.


Apple And OpenAI Are Preparing A Major Announcement At WWDC



Apple Inc. has faced its share of challenges during Tim Cook’s tenure, but none may be bigger than the one it’s contending with now: the need to come from behind and win in artificial intelligence, Bloomberg reported.

As chief executive officer, Cook successfully steered Apple away from a potential rut after the death of Steve Jobs in 2011. He navigated a trade war with China, proved the company could still pioneer new product categories, and fought off smartphone rivals like Samsung Electronics Co. But the dawn of AI is his biggest test to date.

To give a sense of what Cook is up against, let’s compare this to a basketball game. Coming in, Apple had actually practiced longer than its rivals and has the home court. After all, it launched the Siri digital assistant in 2011, years before others got into the space. And yet, the game is now underway and Apple is already down 20 points.

Even if it’s only the game’s first quarter, staging a comeback is going to be difficult for Apple. Its competitors (OpenAI and Alphabet Inc.’s Google) have become AI superstars, and they’re only getting stronger as the contest goes on. Apple’s priority now is not getting smoked on its home floor.

AppleInsider reported a new claim that Apple’s current schedule doesn’t include updates to its Mac Pro and Mac Studio machines until the middle of 2025.

The Mac Studio and Mac Pro got their most recent refreshes at the 2023 WWDC, which makes for a bit of a wait until the next refresh. Bloomberg’s Mark Gurman reports that the two pro-level Macs will be skipped in 2024. If accurate, this would mean a two-year refresh cycle for Apple’s current highest-end machines.

Apple’s M3 chip debuted with Pro and Max versions out of the gate, at the October 2023 MacBook Pro-centric event. Other chips started at the base and worked their way up to Pro, Max, and Ultra.

It took over a year for M1 to get an Ultra variant. The M1 Ultra debuted in the then-new Mac Studio.

Apple’s M1 and M2 chips in Mac Pro and Mac Studio had clear interconnects, so a chip like the Ultra was a clear possibility relatively early. The M3 does not have this obvious interconnect, so it’s possible Apple had this road map in mind all along.

MacRumors reported Apple is poised to unveil an auto-summarization feature for notifications as part of a series of new artificial intelligence features in iOS 18, according to Bloomberg’s Mark Gurman.

“As part of the changes, the company will improve Siri’s voice capabilities, giving it a more conversational feel, and add features that help users with their day-to-day lives — an approach it calls ‘proactive intelligence.’

“That includes services like auto-summarizing notifications from your iPhone, giving a quick synopsis of news articles and transcribing voice memos, as well as improving existing features that auto-populate your calendar and suggest apps. There will be some enhancements to photos in the form of AI-based editing, but none of those features will impress people who have used AI in Adobe Inc.’s apps for the last several months.”

In my opinion, it sounds like the upcoming AI features will make those who enjoy using artificial intelligence on their devices happy. WWDC begins on June 10, and I assume Apple will have more to say about this at the event.


OpenAI Putting ‘Shiny Products’ Above Safety, Says Departing Researcher



A former senior employee at OpenAI has said the company behind ChatGPT is prioritizing “shiny products” over safety, revealing that he quit after a disagreement over key aims reached a “breaking point,” The Guardian reported.

Jan Leike was a key safety officer at OpenAI as its co-head of superalignment, ensuring that powerful artificial intelligence systems adhered to human values and aims. His intervention comes before a global artificial intelligence summit in Seoul next week, where politicians, experts and tech executives will discuss oversight of the technology.

Leike resigned days after the San Francisco-based company launched its latest AI model, GPT-4o. His departure meant two senior safety figures at OpenAI had left that week, following the resignation of Ilya Sutskever, OpenAI’s co-founder and fellow co-head of superalignment.

Leike detailed the reasons for his departure in a thread on X posted on Friday, in which he said safety culture had become a lower priority.

“Over the past years, safety culture and processes have taken a backseat to shiny products,” he wrote.

OpenAI was founded with the goal of ensuring that artificial general intelligence, which it describes as “AI systems that are generally smarter than humans”, benefits all of humanity. In his X posts, Leike said he had been disagreeing with OpenAI’s leadership about the company’s priorities for some time, but that the standoff had “finally reached a breaking point.”

Engadget reported that in the summer of 2023, OpenAI created a “Superalignment” team whose goal was to steer and control future AI systems that could be so powerful they could lead to human extinction. Less than a year later, that team is dead.

OpenAI told Bloomberg that the company was “integrating the group more deeply across its research efforts to help the company achieve its safety goals.” But a series of tweets by Jan Leike, one of the team’s leaders who recently quit, revealed internal tensions between the safety team and the larger company.

In a statement posted on X on Friday, Leike said that the Superalignment team had been fighting for resources to get research done. “Building smarter-than-human machines is an inherently dangerous endeavor,” Leike wrote. “OpenAI is shouldering an enormous responsibility on behalf of all of humanity. But over the past years, safety culture and processes have taken a backseat to shiny products.”

Leike’s departure earlier this week came hours after OpenAI chief scientist Sutskever announced that he was leaving the company. Sutskever was not only one of the leads on the Superalignment team, but helped co-found the company as well.

CNBC reported OpenAI has disbanded its team focused on the long-term risk of artificial intelligence just one year after the company announced the group, a person familiar with the situation confirmed to CNBC on Friday.

OpenAI did not provide a comment and instead directed CNBC to co-founder and CEO Sam Altman’s recent post on X, where he shared that he was sad to see Leike leave and that the company had more work to do.

In my opinion, it sounds like some of those who worked for OpenAI are dissatisfied with how things are going. This appears to be why some have left. One cannot run a company simply by focusing on “shiny products.”


OpenAI Strikes Reddit Deal To Train Its AI On Your Posts



OpenAI has signed a deal for access to real-time content from Reddit’s data API, which means it can surface discussions from the site within ChatGPT and other new products. It’s an agreement similar to the one Reddit signed with Google earlier this year that was reportedly worth $60 million, The Verge reported. 

The deal will also “enable Reddit to bring new AI-powered features to Redditors and mods” and use OpenAI’s large language models to build applications. OpenAI has also signed up to become an advertising partner on Reddit.

Redditors have previously been vocal about how Reddit’s executives manage the platform, and it remains to be seen how they’ll react to this announcement. More than 7,000 subreddits went dark in June 2023 after users protested Reddit’s changes to its API pricing. Recently, following news of a partnership between OpenAI and the programming message board Stack Overflow, people were suspended after trying to delete their posts.

No financial terms were revealed in the blog post announcing the arrangement, and neither company mentioned training data, either.

OpenAI posted: OpenAI and Reddit Partnership

Keeping the internet open is crucial, and part of being open means Reddit content needs to be accessible to those fostering human learning and researching ways to build community, belonging, and empowerment online. Reddit is a uniquely large and vibrant community that has long been an important space for conversation on the internet. Additionally, using LLMs, ML, and AI allows Reddit to improve the user experience for everyone.

In line with this, Reddit and OpenAI today announced a partnership to benefit both the Reddit and OpenAI user communities in a number of ways:

  • OpenAI will bring enhanced Reddit content to ChatGPT and new products, helping users discover and engage with Reddit communities. To do so, OpenAI will access Reddit’s Data API, which provides real-time, structured, and unique content from Reddit. This will enable OpenAI’s tools to better understand and showcase Reddit content, especially on recent topics.
  • This partnership will also enable Reddit to bring new AI-powered features to redditors and mods. Reddit will be building on OpenAI’s platform of AI models to bring its powerful vision to life.
  • Lastly, OpenAI will become a Reddit advertising partner.
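The partnership’s actual integration details haven’t been published, but the “real-time, structured” content the announcement refers to resembles the listing format Reddit’s public Data API already returns. As a rough, hypothetical sketch (the sample payload below is invented, and field names follow Reddit’s public JSON listing format), here is how a consumer might pull the post fields a chatbot could surface:

```python
import json

# Illustrative sample of the structure Reddit's Data API returns for a
# listing request (e.g. GET /r/<subreddit>/hot). The payload is made up;
# the field names mirror Reddit's public JSON format.
sample_listing = json.loads("""
{
  "kind": "Listing",
  "data": {
    "children": [
      {"kind": "t3", "data": {"title": "Example post", "subreddit": "example",
                              "score": 42, "selftext": "Post body text."}}
    ]
  }
}
""")

def extract_posts(listing: dict) -> list[dict]:
    """Pull the fields a downstream consumer (such as a chatbot) might show."""
    posts = []
    for child in listing["data"]["children"]:
        if child["kind"] == "t3":  # "t3" is Reddit's type code for a post
            d = child["data"]
            posts.append({
                "title": d["title"],
                "subreddit": d["subreddit"],
                "score": d["score"],
                "text": d["selftext"],
            })
    return posts

for post in extract_posts(sample_listing):
    print(post["title"], "-", post["subreddit"])
```

Again, this is only a guess at the shape of the data involved; whatever OpenAI actually receives through the partnership may differ from the public API.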

Engadget reported OpenAI and Reddit announced a partnership on Thursday that will allow OpenAI to surface Reddit discussions to bring AI-powered features to its users. 

The partnership will “enable OpenAI’s tools to better understand and showcase Reddit content, especially on recent topics,” both companies said in a joint statement.

The deal is similar to the one that Reddit signed with Google in February, which is reportedly worth $60 million. A Reddit spokesperson declined to disclose the terms of the OpenAI deal to Engadget, and OpenAI did not respond to a request for comment.

OpenAI has been increasingly striking partnerships with publishers to get data to continue training its AI models. In the last few weeks alone, the company signed deals with the Financial Times and Dotdash Meredith. Last year, it also partnered with German publisher Axel Springer to train its models on news from Politico and Business Insider in the US and Bild and Die Welt in Germany.

In my opinion, it seems like OpenAI wants to partner with every site that allows people to post things. That’s likely good for OpenAI, but I have concerns about the content created by users who may or may not want to have their posts used by OpenAI.