Category Archives: AI

OpenAI Launches ChatGPT Search Competing With Google Search



OpenAI on Thursday launched a search feature within ChatGPT, its viral chatbot, that positions the high-powered artificial intelligence startup to better compete with search engines like Google, Microsoft’s Bing and Perplexity, CNBC reported.

ChatGPT search offers up-to-the-minute sports scores, stock quotes, news, weather and more, powered by real-time web search and partnerships with news and data providers, according to the company. It began beta-testing the search engine, called SearchGPT, in July.

The release could have implications for Google as the dominant search engine. Since the launch of ChatGPT in November 2022, Alphabet investors have been concerned that OpenAI could take market share from Google in search by giving consumers new ways to seek information online.

Shares of Alphabet were down about 1% following the news.

The move also positions OpenAI as more of a competitor to Microsoft and its businesses. Microsoft has invested close to $14 billion in OpenAI, yet OpenAI’s products directly compete with Microsoft’s AI and search tools, such as Copilot and Bing.

The Verge reported ChatGPT is officially an AI-powered web search engine. The company is enabling real-time information in conversations for paid subscribers today (along with SearchGPT waitlist users), with free, enterprise, and education users gaining access in the coming weeks.

Rather than launching a separate product, web search will be integrated into ChatGPT’s existing interface. The feature determines when to tap into web results based on queries, though users can also manually trigger web searches. ChatGPT’s web search integration finally closes a key competitive gap with rivals like Microsoft Copilot and Google Gemini, which have long offered real-time internet access in their AI conversations.

The new search functionality will be available across all ChatGPT platforms: iOS, Android, and desktop apps for macOS and Windows. The search functionality was built with “a mix of search technologies,” including Microsoft’s Bing. The company wrote in a blog on Thursday that the underlying search model is a fine-tuned version of GPT-4o.

ArsTechnica reported: One of the biggest bummers about the modern internet has been the decline of Google Search. Once an essential part of using the web, it’s now a shadow of its former self, full of SEO-fueled junk and AI-generated spam.

On Thursday, OpenAI announced a new feature of ChatGPT that could potentially replace Google Search for some people: an upgraded web search capability for its AI assistant that provides answers with source attribution during conversations. The feature, officially called “ChatGPT with Search,” makes web search automatic based on user questions, with an option to manually trigger searches through a new web search icon.

In my opinion, it sounds like OpenAI is enabling users to more easily find the information they are looking for without leaving ChatGPT, which makes the feature a direct competitor to Google Search.


Meta Builds AI Search Engine To Cut Google, Bing Reliance



Meta Platforms is working on an artificial intelligence-based search engine as it looks to reduce dependence on Alphabet’s Google and Microsoft’s Bing, The Information reported on Monday, according to Reuters.

The AI search engine segment is heating up with ChatGPT-maker OpenAI, Google and Microsoft all vying for dominance in the rapidly evolving market.

Meta’s web crawler will provide conversational answers to users about current events on Meta AI, the company’s chatbot on WhatsApp, Instagram, and Facebook, according to the report, which cited a person involved with the strategy.

The Facebook owner currently relies on the Google and Bing search engines to give users answers on news, stocks and sports.

Meta did not immediately respond to a Reuters request for comment.

Google is aggressively integrating its latest and most powerful AI model, Gemini, into core products like Search, aiming to deliver more conversational and intuitive search experiences.

OpenAI relies on its largest investor, Microsoft, for web access to answer topical queries, using its Bing search engine.

Engadget reported: Stung from the hit it took from an Apple privacy feature three years ago, Meta is reportedly looking to decrease its dependence on Google and Microsoft. The Information said on Monday that Meta is developing a search engine for its chatbot. The company also partnered with Reuters to help its AI answer news-related questions.

Meta has reportedly been working on indexing the web for at least eight months. The company’s goal is said to be to integrate the indexes into Meta AI, giving the chatbot an alternative to Google Search and Microsoft Bing.

Meta publicly disclosed its web crawler tech this summer, only saying it was for “training AI models or improving products” without stating outright that it was building a search backend. Senior engineering manager Xueyuan Su is reportedly leading the search engine project.

Tom’s Guide reported: In an effort to stay competitive in AI development, Meta is reportedly creating its own search engine. The move, reported by The Information, is intended to reduce dependence on Google and Microsoft Bing, which currently feed Meta AI with news, sports and stocks. According to a Meta insider, this initiative could serve as a backup in case the partnerships with the tech giants shift.

Over the last year, a team led by senior engineering manager Xueyuan Su has been quietly advancing Meta’s web-crawling and indexing technologies, gathering web data into searchable indexes. The ultimate aim is for Meta AI to deliver live, conversational answers, enabling the platform to address user questions without reliance on any outside search platform.

In my opinion, it is good that Meta is trying to shed its reliance on Google and Microsoft Bing for search. That said, I am hoping that Meta does not choose to scrape data from its users.

 


OpenAI Plans To Release Its Next Big Model By December



OpenAI plans to launch Orion, its next frontier model, by December, The Verge reported.

Unlike the release of OpenAI’s last two models, GPT-4o and o1, Orion won’t initially be released widely through ChatGPT. Instead, OpenAI is planning to grant access first to companies it works closely with in order for them to build their own products and features, according to a source familiar with the plan.

Another source tells The Verge that engineers inside Microsoft — OpenAI’s main partner for deploying AI models — are preparing to host Orion on Azure as early as November. While Orion is seen inside OpenAI as the successor to GPT-4, it’s unclear if the company will call it GPT-5 externally. As always, the release plan is subject to change and could slip. Microsoft declined to comment for this story, and OpenAI initially declined.

After CEO Sam Altman called this story “fake news,” OpenAI spokesperson Niko Felix told The Verge that the company doesn’t “have plans to release a model code-named Orion this year” but that “we do plan to release a lot of other great technology.”

TechCrunch reported OpenAI says that it doesn’t intend to release an AI model code-named Orion this year, countering recent reporting on the company’s product roadmap.

“We don’t have plans to release a model code-named Orion this year,” a spokesperson told TechCrunch via email. “We do plan to release a lot of other great technology.”

OpenAI previously told TechCrunch that The Verge’s report wasn’t accurate, but declined to elaborate further.

Orion, a step up from OpenAI’s current flagship, GPT-4o, is reportedly trained in part on synthetic training data from o1, the company’s “reasoning” model. OpenAI plans for the foreseeable future to continue developing new “GPT” models alongside reasoning models like o1, which it sees as addressing fundamentally different use cases.

OpenAI’s statement leaves substantial wiggle room. It could be that the company’s next major model isn’t, in fact, Orion. Or perhaps OpenAI will release a new model by December, but one less capable than Orion.

TechRadar reported OpenAI CEO Sam Altman has slammed a media report about the imminent release of Orion, which is effectively ChatGPT-5, with a terse tweet on X.com. He described the report as “fake news out of control,” squashing rumors of a new version of ChatGPT before December.

The report by The Verge quotes ‘sources’ as claiming Microsoft is planning to host Orion, the successor to ChatGPT-4, on its servers in November, pointing towards a release for the new LLM in time for ChatGPT’s 2nd birthday next month.

Many users took this to mean a new product launch was imminent around the time of ChatGPT’s 2nd birthday (ChatGPT was released on 30 November 2022). Altman’s latest X.com post, however, would seem to remove any ambiguity around the event.

In my opinion, it appears that the various news sites have gotten their news regarding Orion from unnamed sources. As such, we will have to wait and see what actually happens.


The AI Bill Driving A Wedge Through Silicon Valley



California’s push to regulate artificial intelligence has riven Silicon Valley, as opponents warn the legal framework could undermine competition and the US’s position as the world leader in technology, Financial Times reported.

Having waged a fierce battle to amend or water down the bill as it passed through California’s legislature, executives at companies including OpenAI and Meta are waiting anxiously to see if Gavin Newsom, the state’s Democratic governor, will sign it into law. He has until September 30 to decide.

California is the heart of the burgeoning AI industry, and with no federal law to regulate the technology across the US — let alone a uniform global standard — the ramifications would extend far beyond the state.

Why does California want to regulate AI?

The rapid development of AI tools that can generate humanlike responses to questions has magnified perceived risks around the technology, ranging from legal disputes such as copyright infringement to misinformation and a proliferation of deepfakes. Some even think it could pose a threat to humanity.

The Verge reported artificial intelligence is moving quickly. It’s now able to mimic humans convincingly enough to fuel massive phone scams or spin up nonconsensual deepfake imagery of celebrities to be used in harassment campaigns. The urgency to regulate this technology has never been more critical — so, that’s what California, home to many of AI’s biggest players, is trying to do with a bill known as SB 1047.

SB 1047, which passed the California State Assembly and Senate in late August, is now on the desk of California Governor Gavin Newsom — who will determine the fate of the bill. While the EU and some other governments have been hammering out AI regulation for years now, SB 1047 would be the strictest framework in the US so far.

CCN reported a California bill that intends to promote the “safe and secure” development of frontier AI models has exposed a major rift in Silicon Valley.

Senior tech executives, prominent investors, and politicians on both sides of the aisle are among the bill’s critics. Meanwhile, supporters of SB-1047 include Elon Musk, Vitalik Buterin, and, most recently, an alliance of current and former employees of AI companies, including OpenAI, Google, Anthropic, Meta, and xAI.

Having made its way through the California legislature, SB-1047 now needs only Governor Gavin Newsom’s signature to become law. As the deadline for him to decide on his position approaches, Newsom has come under pressure from both sides.

In my opinion, there are going to be people who are all for Governor Newsom signing SB-1047, and people who don’t want the Governor to do that. Californians will have to wait and see what the outcome will be.


AMD Buys AI Equipment Maker For Nearly $5 Billion



Advanced Micro Devices agreed to pay nearly $5 billion for ZT Systems, a designer of data-center equipment for cloud computing and artificial intelligence, bolstering the chip maker’s challenge to Nvidia’s dominance in AI computation, The Wall Street Journal reported.

The deal, among AMD’s largest, is part of a push to offer a broader menu of chips, software and system designs to big data-center customers such as Microsoft and Facebook owner Meta Platforms, promising better performance through tight linkages between those products.

Secaucus, N.J.-based ZT Systems, which isn’t publicly traded, was founded in 1994. It designs and makes servers, server racks and other infrastructure that house and connect chips in the giant data centers that power artificial-intelligence systems such as ChatGPT.

AMD posted a press release titled: “AMD to Significantly Expand Data Center AI Systems Capabilities with Acquisition of Hyperscale Solutions Provider ZT Systems”

Strategic acquisition to provide AMD with industry-leading systems expertise to accelerate deployment of optimized rack-scale solutions addressing the $400 billion data center AI accelerator opportunity in 2027.

ZT Systems, a leading provider of AI and general-purpose compute infrastructure for the world’s largest hyperscale providers, brings extensive AI systems expertise that complements AMD silicon and software capabilities.

Addition of world-class design and customer enablement teams to accelerate deployment of AMD AI rack-scale systems with cloud and enterprise customers.

AMD to seek strategic partner to acquire ZT Systems’ industry-leading manufacturing business.

Transaction expected to be accretive on a non-GAAP basis by the end of 2025…

Reuters reported AMD said on Monday it plans to acquire server maker ZT Systems for $4.9 billion as the company seeks to expand its portfolio of artificial intelligence chips and hardware and battle Nvidia.

AMD plans to pay for 75% of the ZT Systems acquisition with cash and the remainder in stock. The company had $5.34 billion in cash and short-term investments as of the second quarter.

The computing requirements for AI have dictated that tech companies string together thousands of chips in clusters to achieve the necessary amount of data crunching horsepower. Stringing together the vast numbers of chips has meant the makeup of whole server systems has become increasingly important, which is why AMD is acquiring ZT Systems.

The addition of ZT Systems engineers will allow AMD to more quickly test and roll out its latest AI graphics processing units (GPUs) at the scale cloud computing giants such as Microsoft require, AMD CEO Lisa Su said in an interview with Reuters.

In my opinion, it looks like AMD is ready to see if it can overtake Nvidia. It will be interesting to see if AMD can do that.


Perils of AI-Powered Search – How Businesses Will Be Screwed



The rapid evolution of AI in search engines like Google is poised to reshape how information is accessed—but at what cost to businesses?

The search landscape is changing at an unprecedented pace, driven by artificial intelligence (AI). Major players like Google, Anthropic, and Microsoft’s Bing are integrating AI-powered summaries at the top of search results, fundamentally transforming how users interact with information online. But while this innovation promises enhanced user experiences, it also presents imminent challenges for businesses and creators struggling to maintain visibility in an increasingly crowded digital space. As AI-powered answers dominate search results, companies face an existential tension between the need for visibility and the risk of becoming obsolete.

AI in Search Results

The deployment of AI in search engines marks a pivotal shift in how information is presented to users. Features like AI Overviews are now capturing the prime real estate at the top of Google’s search results, providing quick snippets of information that eliminate the need for users to click through to actual websites. Google’s near-monopoly in the search engine market, recently ruled illegal, further amplifies its control over this evolving landscape. With more than 90% of global search traffic going through Google, the company’s decisions have far-reaching implications for businesses of all sizes.

The introduction of generative AI models is fundamentally altering search engine operations. These models generate answers based on vast datasets, bypassing traditional links to external websites. This shift reduces the visibility of individual businesses and drastically changes user interaction with search results.

Existential Threat

Businesses now face a stark choice: allow AI algorithms to mine their content or risk fading into obscurity in search results. This predicament poses an existential threat, as being excluded from AI-generated summaries could mean losing significant portions of web traffic. Joe Ragazzo of Talking Points Memo highlights this crisis, emphasizing content creators’ difficulty in staying relevant. His perspective underscores the broader industry concern about the diminishing returns from traditional SEO strategies.

AI-powered answers are reducing click-through rates, directly impacting businesses that rely on high visibility for traffic and revenue. Fewer clicks mean fewer opportunities for engagement, sales, and advertising revenue, posing a severe threat to the financial stability of many online enterprises.

Consequences for Content Creators

Blocking the Googlebot to protect content from being harvested by AI comes with its own risks. Doing so can result in a significant loss of traffic, leading to decreased visibility and engagement. Google’s $60 million deal with Reddit illustrates the high stakes involved in content licensing. This arrangement sets a precedent that smaller AI and search startups may find difficult to replicate, exacerbating the challenges faced by content creators.
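To make that trade-off concrete, here is a minimal sketch of the mechanism publishers are weighing: a hypothetical robots.txt policy that disallows the publicly documented AI and training crawlers (OpenAI’s GPTBot, Google’s Google-Extended token, and Common Crawl’s CCBot) while leaving the regular Googlebot free to index the site, checked with Python’s standard robotparser. The policy text, example.com, and the article URL are illustrative assumptions, not Geek News Central’s actual configuration, and a robots.txt entry only governs crawlers that choose to honor it.

```python
# Minimal sketch: a hypothetical robots.txt that blocks known AI/training
# crawlers (GPTBot, Google-Extended, CCBot) but leaves regular Googlebot
# allowed, verified with Python's standard urllib.robotparser.
# example.com and the article URL are placeholders for illustration only.
from urllib import robotparser

POLICY = """\
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: *
Disallow:
"""

parser = robotparser.RobotFileParser()
parser.parse(POLICY.splitlines())

for bot in ("GPTBot", "Google-Extended", "CCBot", "Googlebot"):
    allowed = parser.can_fetch(bot, "https://example.com/some-article")
    print(f"{bot:16} allowed: {allowed}")

# Expected output: the three AI/training crawlers are disallowed, while
# Googlebot is still allowed, so ordinary search indexing is unaffected.
```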

Publishers like us here at Geek News Central understand the risks of opting out of AI-driven search visibility. We are damned if we do and damned if we don’t. The dilemma is apparent: remain accessible to AI and risk content appropriation, or block AI and lose crucial web traffic.

The Future of Search and Business Survival

AI is undeniably changing the nature of search. Businesses must adapt to this evolving landscape or risk obsolescence. The future of online visibility hinges on understanding and leveraging these changes. To survive and thrive in this new era, businesses must diversify their traffic sources. Building direct relationships with their audience through podcasting, email marketing, social media engagement, and other channels becomes increasingly crucial.

But gaining new customers, listeners, and readers in this environment is going to be very, very difficult. The broader consequence of AI-powered search could be the collapse of an entire business ecosystem that relies too heavily on Google and others for visibility. There are no clear answers. My solution for Geek News Central at this point is to go wide and continue to create original content that the LLMs will hopefully consider a valuable source. But it remains unclear whether we, and my external business, will survive.

Google will always say buy your traffic with SEM, but that is just paying the Devil at this point. In an era where AI redefines search and business visibility rules, we can only guess at this point how bad it’s going to be, but it’s stuff like this that keeps me up at night.


OpenAI Launches GPT-4o Mini



OpenAI on Thursday announced it will launch a new AI model, “GPT-4o mini,” the artificial intelligence startup’s latest effort to expand use of its popular chatbot, CNBC reported.

The company called the new release “the most capable and cost efficient small model available today,” and it plans to integrate image, video, and audio into it later.

The mini AI model is an offshoot of GPT-4o, OpenAI’s fastest and most powerful model, which it launched in May during a live-streamed event with executives. The “o” in GPT-4o stands for omni, and GPT-4o has improved audio, video and text capabilities, with the ability to handle 50 different languages at improved speed and quality, according to the company.

OpenAI posted: GPT-4o mini: advancing cost-efficient intelligence

OpenAI is committed to making intelligence as broadly accessible as possible. Today, we’re announcing GPT-4o mini, our most cost-efficient small model. We expect GPT-4o mini will significantly expand the range of applications built with AI by making intelligence much more affordable. 

GPT-4o mini scores 82% on MMLU and currently outperforms GPT-4 on chat preferences on the LMSYS leaderboard. It is priced at 15 cents per million input tokens and 60 cents per million output tokens, an order of magnitude more affordable than previous frontier models and more than 60% cheaper than GPT-3.5 Turbo.

GPT-4o mini enables a broad range of tasks with its low cost and latency, such as applications that chain or parallelize multiple model calls (e.g., calling multiple APIs), pass a large volume of context to the model (e.g., full code base or conversation history), or interact with customers through fast, real-time text responses (e.g., customer support chatbots).

Today, GPT-4o mini supports text and vision in the API, with support for text, image, video and audio inputs and outputs coming in the future. The model has a context window of 128K tokens, supports up to 16K output tokens per request, and has knowledge up to October 2023. Thanks to the improved tokenizer shared with GPT-4o, handling non-English text is now even more cost effective…
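To put those prices in perspective, here is a quick back-of-the-envelope sketch in Python using only the rates OpenAI quotes above (15 cents per million input tokens, 60 cents per million output tokens). The request sizes are made-up illustrative numbers for something like a customer-support exchange, not measurements of any real workload.

```python
# Rough cost sketch for GPT-4o mini at the prices quoted above:
# $0.15 per 1M input tokens, $0.60 per 1M output tokens.
# The token counts below are hypothetical examples, not measurements.
INPUT_PRICE_PER_M = 0.15   # USD per million input tokens
OUTPUT_PRICE_PER_M = 0.60  # USD per million output tokens

def request_cost_usd(input_tokens: int, output_tokens: int) -> float:
    """Estimated cost of a single API request at the quoted rates."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# Example: a support-bot turn with ~2,000 prompt tokens in, ~500 tokens out.
per_request = request_cost_usd(2_000, 500)
print(f"per request:          ${per_request:.4f}")               # $0.0006
print(f"per 1M such requests: ${per_request * 1_000_000:,.2f}")  # $600.00
```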

TechCrunch reported OpenAI introduced GPT-4o mini on Thursday, its latest small AI model. The company says GPT-4o mini, which is cheaper and faster than OpenAI’s current cutting-edge AI models, is being released for developers, as well as through the ChatGPT web and mobile app for consumers starting today. Enterprise users will gain access next week.

The company said GPT-4o mini outperforms industry-leading small AI models on reasoning tasks involving text and vision. As small AI models improve, they are becoming more popular with developers due to their speed and cost efficiencies compared to larger models such as GPT-4 Omni or Claude 3.5 Sonnet. They’re a useful option for high-volume, simple tasks that developers might repeatedly call on an AI model to perform.

In my opinion, it might be better to use a smaller version of GPT-4o than to keep building ever-larger models that consume resources humans also depend on. More specifically, I have concerns about how much water gets used by AI systems.