
OnePlus AI Slides Onto Smartphones



With the Pad 3 arriving next week, OnePlus have announced OnePlus AI, which will be coming first to the….OnePlus 13s, the upcoming smaller handset that’s destined for India only. The rest of the world will have to wait until later in the year for the AI features to be pushed out to the rest of the 13-series.

Branded “Intelligently Yours”, the heart of the AI is Plus Mind, “a new feature designed to quickly save, catalog and recall key information found on the screen” before AI tools analyse the content for data. The way it works is this: when you find something that you want to keep, you can ask AI Plus Mind to save the on-screen information to a dedicated Mind Space for future reference and analysis. At face value, it doesn’t look terribly dissimilar from other scrapbook apps, but bear in mind that it’s a stored copy rather than a link to another website.

Once the information has been captured, AI Plus Mind will review and analyse the content to understand the data. The press release mentions extracting schedule details and adding them to the calendar but hopefully it will be able to handle addresses and other semi-structured data. AI Search is integrated as well so you’ll be able to make natural language queries about the information stored in the Mind Space.
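
To give a rough idea of the kind of extraction OnePlus is describing, here’s a minimal Python sketch that pulls a date and time out of a captured block of text and turns it into a calendar-style entry. The regex, the CalendarEvent structure and the sample snippet are all my own illustration; OnePlus hasn’t published how Plus Mind actually parses captures.

```python
import re
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CalendarEvent:
    title: str
    start: datetime

def extract_event(captured_text: str) -> CalendarEvent | None:
    """Pull a simple 'DD/MM/YYYY at HH:MM' schedule detail out of captured screen text."""
    match = re.search(r"(\d{2}/\d{2}/\d{4}) at (\d{2}:\d{2})", captured_text)
    if not match:
        return None
    start = datetime.strptime(f"{match.group(1)} {match.group(2)}", "%d/%m/%Y %H:%M")
    # Use the first line of the capture as a rough event title.
    title = captured_text.splitlines()[0].strip()
    return CalendarEvent(title=title, start=start)

# Example: a snippet saved to a hypothetical Mind Space.
snippet = "Pad 3 launch event\nJoin us on 05/06/2025 at 14:30 for the reveal."
print(extract_event(snippet))
```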

In addition to the core of AI Plus Mind, OnePlus is developing a suite of AI apps to assist with transcription and translation, plus photo editing and composition tools. AI Best Face will be able to create the perfect group photo, eliminating closed eyes and “sub-optimal expressions”. I think that’s people not smiling, between you and me.

That’s broadly the good news. The bad news is that the ubiquitous OnePlus Alert Slider is being replaced by the OnePlus Plus Key, a customisable button which can change sound profiles, take a picture, start a recording or….launch AI Plus Mind. I’ll reserve judgement until I see it in action, but the whole benefit of the Alert Slider was that it did one thing and did it well. You could discreetly silence your phone without the whole rigmarole of turning the phone on, unlocking it, finding the sound profiles and choosing the quiet one. Couldn’t we have both?

Further, OnePlus is extending Gemini’s influence into the standard OnePlus apps such as Clock and Notes, improving the overall AI experience.

OnePlus is keen to highlight that as much processing as possible is done on the handset itself, but when additional resources are needed, the AI can engage the company’s Private Computing Cloud which provides a secure, encrypted environment for data processing.

Personally, I’m still looking for uses for AI beyond removing strangers from my photos, so this sounds promising.


Generative AI Needs to Respect Artists




Both the US and UK governments are considering how copyrighted material from the creative industries can be used as training data for AIs, particularly Generative AI. The tech companies want to do this with few restrictions and without cost, citing “fair use”. As it stands at the moment, the UK Government looks to be leaning towards the tech companies, but judges in the US are critical of their arguments. Even the EU, which normally stands up to the excesses of the tech firms, seems to be struggling.

As you can imagine, many authors and musicians aren’t happy about this and, to be honest, I’m with the artists on this one. I do accept that art often builds upon the work of previous artists, but most artists are generous with credit and rarely does anyone seek to pass off someone else’s work as their own. In the music industry, when an artist does re-use an element of another artist’s work, royalties are paid to the original creator, and this is perhaps a model to consider.

The AI companies are attempting to avoid paying the creators, but I imagine that they’ll inevitably have to do deals with music labels and publishers. For an independent artist, the difficulty is going to be proving that your work has been subsumed into the AI, and that’s why I agree with the efforts to force AI companies to disclose their training datasets. It’s far too opaque right now to know what’s inside the AI. Artists would then be able to claim a reward for any of their creations that had been used in the dataset (or ask for their removal from it).

And let’s not pretend that the AI firms are doing this for the benefit of society. They’re doing it to make money (Google’s AI Ultra is $250 per month) and these are already some of the most valuable companies on the planet. It’s worth remembering too that technology companies have a pretty poor track record when it comes to privacy and other rights. They don’t have anyone’s interests at heart but their own. No-one needs “music in the style of The Rolling Stones”, “stories in the style of John Grisham” or “pictures in the style of Basquiat”. It’s not like the AI is reviewing MRIs for cancerous cells or working for the greater good.

Ultimately, the companies behind generative AI need to be respectful of copyright holders. They need to pay creators appropriately and honour those who don’t want their art included. Please do the right thing for once.


Gemini 2.0: Google’s Newest Flagship AI Can Generate Text, Images, And Speech



Google’s next major AI model has arrived to combat a slew of new offerings from OpenAI, TechCrunch reported.

On Wednesday, Google announced Gemini 2.0 Flash, which the company says can natively generate images and audio in addition to text. 2.0 Flash can also use third-party apps and services, allowing it to tap into Google Search, execute code, and more.

An experimental release of 2.0 Flash will be available through the Gemini API and Google’s AI developer platforms AI Studio and Vertex AI, starting today. However, the audio and image generation capabilities are launching only for “early access partners” ahead of a wide rollout in January.
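
For developers who want to try the experimental model, access goes through the Gemini API. Below is a minimal Python sketch using Google’s google-generativeai SDK, text output only, since image and audio generation is limited to early-access partners at launch. The model identifier “gemini-2.0-flash-exp” is my assumption about the experimental release name, so check AI Studio for the exact string.

```python
# pip install google-generativeai
import os
import google.generativeai as genai

# API keys come from Google AI Studio.
genai.configure(api_key=os.environ["GEMINI_API_KEY"])

# "gemini-2.0-flash-exp" is assumed here to be the experimental 2.0 Flash identifier.
model = genai.GenerativeModel("gemini-2.0-flash-exp")

response = model.generate_content("Summarise what Gemini 2.0 Flash adds over 1.5 Flash.")
print(response.text)
```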

In the coming months, Google says that it’ll bring 2.0 Flash in a range of flavors to products like Android Studio, Chrome, DevTools, Firebase, Gemini Code Assist, and others.

The production version of 2.0 Flash will land in January. But in the meantime, Google is releasing an API, the Multimodal Live API, to help developers build apps with real-time audio and video streaming functionality.

Google posted “Gemini 2.0: Our latest, most capable AI model yet” at https://blog.google/products/gemini/google-gemini-ai-collection-2024/.

Today, we’re announcing Gemini 2.0 — our most capable AI model yet, designed for the agent era. Gemini 2.0 has new capabilities, like multimodal output with native image generation and audio output, and native tools including Google Search and Maps.

We’re releasing an experimental version of Gemini 2.0 Flash, our workhorse model with low latency and enhanced performance. Developers can start building with this model in the Gemini API via Google AI Studio and Vertex AI. And Gemini and Gemini Advanced users globally can try out a chat optimized version of Gemini 2.0 by selecting it in the model dropdown on desktop.

We’re also using Gemini 2.0 in new research prototypes including Project Astra, which explores the future capabilities of a universal AI assistant; Project Mariner, an early prototype capable of taking actions in Chrome as an experimental extension; and Jules, an experimental AI-powered code agent. 

We continue to prioritize safety and responsibility with these projects, which is why we’re taking an exploratory and gradual approach to development, including working with trusted testers.

Engadget reported: The battle for AI supremacy is heating up. Almost exactly a week after OpenAI made its o1 model available to the public, Google today is offering a preview of its next-generation Gemini 2.0 model. 

In a blog post attributed to Google CEO Sundar Pichai, the company says 2.0 is its most capable model yet, with the algorithm offering native support for image and audio output.

“It will enable us to build new AI agents that bring us closer to our vision of a universal assistant,” says Pichai. 

Google is doing something different with Gemini 2.0. Rather than starting today’s preview by first offering its most advanced version of the model, Gemini 2.0 Pro, the search giant is instead kicking things off with 2.0 Flash. 

As of today, the more efficient (and affordable) model is available to all Gemini users. If you want to try it yourself, you can enable Gemini 2.0 from the dropdown menu in the Gemini web client, with availability within the mobile app coming soon.

In my opinion, it will be interesting to see what Google wants to do with its Gemini 2.0. It seems like several other companies are working hard to create the best AI bot.

 


Bluesky Says It Has “No Intention” Of Taking User Content To Train AI Tools



Social network Bluesky said in a post on Friday that it has “no intention” of taking user content to train generative AI tools. It made the statement the same day that X’s new terms of service, which spell out how it can analyze user text and other information to train its generative AI tools, went into effect, The Verge reported.

“A number of artists and creators have made their home on Bluesky, and we hear their concerns with other platforms training on their data,” Bluesky says in a post. “We do not use any of your content to train generative AI, and have no intention of doing so.”

Other companies could still potentially scrape your Bluesky posts for training. Bluesky’s robots.txt doesn’t exclude crawlers from Google, OpenAI, or others, meaning those companies may crawl Bluesky data. 

“Bluesky is an open and public social network, much like websites on the internet itself,” spokesperson Emily Liu tells The Verge. “Just as robots.txt files don’t always prevent outside companies from crawling those sites, the same applies here. That said, we’d like to do our part to ensure that outside orgs respect user consent and are actively discussing within the team on how to achieve this.”
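
If you’re curious what a particular crawler is currently allowed to do, Python’s standard library can read a robots.txt directly. Here’s a quick sketch, assuming Bluesky’s file lives at the usual path and using a few publicly documented AI crawler user-agent names; the result simply reflects whatever the file says on the day you run it.

```python
from urllib.robotparser import RobotFileParser

# Assumed location of Bluesky's robots.txt; adjust if the site structure differs.
robots_url = "https://bsky.app/robots.txt"

parser = RobotFileParser()
parser.set_url(robots_url)
parser.read()  # fetches and parses the live file

# Well-known AI-related crawler user agents (names assumed current).
for agent in ["GPTBot", "Google-Extended", "CCBot"]:
    allowed = parser.can_fetch(agent, "https://bsky.app/")
    print(f"{agent}: {'allowed' if allowed else 'disallowed'}")
```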

TechCrunch reported that Bluesky, a social network that’s experiencing a surge in users this week as users abandon X, says it has “no intention” of using user content to train generative AI tools. The social network made the announcement on the same day that X (formerly Twitter) is implementing its new terms of service that allow the platform to use public posts to train AI.

“A number of artists and creators have made their home on Bluesky, and we hear their concerns with other platforms training on their data,” Bluesky said in a post on its app. “We do not use any of your content to train generative AI, and have no intention of doing so.”

The company went on to note that it uses AI internally to help with moderation and that it also uses the technology in its “Discover” algorithmic feed. However, Bluesky says “none of these are Gen AI systems trained on user content.”

Bluesky has seen an increase in users following the U.S. presidential election as X takes on more of a right-wing slant, especially after Musk used the platform to campaign for President-elect Donald Trump.

Engadget reported that Bluesky, which has surged in the days following the US election, said on Friday that it won’t train on its users’ posts for generative AI.

The declaration stands in stark contrast to the AI training policies of X (Twitter) and Meta’s Threads. Probably not coincidentally, Bluesky’s announcement came the same day X’s new terms of service, allowing third-party partners to train on user posts, went into effect.

Although Bluesky is still the underdog in its race against X and Threads, the platform has picked up steam after the U.S. election. It passed the 15 million user threshold on Wednesday after adding more than a million users in the past week.

In my opinion, Bluesky is doing the right thing by encouraging users to ditch X  (formerly Twitter) and make an account on Bluesky.


OpenAI Launches ChatGPT Search Competing With Google Search



OpenAI on Thursday launched a search feature within ChatGPT, its viral chatbot, that positions the high-powered artificial intelligence startup to better compete with search engines like Google, Microsoft’s Bing and Perplexity, CNBC reported.

ChatGPT search offers up-to-the-minute sports scores, stock quotes, news, weather and more, powered by real-time web search and partnerships with news and data providers, according to the company. It began beta-testing the search engine, called SearchGPT, in July.

The release could have implications for Google as the dominant search engine. Since the launch of ChatGPT in November 2022, Alphabet investors have been concerned that OpenAI could take market share from Google in search by giving consumers new ways to seek information online.

Shares of Alphabet were down about 1% following the news.

The move also positions OpenAI as more of a competitor to Microsoft and its businesses. Microsoft has invested close to $14 billion in OpenAI, yet OpenAI’s products directly compete with Microsoft’s AI and search tools, such as Copilot and Bing.

The Verge reported ChatGPT is officially an AI-powered web search engine. The company is enabling real-time information in conversations for paid subscribers today (along with SearchGPT waitlist users), with free, enterprise, and education users gaining access in the coming weeks.

Rather than launching a separate product, web search will be integrated into ChatGPT’s existing interface. The feature determines when to tap into web results based on queries, though users can also manually trigger web searches. ChatGPT’s web search integration finally closes a key competitive gap with rivals like Microsoft Copilot and Google Gemini, which have long offered real-time internet access in their AI conversations.

The new search functionality will be available across all ChatGPT platforms: iOS, Android, and desktop apps for macOS and Windows. It was built with “a mix of search technologies,” including Microsoft’s Bing, and the company wrote in a blog post on Thursday that the underlying search model is a fine-tuned version of GPT-4o.
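
That “decide when to search” behaviour is essentially the standard tool-calling pattern, which you can approximate with any chat API that supports function calls. Here’s a rough Python sketch using the OpenAI SDK with a hypothetical web_search tool; it illustrates the pattern only and isn’t how ChatGPT Search is implemented under the hood.

```python
# pip install openai
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A hypothetical web_search tool; the model decides whether it needs it.
tools = [{
    "type": "function",
    "function": {
        "name": "web_search",
        "description": "Search the web for up-to-date information.",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Who won last night's Champions League match?"}],
    tools=tools,
)

message = response.choices[0].message
if message.tool_calls:
    # The model asked for a search; your code would run it and feed the results back.
    print("Model requested:", message.tool_calls[0].function.arguments)
else:
    print(message.content)
```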

Ars Technica reported: One of the biggest bummers about the modern internet has been the decline of Google Search. Once an essential part of using the web, it’s now a shadow of its former self, full of SEO-fueled junk and AI-generated spam.

On Thursday, OpenAI announced a new feature of ChatGPT that could potentially replace Google Search for some people: an upgraded web search capability for its AI assistant that provides answers with source attribution during conversations. The feature, officially called “ChatGPT with Search,” makes web search automatic based on user questions, with an option to manually trigger searches through a new web search icon.

In my opinion, it sounds like OpenAI is enabling users to more easily connect with ChatGPT in order to find the information they are looking for. It sounds like a competitor to Google Search.


The AI Bill Driving A Wedge Through Silicon Valley



California’s push to regulate artificial intelligence has riven Silicon Valley, as opponents warn the legal framework could undermine competition and the US’s position as the world leader in technology, Financial Times reported.

Having waged a fierce battle to amend or water down the bill as it passed through California’s legislature, executives at companies including OpenAI and Meta are waiting anxiously to see if Gavin Newsom, the state’s Democratic governor, will sign it into law. He has until September 30 to decide.

California is the heart of the burgeoning AI industry, and with no federal law to regulate the technology across the US — let alone a uniform global standard — the ramifications would extend far beyond the state.

Why does California want to regulate AI?

The rapid development of AI tools that can generate humanlike responses to questions has magnified perceived risks around the technology, ranging from legal disputes such as copyright infringement to misinformation and a proliferation of deepfakes. Some even think it could pose a threat to humanity.

The Verge reported that artificial intelligence is moving quickly. It’s now able to mimic humans convincingly enough to fuel massive phone scams or spin up nonconsensual deepfake imagery of celebrities to be used in harassment campaigns. The urgency to regulate this technology has never been more critical — so that’s what California, home to many of AI’s biggest players, is trying to do with a bill known as SB 1047.

SB 1047, which passed the California State Assembly and Senate in late August, is now on the desk of California Governor Gavin Newsom — who will determine the fate of the bill. While the EU and some other governments have been hammering out AI regulation for years now, SB 1047 would be the strictest framework in the US so far.

CCN reported a California bill that intends to promote the “safe and secure” development of frontier AI models has exposed a major rift in Silicon Valley.

Senior tech executives, prominent investors, and politicians on both sides of the aisle are among the bill’s critics. Meanwhile, supporters of SB-1047 include Elon Musk, Vitalik Buterin, and, most recently, an alliance of current and former employees of AI companies, including OpenAI, Google, Anthropic, Meta, and xAI.

Having made its way through the California legislature, SB-1047 now needs only Governor Gavin Newsom’s signature to become law. As the deadline for him to decide on his position approaches, Newsom has come under pressure from both sides.

In my opinion, there are going to be people who are all for Governor Newsom signing SB-1047, and people who don’t want the Governor to do that. Californians will have to wait and see what the outcome will be.


AMD Buys AI Equipment Maker For Nearly $5 Billion



Advanced Micro Devices agreed to pay nearly $5 billion for ZT Systems, a designer of data-center equipment for cloud computing and artificial intelligence, bolstering the chip maker’s attack on Nvidia’s dominance in AI computation, The Wall Street Journal reported.

The deal, among AMD’s largest, is part of a push to offer a broader menu of chips, software and system designs to big data-center customers such as Microsoft and Facebook owner Meta Platforms, promising better performance through tight linkages between those products.

Secaucus, N.J.-based ZT Systems, which isn’t publicly traded, was founded in 1994. It designs and makes servers, server racks and other infrastructure that house and connect chips in the giant data centers that power artificial-intelligence systems such as ChatGPT.

AMD posted a press release titled “AMD to Significantly Expand Data Center AI Systems Capabilities with Acquisition of Hyperscale Solutions Provider ZT Systems”. Its highlights include:

Strategic acquisition to provide AMD with industry-leading systems expertise to accelerate deployment of optimized rack-scale solutions addressing the $400 billion data center AI accelerator opportunity in 2027.

ZT Systems, a leading provider of AI and general purpose compute infrastructure for the world’s largest hyperscale providers, brings extensive AI systems expertise that complements AMD silicon and software capabilities.

Addition of world-class design and customer enablement teams to accelerate deployment of AMD AI rack-scale systems with cloud and enterprise customers.

AMD to seek strategic partner to acquire ZT Systems’ industry-leading manufacturing business.

Transaction expected to be accretive on a non-GAAP basis by the end of 2025…

Reuters reported AMD said on Monday it plans to acquire server maker ZT Systems for $4.9 billion as the company seeks to expand its portfolio of artificial intelligence chips and hardware and battle Nvidia.

AMD plans to pay for 75% of the ZT Systems acquisition with cash and the remainder in stock. The company had $5.34 billion in cash and short-term investments as of the second quarter.

The computing requirements for AI have dictated that tech companies string together thousands of chips in clusters to achieve the necessary amount of data crunching horsepower. Stringing together the vast numbers of chips has meant the makeup of whole server systems has become increasingly important, which is why AMD is acquiring ZT Systems.

The addition of ZT Systems engineers will allow AMD to more quickly test and roll out its latest AI graphics processing units (GPUs) at the scale cloud computing giants such as Microsoft require, said AMD CEO Lisa Su in an interview with Reuters.

In my opinion, it looks like AMD is ready to see if it can overtake Nvidia. It will be interesting to see if AMD can do that.