Tag Archives: google

Google Hit With Lawsuit Over Alleged Stolen Data To Train AI Tools



Google was hit with a wide-ranging lawsuit on Tuesday alleging the tech giant scraped data from millions of users without their consent and violated copyright laws in order to train and develop its artificial intelligence products, CNN reported.

The proposed class action suit against Google, its parent company Alphabet, and Google’s AI subsidiary DeepMind was filed in federal court in California on Tuesday, and was brought by Clarkson Law Firm, CNN reported. The firm previously filed a similar smaller suit against ChatGPT-maker OpenAI last month.

The complaint alleges that Google “has been secretly stealing everything ever created and shared on the internet by hundreds of millions of Americans” and using this data to train its AI products, such as its chatbot Bard. The complaint also claims Google has taken “virtually the entirety of our digital footprint,” including “creative and copyrighted works,” to build its AI products.

According to CNN, the complaint points to a recent update to Google’s privacy policy that explicitly states the company may use publicly accessible information to train its AI models and tools such as Bard.

The lawsuit comes as a new crop of AI tools has gained tremendous attention in recent months for their ability to generate work and images in response to user prompts. The large language models underpinning this new technology are able to do this by training on vast troves of online data.

The suit is seeking injunctive relief in the form of a temporary freeze on commercial access to and commercial development of Google’s generative AI tools like Bard. It is also seeking unspecified damages and payments as financial compensation to people whose data was allegedly misappropriated by Google. The firm says it has lined up eight plaintiffs, including a minor.

SlashGear reported that the news regarding Google comes only days after OpenAI was slapped with (another) lawsuit involving its models – in that case, the GPT-3.5 and GPT-4 models upon which ChatGPT is based. Authors including comedian Sarah Silverman accused OpenAI – via the lawsuit – of violating their book copyrights by including the books in training data without permission. Moreover, that lawsuit suggested that OpenAI may have used illegal shadow libraries to source the books.

When big companies fight with lawsuits, there are many people indirectly swept up in the matter who don’t have the resources to individually challenge tech giants, SlashGear reported. It’s no surprise, then, that Google is facing a proposed class action suit that wants, among other things, for the company to hit pause on providing commercial access to its AI models.

In my opinion, Google and other big companies have absolutely no right to steal content from creators, especially because they do not ask for permission to use work that doesn’t belong to them, nor do they financially compensate the creators. This is why I have stopped posting my artwork publicly online.


Google Announced Hands-Free Photos For The Pixel Family



Google announced that their latest Feature Drop is here, and it’s jam-packed with helpful tools and updates for your Pixel Phone, Pixel Watch and Fitbit devices. These began rolling out yesterday, and will continue over the next few weeks.

For Pixel Phones

Peace of mind from Google Assistant: Use your voice to ask Google Assistant on your Pixel phone to start emergency sharing or to schedule a safety check for some extra peace of mind. If you’re out for a night run, just say “Hey Google, start a safety check for 30 minutes.” If you don’t respond to your safety check in the set duration, your emergency contacts will be notified and your real-time location will be shared.

Added safety on the road: Car crash detection on Pixels has helped keep drivers safe since launching in 2019, and now it can even keep loved ones in the loop if you’ve been in a severe crash. In addition to contacting emergency services, it can share your real-time location and call status with your emergency contacts.

Stunning videos, down to the smallest detail: Pixel 7 Pro’s Macro Focus is now available for video, so you can have larger-than-life videos of the smallest details, like butterflies fluttering or flowers waving in the wind.

Easier hands-free photos: Pixel 6 and newer phones will now let you take self-timed photos by simply raising your palm to trigger the timer after setting it for 3 or 10 seconds.

Express yourself with wallpapers that wow: Now on Pixel 6 and newer phones, you can bring your favorite memories of friends and family to life with Pixel’s new cinematic wallpapers. Pixel uses AI to transform your 2D wallpaper photos into dynamic 3D scenes for a truly magical look. And with new emoji wallpapers, you can also mix and match over 4,000 emoji with different patterns and colors to create live wallpapers that fit your personality.

Recorder speaker labels are even better: Recorder makes transcribing recordings a breeze with speaker labels. Starting next week, users with Pixel 6 and newer phones will be able to export transcripts into Google Docs, generate speaker-labeled video clips and search for speakers within recordings.

Quick access to smart home controls: Quickly access your favorite home devices right from your Pixel lock screen when using the Google Home app. Use the designated home panel to turn off lights, adjust the temperature, see your cameras and more.

Smarter haptics: For Pixel 6a and Pixel 7a, Pixel’s adaptive haptics can now lower its vibration intensity when it detects that it’s on a hard, flat surface like a desk or table.

Charging that adapts to your habits: Adaptive Charging now uses Google AI to help extend the lifespan of your Pixel battery. When you plug in your phone, it can predict a long charging session based on your previous charging habits, and slowly charge to 100% one hour before it’s expected to be unplugged.

New Google Assistant Voices: Google Assistant now sounds more natural and relatable to even more users with two new options to add to our diverse array of voices, totaling 12 in U.S. English.

In addition, Google has also introduced new features for Pixel Watches. For example, Pixel Watch will now be able to check your oxygen saturation (SpO2) and help you identify changes in the level of oxygen in your blood while you are sleeping. There are also new features for Fitbit devices.

To me, it sounds like some of these Pixel features could make life easier for people with certain disabilities. Hands-free photos, easy speaker-labeled recordings, and quick access to smart home controls are all great places for Google Pixel to start.


Google Cloud Partners With Mayo Clinic To Use AI In Health Care



Google’s cloud business is expanding its use of new artificial intelligence technologies in health care, giving medical professionals at Mayo Clinic the ability to quickly find patient information using the types of tools powering the latest chatbots, CNBC reported.

On Wednesday, Google Cloud said Mayo Clinic is testing a new service called Enterprise Search on Generative AI App Builder, which was introduced Tuesday. The tool effectively lets clients create their own chatbots using Google’s technology to scour mounds of disparate internal data.

In health care, CNBC reported, that means workers can interpret data such as a patient’s medical history, imaging records, genomics or labs more quickly and with a simple query, even if the information is stored across different formats and locations. Mayo Clinic, one of the top hospital systems in the U.S. with dozens of locations, is an early adopter of Google’s technology, as the company tries to bolster the use of generative AI in the medical system.

Mayo Clinic will test out different use cases for the search tool in the coming months, and Vish Anantraman, chief technology officer at Mayo Clinic, said that it has already been “very fulfilling” for helping clinicians with administrative tasks that often contribute to burnout.

According to CNBC, generative AI has been the hottest topic in tech since late 2022, when Microsoft-backed OpenAI released the chatbot ChatGPT to the public. Google raced to catch up, rolling out its Bard AI chat service earlier this year and pushing to embed the underlying technology into as many products as possible. Health care is a particularly challenging industry, because there’s less room for incorrect answers or hallucinations, which occur when AI models fabricate information entirely.

Recently, Google posted on The Prompt: “Let’s talk about recent AI missteps”. From the article:

…By now, most of us have heard about “hallucinations,” which are when a generative AI model outputs nonsense or invented information in response to a prompt. You’ve probably also heard about companies accidentally exposing proprietary information to AI assistants without first verifying that interactions won’t be used to further train models. This oversight could potentially expose private information to anyone in the world using the assistants, as we discussed in earlier editions of “The Prompt”…

Google also wrote a blog post titled: “Bringing Generative AI to search experiences”. From the article:

…For example, building search by breaking long documents into chunks and feeding each segment into an AI assistant typically isn’t scalable and doesn’t effectively provide insights across multiple sources. Likewise, many solutions are limited in the data types they can handle, prone to errors, and susceptible to data leakage…. Even when organizations make these efforts, the resulting solutions tend to lack feature completeness and reliability, with significant investments of time and resources required to achieve high quality results…

Google also points out that their Gen App Builder lets developers create search engines that help ground outputs in specific data sources for accuracy and relevance, can handle multimodal data such as images, and include controls over how answer summaries are generated. Google also indicates that multi-turn conversations are supported so that users can ask follow-up questions as they peruse outputs, and customers have control over their data – including the ability to support HIPAA compliance for health care use cases.
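The key idea behind “grounding” is that the model’s answers are restricted to passages retrieved from a known document set, so every response can be traced back to a source. As a purely hypothetical sketch of that idea (this is not Google’s actual Gen App Builder API, and the document names are made up for illustration), a toy grounded retriever might look like this:

```python
import re

# Toy illustration of "grounded" retrieval: answers are drawn only from a
# known document set, so every response can be traced back to a source.
# Hypothetical sketch -- not Google's Gen App Builder API.

def tokenize(text: str) -> set[str]:
    """Lowercase the text and extract alphanumeric words."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def grounded_answer(query: str, documents: dict[str, str]) -> tuple[str, str]:
    """Return (source_id, passage) for the document that best matches the
    query, scored by simple word overlap. Returns ('none', '') when no
    document shares any words with the query -- i.e., the system refuses
    rather than inventing an answer."""
    query_words = tokenize(query)
    best_id, best_score = "none", 0
    for doc_id, text in documents.items():
        score = len(query_words & tokenize(text))
        if score > best_score:
            best_id, best_score = doc_id, score
    return (best_id, documents.get(best_id, ""))

docs = {
    "allergy-note": "Patient reports a penicillin allergy recorded in 2019.",
    "lab-result": "Latest lab result shows normal blood glucose levels.",
}

source, passage = grounded_answer("any known allergy?", docs)
```

The important design choice is the refusal path: when nothing in the document set matches, the system returns no answer instead of inventing one – which is exactly the behavior that matters in a domain like health care.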

Personally, I would prefer to talk to an actual human being about whatever questions I might have about my health care needs. Handing this over to a generative AI that could easily make mistakes or have “hallucinations” sounds like a gimmick that could potentially cause harm to patients.


Google Is Updating Its Inactive Account Policies



Google’s VP, Product Management, Ruth Kricheli, posted on The Keyword about “Updating our inactive account policies”. From the post:

People want the products and services they use online to be safe and secure. Which is why we have invested in technology and tools to protect our users from security threats, like spam, phishing scams, and account hijacking.

Even with these protections, if an account hasn’t been used for an extended period of time, it is more likely to be compromised. This is because forgotten or unattended accounts often rely on old or re-used passwords that may have been compromised, haven’t had two factor authentication set up, and receive fewer security checks by the user.

Our internal analysis shows abandoned accounts are 10x less likely than active accounts to have 2-step-verification set up. Meaning, these accounts are often vulnerable, and once an account is compromised, it can be used for anything from identity theft to a vector for unwanted or even malicious content, like spam.

To reduce this risk we are updating our inactivity policy for Google Accounts to 2 years across our products. Starting later this year, if a Google Account has not been used or signed into for at least 2 years, we may delete the account and its contents – including content within Google Workspace (Gmail, Docs, Drive, Meet, Calendar), YouTube and Google Photos.

The policy only applies to personal Google Accounts, and will not affect accounts for organizations like schools or businesses. This update aligns our policy with industry standards around retention and account deletion and also limits the amount of time Google retains your unused personal information.

The blog post provided the following information:

While the policy takes effect today, it will not immediately impact users with an inactive account – the earliest we will begin deleting accounts is December 2023.

We will take a phased approach, starting with accounts that were created and never used again.

Before deleting an account, we will send multiple notifications over the months leading up to deletion, to both the account email address and the recovery email (if one has been provided).

Google says the simplest way to keep a Google Account active is to sign in at least once every 2 years. If you have signed into your Google Account or any of our services recently, your account is considered active and will not be deleted.

Things you can do to keep your account active include: Reading or sending an email; Using Google Drive; Watching a YouTube video; Downloading an app on the Google Play Store; Using Google Search; Using Sign in with Google to sign in to a third-party app or service.
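The rule above boils down to simple date arithmetic: compare the last qualifying activity against a two-year cutoff. A minimal sketch, assuming a plain 730-day threshold (Google’s exact criteria may differ):

```python
from datetime import date, timedelta

# Assumed threshold: two years, approximated here as 730 days.
INACTIVITY_LIMIT = timedelta(days=2 * 365)

def is_at_risk(last_activity: date, today: date) -> bool:
    """True once an account has gone more than ~2 years without any
    qualifying activity (sign-in, reading email, watching a video, etc.)."""
    return today - last_activity > INACTIVITY_LIMIT

# An account last used in May 2021 is past the cutoff by mid-2023,
# while one used in January 2023 is not.
print(is_at_risk(date(2021, 5, 1), date(2023, 7, 15)))
print(is_at_risk(date(2023, 1, 10), date(2023, 7, 15)))
```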

Personally, I put two-factor authentication on everything I can. It makes it much harder for some random person to hack into your accounts and take them over.

As far as I can tell, the post on Google’s Keyword blog is the only place where this information has been published. I think there will be people who never look at The Keyword, and who may be unpleasantly surprised when Google decides to delete their account (in December of 2023).


YouTube Tests Blocking Videos Unless You Disable Ad Blockers



YouTube is running an experiment asking some users to disable their ad blockers or pay for a premium subscription, or they will not be allowed to watch videos, BleepingComputer reported.

As first spotted by a Reddit user this week, YouTube will display a pop-up warning some users that “ad blockers are not allowed”. “It looks like you may be using an ad blocker. Ads allow YouTube to stay free for billions of users worldwide,” the message adds.

Upon receiving this notification, users will have two options: either disable their ad blocker to allow YouTube ads or consider subscribing to YouTube Premium to get rid of all advertisements. As explained in the pop-up, “you can go ad-free with YouTube Premium, and creators can still get paid from your subscription.”

A YouTube spokesperson confirmed this experiment and said the company urges viewers to try YouTube Premium or allow ads on the platform.

“We’re running a small experiment globally that urges viewers with ad blockers enabled to allow ads on YouTube or try YouTube Premium,” the spokesperson told BleepingComputer. “Ad blocker detection is not new, and other publishers regularly ask viewers to disable ad blockers.”

IGN reported that Google has announced that it has begun experimenting with a feature that blocks users who have an ad blocker enabled on YouTube.

The initiative was first pointed out by Redditor Sazk100, who posted a screenshot a few days ago mentioning that ad blockers are no longer allowed on YouTube. The pop-up says that those who are using an ad blocker will not be allowed to watch videos on the platform unless they enable ads on YouTube or subscribe to YouTube Premium, which includes access to original programming on the platform and the ability to download videos while removing ads.

A YouTube employee also confirmed to a moderator on the YouTube subreddit that the feature is just an “experiment” that the team is currently working on.

According to IGN, Google testing out a feature that curbs ad blockers on YouTube should come as no surprise, especially with YouTube ads becoming more intrusive in recent years. Last year, the company ran an experiment that forced users to watch a long chain of short, unskippable ads.

Elsewhere, creators have seen a steady decline in ad revenue, beginning with 2017’s infamous “Adpocalypse.” Many have turned to Patreon and other means in order to make up the shortfalls.

9to5 Google reported that a YouTube employee has since confirmed to the r/YouTube moderation team that, for now, this is just an “experiment” – YouTube is only testing blocking ad blockers.

Really, it’s easy to see why YouTube might enact such a rule, 9to5 Google reported. Ad blockers strip away income generated from videos which pays for the ever-increasing storage and bandwidth needs of that content. But, at the same time, user frustration is also pretty clear. YouTube has been escalating its ad load tremendously in recent years, and YouTube Premium isn’t particularly affordable for occasional viewers at $10/month.

As for me, I started paying for YouTube Premium a while ago, because I find ads to be extremely intrusive and annoying. The subscription costs $10 a month, which feels like a reasonable amount for someone like me who posts a lot of content on YouTube. However, I also understand most people don’t use YouTube the way I do.


Where’s the Bot?



Wendy’s is automating its drive-through service using an artificial-intelligence chatbot powered by natural language software developed by Google and trained to understand the myriad ways customers order off the menu, The Wall Street Journal reported.

The Dublin, Ohio-based fast-food chain’s chatbot will be officially rolled out in June at a company-owned restaurant in Columbus, Ohio, Wendy’s said. The goal is to streamline the ordering process and prevent long lines in the drive-through lanes from turning customers away, said Wendy’s Chief Executive Todd Penegor.

According to the Wall Street Journal, Wendy’s didn’t disclose the cost of the initiative beyond saying the company has been working with Google in areas like data analytics, machine learning and cloud tools since 2021.

“It will be very controversial,” Mr. Penegor said about the new artificial intelligence-powered chatbots. “You won’t know you’re talking to anybody but an employee,” he said.

To do that, Wendy’s software engineers have been working with Google to build and fine-tune a generative AI application on top of Google’s own large language model, or LLM – a vast algorithmic software tool loaded with words, phrases and popular expressions in different dialects and accents and designed to recognize and mimic the syntax and semantics of human speech.

Gizmodo reported: AI chatbots have come for journalism, and now they are coming for our burgers. Wendy’s is reportedly gearing up to unveil a chatbot-powered drive-thru experience next month, with help from a partnership with Google.

“Google Cloud’s generative AI technology creates a huge opportunity for us to deliver a truly differentiated, faster and frictionless experience for our customers, and allows our employees to continue focusing on making great food and building relationships with fans that keep them coming back time and again,” said Wendy’s CEO Todd Penegor in a statement emailed to Gizmodo.

According to Gizmodo, Wendy’s competitor McDonald’s has already been experimenting with an AI drive-thru – with mixed results. Videos posted to TikTok illustrated just how woefully ill-prepared automation is at taking fast food orders, and how woefully unprepared humans are to deal with it.

McDonald’s began testing AI drive-thrus in June 2021 at 10 locations in Chicago. McDonald’s CEO Chris Kempczinski reportedly explained that the AI system had an 85% order accuracy. However, according to Restaurant Dive in June 2022, the company was seeing an accuracy percentage in the low 80s, when it was really hoping for 95% accuracy before a wider rollout.

The Register, in an article whose title started with “Show us the sauce code…”, also reported that Wendy’s and Google have together built a chatbot for taking drive-thru orders, using large language models and generative AI.

According to The Register, the system works by converting spoken fast-food orders to text that can be processed by Google’s large language model. A generative component added to the system is designed to make the chatbot interact with people in a more natural and conversational manner, so that it’s less rigid and robotic.

The completed model was trained to recognize specific phrases or acronyms customers typically use when ordering, such as “JBC” describing Wendy’s junior bacon cheeseburger, “Frosties” milkshakes, or its combination meal “biggie bags.” Unsurprisingly, The Register reported, the chatbot, like human workers, will gladly offer to upsize meals or add more items to an order, since it has been programmed to try to persuade hungry patrons to spend more cash.
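One small, concrete piece of a pipeline like that is mapping customer shorthand onto canonical menu items before the order is processed. As a hypothetical toy sketch (the aliases and item names here are assumptions for illustration, not Wendy’s actual system):

```python
# Hypothetical sketch: normalizing customer shorthand ("JBC", "Frosties")
# into canonical menu items. Aliases and names are illustrative assumptions.
MENU_ALIASES = {
    "jbc": "Junior Bacon Cheeseburger",
    "frosty": "Frosty",
    "frosties": "Frosty",
    "biggie bag": "Biggie Bag",
}

def normalize_order(spoken: str) -> list[str]:
    """Scan a transcribed utterance for known aliases and return the
    canonical menu items. Longer aliases are matched first so that
    'biggie bag' wins over any shorter overlapping alias."""
    text = spoken.lower()
    items = []
    for alias in sorted(MENU_ALIASES, key=len, reverse=True):
        if alias in text:
            items.append(MENU_ALIASES[alias])
            text = text.replace(alias, "")  # avoid double-counting
    return items

order = normalize_order("Two JBCs and a Frosty, please")
```

A real system would sit downstream of speech-to-text and an LLM, but longest-alias-first matching like this is a common trick to keep overlapping shorthand from being counted twice.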

The Register also reported that Wendy’s will try out its AI-powered drive-thru service in June at a restaurant in Columbus, Ohio. Up to 80 percent of orders are reportedly placed by customers at the burger slinger’s drive-thru lanes, an increase of 30 percent since the COVID-19 pandemic.

Personally, I’m of two minds about this. On the one hand, if the AI turns out to be really good at what it does, it could make the drive-thru lines move faster. People don’t have to wait as long, and Wendy’s gets more money.

On the other hand, I have concerns that the AI may eventually be used in every Wendy’s. That could result in fewer job opportunities for real-life, human workers.


Android Developers Blog Gives Users Control Of Their Data



The Android Developers Blog posted “Giving Users More Transparency and Control Over Account Data”. It was posted by Bethel Otuteye, Senior Director, Product Management, Android App Safety. From the blog post:

Google Play has launched a number of recent initiatives to help developers build consumer trust by showcasing their apps’ privacy and security practices in a way that is simple and easy to understand. Today we’re building on this work with a new data deletion policy that aims to empower users with greater clarity and control over their in-app data.

For apps that enable app account creation, developers will soon need to provide an option to initiate account and data deletion from within the app and online. This web requirement, which you will link in your Data Safety Form, is especially important so that a user can request account and data deletion without having to reinstall an app.

While Play’s Data safety section already lets developers highlight their data deletion options, we know that users want an easier and more consistent way to request them. By creating a more intuitive experience with this policy, we hope to better educate our shared users on the data controls available to them and create greater trust in your apps and Google Play more broadly.

As the new policy states, when you fulfill a request to delete an account, you must also delete the data associated with that account. The feature also gives developers a way to provide more choice: users who may not want to delete their account entirely can choose to delete other data only where applicable (such as activity history, images, or videos). For developers that need to retain certain data for legitimate reasons such as security, fraud prevention, or regulatory compliance, you must clearly disclose those data retention practices…

…As a first step we’re asking developers to submit answers to new Data deletion questions in your app’s Data Safety form by December 7. Early next year, Google Play users will begin to see reflected changes in your app’s store listing, including the refreshed data deletion badge in the Data safety section and the new Data deletion area.

9to5 Google reported that Google specifies that Play developers must “delete the user data associated with that app account”. 

Temporary account deactivation, disabling, or “freezing” the app account does not qualify as account deletion. If you need to retain certain data for legitimate reasons such as security, fraud protection, or regulatory compliance, you must clearly inform users about your data retention practices (for example, within your privacy policy).

TechCrunch reported that Google announced a new account deletion policy for Android apps, which means that apps that offer account creation must have an easy way to delete the account as well. 

According to TechCrunch, the company said it would start enforcing this policy sometime early next year. This move follows Apple, which implemented a similar policy on June 30, 2022, for apps on the App Store.

Personally, I think that requesting that your data be removed from an app is an excellent idea. This is especially important for people who have decided they no longer want to use a particular app.