All posts by JenThorpe

Trudeau Announces $2.4 billion For AI-Related Investments



The Liberal government is setting aside $2.4 billion in its upcoming budget to build capacity in artificial intelligence, Prime Minister Justin Trudeau announced Sunday, CBC reported.

The bulk of that — $2 billion — is going to a fund that aims to provide access to computing capabilities and technical infrastructure.

He said the federal government will begin consulting with industry soon on a new AI Compute Access Fund and an accompanying strategy to expand the sector in Canada.

“We want to help companies adopt AI in a way that will have positive impacts for everyone,” Trudeau said, adding that $200 million will go toward boosting the adoption of AI in sectors like agriculture, health care, and clean technology.

The government plans to launch a $50-million AI institute to protect against what it calls “advanced or nefarious AI systems,” and another $5.1 million will go toward an Office of the AI and Data Commissioner to enforce the proposed Artificial Intelligence and Data Act.

Prime Minister of Canada Justin Trudeau posted a statement titled “Securing Canada’s AI advantage.” Here are some key points from it:

Investing $2 billion to build and provide access to computing capabilities and technological infrastructure for Canada’s world-leading AI researchers, start-ups, and scale-ups: As part of this investment, we will soon be consulting with AI stakeholders to inform the launch of a new AI Compute Access Fund to provide near-term support to researchers and industry. We will also develop a new Canadian AI Sovereign Compute Strategy to catalyze the development of Canadian-owned and located AI infrastructure. Ensuring access to cutting-edge computing infrastructure will attract more global AI investment to Canada, develop and recruit the best talent, and help Canadian businesses compete and succeed on the world stage.

Boosting AI start-ups to bring new technologies to market, and accelerating AI adoption in critical sectors, such as agriculture, clean technology, health care, and manufacturing, with $200 million in support through Canada’s Regional Development Agencies.

Investing $100 million in the NRC IRAP AI Assist Program to help small and medium-sized businesses scale up and increase productivity by building and deploying new AI solutions. This will help companies incorporate AI into their businesses and take on research, product development, testing, and validation work for new AI-based solutions.

Supporting workers who may be impacted by AI, such as creative industries, with $50 million for the Sectoral Workforce Solutions Program, which will provide new skills training for workers in potentially disrupted sectors and communities.

Creating a new Canadian AI Safety Institute, with $50 million to further the safe development and deployment of AI. The Institute, which will leverage input from stakeholders and work in coordination with international partners, will help Canada better understand and protect against the risks of advanced or nefarious AI systems, including to specific communities.

Strengthening enforcement of the Artificial Intelligence and Data Act, with $5.1 million for the Office of the AI and Data Commissioner. The proposed Act aims to guide AI innovation in a positive direction to help ensure Canadians are protected from potential risks, ensuring the responsible adoption of AI by Canadian businesses.

In my opinion, it sounds like the Canadian Government has put a lot of thought into what it wants its AI programs to include. Prime Minister Trudeau appears to have a high opinion of what AI can do for Canada.



Fake Facebook MidJourney AI Page Promoted Malware To 1.2 Million People



Hackers are using Facebook advertisements and hijacked pages to promote fake Artificial Intelligence services, such as MidJourney, OpenAI’s Sora and ChatGPT-5, and DALL-E, to infect unsuspecting users with password-stealing malware, Bleeping Computer reported.

The malvertising campaigns are run through hijacked Facebook profiles that impersonate popular AI services, pretending to offer a sneak peek of new features.

Users tricked by the ads become members of fraudulent Facebook communities, where the threat actors post news, AI-generated images, and other related info to make pages look legitimate.

However, the community posts often promote limited-time access to upcoming and eagerly anticipated AI services, tricking users into downloading malicious executables that infect Windows computers with information-stealing malware like Rilide, Vidar, IceRAT, and Nova.

Information-stealing malware focuses on stealing data from a victim’s browser, including stored credentials, cookies, cryptocurrency wallet information, autocomplete data, and credit card information.

The Record reported cybercriminals are taking over Facebook pages and using them to advertise fake generative artificial intelligence software loaded with malware.

According to researchers at the cybersecurity company Bitdefender, the cybercrooks are taking advantage of the popularity of new generative AI tools and using “malvertising” to impersonate legitimate products like Midjourney, Sora AI, ChatGPT-5, and others.

The campaigns follow a certain blueprint. Cybercriminals take over a Facebook account and begin to make changes to the page’s descriptions, cover and profile photo. According to Bitdefender, they make “the page seem as if it is run by well-known AI-based image and video generators.”

They then populate the pages with purported product news and advertisements for software, which are themselves generated with AI software.

The downloads contain various types of info-stealing malware, such as Rilide, Vidar, IceRAT, and Nova Stealers, which are available for purchase on the dark web, allowing unsophisticated cybercriminals to launch attacks.

According to The Record, the most notable Facebook page hijack involved the application Midjourney, a popular tool for creating AI-generated images. Its hijacked page had 1.2 million followers and was active for nearly a year before it was shut down earlier this month.

Tom’s Guide reported that once an account is compromised, the hackers give it an AI-themed makeover, with new cover and profile photos and descriptions, to make it appear as if it is run by a well-known AI service. They then post AI-generated photos and advertisements to further impersonate whichever AI image or video generation service they want to leverage in their attacks.

During their investigation, Bitdefender’s security researchers found that the hackers responsible used a much different approach with Midjourney. For other AI tools, they urged visitors to download the latest versions from Dropbox or Google Drive, but with Midjourney, they created more than a dozen malicious sites that impersonated the tool’s actual landing page. These sites then tried to trick visitors into downloading the latest version of the tool via a GoFile link.

In my opinion, the cybercriminals are obviously terrible people who want to take advantage of others. I’m hoping that Facebook has taken swift action against the crooks who likely caused harm to several Facebook users.


Meta Announces Approach To Labeling AI-Generated Content



Monika Bickert, Vice President of Content Policy at Meta, posted information regarding their approach to labeling AI-generated content and manipulated media.

We are making changes to the way we handle manipulated media on Facebook, Instagram and Threads based on feedback from the Oversight Board that we should update our approach to reflect a broader range of content that exists today and provide context about the content through labels. These changes are also informed by Meta’s policy review process that included extensive public opinion surveys and consultations with academics, civil society organizations, and others.

We agree with the Oversight Board’s argument that our existing approach is too narrow since it only covers videos that are created or altered by AI to make a person appear to say something they didn’t say. Our manipulated media policy was written in 2020 when realistic AI-generated content was rare and the overarching concern was about videos. 

In the last four years, and particularly in the last year, people have developed other kinds of realistic AI-generated content like audio and photos, and this technology is quickly evolving. As the Board noted, it’s equally important to address manipulation that shows a person doing something they didn’t do.

The Board also argued that we unnecessarily risk restricting freedom of expression when we remove manipulated media that does not otherwise violate our Community Standards. It recommends a “less restrictive” approach to manipulated media like labels with context. 

In February, we announced we’ve been working with industry partners on common technical standards for identifying AI content, including video and audio. Our “Made with AI” labels on AI-generated video, audio, and images will be based on our detection of industry-shared signals of AI images or people self-disclosing that they’re uploading AI-generated content. We already add “Imagined with AI” to photorealistic images created using our Meta AI features.

TechCrunch reported Meta announced changes to its rules on AI-generated content and manipulated media, following criticism from its Oversight Board. Starting next month it said it will label a wider range of such content, including by applying a “Made with AI” badge to deepfakes (aka synthetic media). Additional contextual information may be shown when content has been manipulated in other ways that pose a high risk of deceiving the public on an important issue.

According to TechCrunch, the move could lead to the social networking giant labeling more pieces of content that have the potential to be misleading — a step that could be important in a year of many elections taking place around the world. However, for deepfakes, Meta is only going to apply labels where the content in question has “industry standard AI-generated content” signals.

AI-generated content that falls outside those bounds will, presumably, go unlabeled.

ArsTechnica reported Meta announced policy updates to stop censoring harmless AI-generated content and instead begin “labeling a wider range of audio, video, and image content as ‘Made with AI.’”

Previously, Meta would only remove “videos that are created or altered by AI to make a person appear to say something they didn’t say.” The Oversight Board warned that this policy failed to address other manipulated media, including “cheap fakes,” manipulated audio, or content showing people doing things they’d never done.

In my opinion, it is a good idea for Meta to start adding “Made with AI” labels to content that was detected as AI-generated. Doing so might reduce confusion on Meta’s sites.


Blue Check Marks Reappear On Some Large X Accounts



Almost a year since blue check marks disappeared from some large, influential accounts on X (formerly Twitter), they have started to return, NBC News reported. On Wednesday, some individuals with large followings started receiving notifications that they had received complimentary premium features, including the return of the iconic blue check mark to their accounts.

In late March, platform owner Elon Musk posted that accounts with more than 2,500 verified subscriber followers would get X Premium features for free, and accounts with more than 5,000 verified subscriber followers would get X Premium+ features. One of those features is the blue check mark icon that previously served as an indicator of real accounts belonging to celebrities, journalists, influencers and other public figures.

In April 2023, following Musk’s takeover of the company, check marks were pulled from “legacy” verified accounts — which had been verified through a tightly controlled process meant to designate accounts as “notable.” Under Musk, verification became a paid feature for members of X Premium, in which virtually anyone could enroll. Politicians were given gray check marks and organizations could pay for gold check marks. At the same time, Musk said he gifted some premium features to some influential accounts, like author Stephen King, who posted that he had received the blue check against his will.

The Verge reported just as Elon Musk said, X is doling out the free Premium and Premium Plus memberships to accounts with a high number of verified followers. 

Multiple X users on Wednesday reported seeing the familiar blue “Verified” checkmark next to their handles despite not paying for either paid X subscription tier. Musk last week announced that X accounts with over 2,500 “verified subscriber followers” would receive a free Premium membership, while accounts with over 5,000 would receive a free Premium Plus membership.

Now, it appears that many influential X accounts with already large followings in the tens of thousands (which may translate to verified followings that cross the benchmark) are once again checkmarked, or will be, whether they like it or not.

TechCrunch reported X is giving free blue checks to users who have more than 2,500 “verified” followers, which are people who subscribe to X Premium. Popular posters will get a blue check, but not everyone is happy about it: People are now frantically posting to make it clear that they didn’t buy a blue check, but rather the blue check was foisted upon them.

According to TechCrunch, back in the days of yore, Twitter’s blue check indicated that a user was influential in some way. Back then, blue checks actually helped users confirm that public figures were who they said they were. So if someone was popular on Twitter, perhaps because they were a celebrity, an influencer or a journalist, they would get a blue check, which could also help reduce the spread of misinformation.

In my opinion, it appears that some large X/Twitter accounts that have received blue checkmarks are not entirely thrilled about having them. That said, the checkmark might have at least one use: making it harder for other people to impersonate a celebrity.


FCC To Vote To Restore Net Neutrality Rules, Reversing Trump



The U.S. Federal Communications Commission will vote to reinstate landmark net neutrality rules and assume new regulatory oversight of broadband internet that was rescinded under former President Donald Trump, the agency’s chair said.

According to Reuters, the FCC told advocates on Tuesday of the plan to vote on the final rule at its April 25 meeting. The commission voted 3-2 in October on the proposal to reinstate open internet rules adopted in 2015 and re-establish the commission’s authority over broadband internet.

Net neutrality refers to the principle that internet providers should enable access to all content and applications regardless of the source, and without favoring or blocking particular products or websites.

FCC Chair Jessica Rosenworcel confirmed the planned commission vote in an interview with Reuters. “The pandemic made it clear that broadband is an essential service, that every one of us – no matter who we are or where we live – needs it to have a fair shot at success in the digital age.”

Engadget reported that the Federal Communications Commission (FCC) plans to vote to restore net neutrality later this month. With Democrats finally holding an FCC majority in the final year of President Biden’s first term, the agency can fulfill a 2021 executive order from the President and bring back the Obama-era rules that the Trump administration’s FCC gutted in 2017.

The FCC plans to hold the vote during a meeting on April 25. Net neutrality treats broadband services as an essential resource under Title II of the Communications Act, giving the FCC greater authority to regulate the industry. It lets the agency prevent ISPs from engaging in anti-consumer behavior like unfair pricing, blocking or throttling content, and providing pay-to-play “fast lanes” to internet access.

Democrats had to wait three years to enact Biden’s 2021 executive order to reinstate the net neutrality rules passed in 2015 by President Obama’s FCC. The stalled confirmation process of Biden’s FCC nominee Gigi Sohn played no small part in the delay. She withdrew her nomination in March 2023 following what she called “unrelenting, dishonest and cruel attacks.”

ArsTechnica also reported that the Federal Communications Commission has scheduled an April 25 vote to restore net neutrality rules similar to the ones it introduced during the Obama era and repealed under former President Trump.

“After the prior administration abdicated authority over broadband services, the FCC has been handcuffed from acting to fully secure broadband networks, protect consumer data, and ensure the Internet remains fast, open, and fair,” FCC Chairwoman Jessica Rosenworcel said today. “A return to the FCC’s overwhelmingly popular and court-approved standard of net neutrality will allow the agency to serve once again as a strong consumer advocate of an open internet.”

According to ArsTechnica, while there hasn’t been a national standard since then-Chairman Ajit Pai led a repeal in 2017, Internet service providers still have to follow the net neutrality rules because California and other states impose their own similar regulations.

In my opinion, the FCC’s vote to restore net neutrality is an excellent idea. Anything that makes it easier for people to use the internet, ideally without ISPs blocking or throttling the content they want to see, will make people less frustrated when searching for news.


Amazon Gives Up On No-Checkout Shopping In Its Grocery Stores



Amazon has decided to give up on its Just Walk Out program that lets customers leave its brick-and-mortar grocery stores without a formal checkout process, The Verge reported.

Instead, it’s switching fully to “Dash Carts,” where customers can scan products as they toss them in their cart.

That’s according to The Information, which reports that the company is pulling Just Walk Out from all larger stores where the system is in place and “sprucing up the stores across the board” as it prepares to expand Amazon Fresh locations this year. Amazon will keep using the technology in its smaller convenience stores, though.

Amazon hasn’t managed to get a handle on in-person retail despite buying the upscale, popular Whole Foods chain back in 2017. Over the years, the online shopping giant has closed all of its Books, 4-Star, and Pop-up stores and halted the expansion of its Fresh stores.

According to The Verge, with the company falling back on its Dash Carts, it’s essentially shrinking self-checkout into a contraption with scanners and a touchscreen, bolted onto special shopping carts — something that other retailers have tried in the US and in Europe — followed by checking out with a palm scanner. That has benefits like customers being able to keep a running total while they shop, but Amazon would still face hurdles.

Gizmodo reported Amazon is phasing out its checkout-less grocery stores with “Just Walk Out” technology. The company’s senior vice president of grocery stores says they’re moving away from Just Walk Out, which relied on cameras and sensors to track what people were leaving the store with.

Just over half of Amazon Fresh stores are equipped with Just Walk Out. The technology allows customers to skip checkout altogether by scanning a QR code when they enter the store. Though it seemed completely automated, Just Walk Out relied on more than 1,000 people in India watching and labeling videos to ensure accurate checkouts. The cashiers were simply moved off-site, and they watched you as you shopped.

Instead, Amazon is moving towards Dash Carts, a scanner and a screen embedded in your shopping cart, allowing you to check out as you shop. These offer a more reliable solution than Just Walk Out. Amazon Fresh stores will also feature self-checkout counters from now on, for people who aren’t Amazon members.

Engadget also reported Amazon is removing Just Walk Out tech from all of its Fresh grocery stores in the U.S. The self-checkout system relies on a host of cameras, sensors and good old-fashioned human eyeballs to track what people leave the store with, charging the customers accordingly.

The technology has been plagued by issues from the outset. Most notably, Just Walk Out presents the illusion of automation, with Amazon crowing about generative AI and the like. Here’s where the smoke and mirrors come in. While the stores have no actual cashiers, there are reportedly over 1,000 real people in India scanning the camera feeds to ensure accurate checkouts.

According to Engadget, it’s also incredibly expensive to install and maintain the necessary equipment, which is likely why Just Walk Out technology was only adopted at around half of Fresh stores in the U.S.

In my opinion, it would be easier to just get your groceries from your local grocery store, or have a DoorDash driver bring the groceries you ordered to your door.


Google Pledges To Destroy Browsing Data To Settle ‘Incognito’ Lawsuit



Google plans to destroy a trove of data that reflects millions of users’ web-browsing histories, part of a settlement of a lawsuit that alleged the company tracked people without their knowledge, The Wall Street Journal reported.

According to the Wall Street Journal, the class action lawsuit, filed in 2020, accused Google of misleading users about how Chrome tracked the activity of anyone who used the private “incognito” browsing option. The lawsuit alleged that Google’s marketing and privacy disclosures didn’t properly inform users of the kinds of data being collected, including details about which websites they viewed.

The settlement details, filed Monday in San Francisco federal court, set out the actions the company will take to change its practices around private browsing. According to the court filing, Google has agreed to destroy billions of data points that the lawsuit alleges it improperly collected, to update disclosures about what it collects in private browsing and to give users the option to disable third-party cookies in that setting.

The agreement doesn’t include damages for individual users, but the settlement will allow individuals to file claims. The plaintiffs’ attorneys have already filed 50 such claims in California state court.

CBS News reported Google will destroy a vast trove of data as part of a settlement over a lawsuit that accused the search giant of tracking consumers even when they were browsing the web using “incognito” mode, which ostensibly keeps people’s online activity private.

The details of the settlement were disclosed Monday in San Francisco federal court, with a legal filing noting that Google will “delete and/or remediate billions of data records that reflect class members’ private browsing activities.”

The settlement stems from a 2020 lawsuit that claimed Google misled users into believing that it wouldn’t track their internet activities while they used incognito. The settlement also requires Google to change incognito mode so that users for the next five years can block third-party cookies by default.

“This settlement is an historic step in requiring dominant technology companies to be honest in their representations to users about how the companies collect and employ user data, and to delete and remediate data collected,” the settlement filing states.

“This settlement ensures real accountability and transparency from the world’s largest data collector and marks an important step toward improving and upholding our right to privacy on the internet,” the court document stated.

The Hill reported Google agreed to rewrite the disclosure that appears at the beginning of every “incognito mode” session to inform users that it collected data from private browsing sessions, according to court documents filed Monday.

“This settlement is an historic step in requiring dominant technology companies to be honest in their representations to users about how companies collect and employ user data, and to delete and remediate data collected,” the filing, submitted by the plaintiffs’ attorneys, reads.

In my opinion, Google shouldn’t have collected users’ data at all. Incognito mode was probably designed to imply that Google wouldn’t grab users’ data. Instead, Google grabbed it anyway.