Trudeau Announces $2.4 Billion For AI-Related Investments



The Liberal government is setting aside $2.4 billion in its upcoming budget to build capacity in artificial intelligence, Prime Minister Justin Trudeau announced Sunday, CBC reported.

The bulk of that — $2 billion — is going to a fund that aims to provide access to computing capabilities and technical infrastructure.

He said the federal government will begin consulting with industry soon on a new AI Compute Access Fund and an accompanying strategy to expand the sector in Canada.

“We want to help companies adopt AI in a way that will have positive impacts for everyone,” Trudeau said, adding that $200 million will go towards boosting the adoption of AI in sectors like agriculture, health care, and clean technology.

The government plans to launch a $50-million AI safety institute to protect against what it calls “advanced or nefarious AI systems,” and another $5.1 million will go toward an Office of the AI and Data Commissioner to enforce the proposed Artificial Intelligence and Data Act.

Prime Minister Justin Trudeau posted “Securing Canada’s AI advantage.” Here are some key points from it:

Investing $2 billion to build and provide access to computing capabilities and technological infrastructure for Canada’s world-leading AI researchers, start-ups, and scale-ups: As part of this investment, we will soon be consulting with AI stakeholders to inform the launch of a new AI Compute Access Fund to provide near-term support to researchers and industry. We will also develop a new Canadian AI Sovereign Compute Strategy to catalyze the development of Canadian-owned and located AI infrastructure. Ensuring access to cutting-edge computing infrastructure will attract more global AI investment to Canada, develop and recruit the best talent, and help Canadian businesses compete and succeed on the world stage.

Boosting AI start-ups to bring new technologies to market, and accelerating AI adoption in critical sectors, such as agriculture, clean technology, health care, and manufacturing, with $200 million in support through Canada’s Regional Development Agencies.

Investing $100 million in the NRC IRAP AI Assist Program to help small and medium-sized businesses scale up and increase productivity by building and deploying new AI solutions. This will help companies incorporate AI into their businesses and take on research, product development, testing, and validation work for new AI-based solutions.

Supporting workers who may be impacted by AI, such as creative industries, with $50 million for the Sectoral Workforce Solutions Program, which will provide new skills training for workers in potentially disrupted sectors and communities.

Creating a new Canadian AI Safety Institute, with $50 million to further the safe development and deployment of AI. The Institute, which will leverage input from stakeholders and work in coordination with international partners, will help Canada better understand and protect against the risks of advanced or nefarious AI systems, including to specific communities.

Strengthening enforcement of the Artificial Intelligence and Data Act, with $5.1 million for the Office of the AI and Data Commissioner. The proposed Act aims to guide AI innovation in a positive direction and to help ensure Canadians are protected from potential risks while ensuring the responsible adoption of AI by Canadian businesses.

In my opinion, it sounds like the Canadian government has put a lot of thought into what it wants to have in its AI programs. Prime Minister Trudeau appears to have a high opinion of what AI can do for Canada.

 


Fake Facebook MidJourney AI Page Promoted Malware To 1.2 Million People



Hackers are using Facebook advertisements and hijacked pages to promote fake Artificial Intelligence services, such as MidJourney, OpenAI’s SORA and ChatGPT-5, and DALL-E, to infect unsuspecting users with password-stealing malware, Bleeping Computer reported.

The malvertising campaigns are created by hijacked Facebook profiles that impersonate popular AI services, pretending to offer a sneak peek of new features.

Users tricked by the ads become members of fraudulent Facebook communities, where the threat actors post news, AI-generated images, and other related info to make pages look legitimate.

However, the community posts often promote limited-time access to upcoming and eagerly anticipated AI services, tricking users into downloading malicious executables that infect Windows computers with information-stealing malware like Rilide, Vidar, IceRAT, and Nova.

Information-stealing malware focuses on stealing data from a victim’s browser, including stored credentials, cookies, cryptocurrency wallet information, autocomplete data, and credit card information.

The Record reported cybercriminals are taking over Facebook pages and using them to advertise fake generative artificial intelligence software loaded with malware.

According to researchers at the cybersecurity company Bitdefender, the cybercrooks are taking advantage of the popularity of new generative AI tools and using “malvertising” to impersonate legitimate products like Midjourney, Sora AI, ChatGPT-5, and others.

The campaigns follow a certain blueprint. Cybercriminals take over a Facebook account and begin to make changes to the page’s descriptions, cover and profile photo. According to Bitdefender, they make “the page seem as if it is run by well-known AI-based image and video generators.”

They then populate the pages with purported product news and advertisements for software, which are themselves generated with AI software.

The downloads contain various types of info-stealing malware — like Rilide, Vidar, IceRAT, and Nova Stealers — which are available for purchase on the dark web, allowing unsophisticated cybercriminals to launch attacks.

According to The Record, the most notable Facebook page hijack involved the application Midjourney, a popular tool for creating AI-generated images. Its hijacked page had 1.2 million followers and was active for nearly a year before it was shut down earlier this month.

Tom’s Guide reported that once an account is compromised, the hackers give it an AI-themed makeover, with new cover and profile photos as well as descriptions, to make it appear as if it is run by whichever well-known AI image or video generator service they want to leverage in their attacks, then post AI-generated photos and advertisements to further the impersonation.

During their investigation, Bitdefender’s security researchers found that the hackers used a much different approach with Midjourney. For other AI tools, they urged visitors to download the latest versions from Dropbox or Google Drive, but with Midjourney, they created more than a dozen malicious sites that impersonated the tool’s actual landing page. These sites then tried to trick visitors into downloading the latest version of the tool via a GoFile link.

In my opinion, the cybercriminals are obviously terrible people who want to take advantage of others. I’m hoping that Facebook has taken swift action against the crooks, who likely caused harm to many Facebook users.


Meta Announces Approach To Labeling AI-Generated Content



Monika Bickert, Vice President of Content Policy at Meta, posted information regarding the company’s approach to labeling AI-generated content and manipulated media.

We are making changes to the way we handle manipulated media on Facebook, Instagram and Threads based on feedback from the Oversight Board that we should update our approach to reflect a broader range of content that exists today and provide context about the content through labels. These changes are also informed by Meta’s policy review process that included extensive public opinion surveys and consultations with academics, civil society organizations, and others.

We agree with the Oversight Board’s argument that our existing approach is too narrow since it only covers videos that are created or altered by AI to make a person appear to say something they didn’t say. Our manipulated media policy was written in 2020 when realistic AI-generated content was rare and the overarching concern was about videos. 

In the last four years, and particularly in the last year, people have developed other kinds of realistic AI-generated content like audio and photos, and this technology is quickly evolving. As the Board noted, it’s equally important to address manipulation that shows a person doing something they didn’t do.

The Board also argued that we unnecessarily risk restricting freedom of expression when we remove manipulated media that does not otherwise violate our Community Standards. It recommends a “less restrictive” approach to manipulated media like labels with context. 

In February, we announced we’ve been working with industry partners on common technical standards for identifying AI content, including video and audio. Our “Made with AI” labels on AI-generated video, audio, and images will be based on our detection of industry-shared signals of AI images or people self-disclosing that they’re uploading AI-generated content. We already add “Imagined with AI” to photorealistic images created using our Meta AI features.

TechCrunch reported Meta announced changes to its rules on AI-generated content and manipulated media, following criticism from its Oversight Board. Starting next month it said it will label a wider range of such content, including by applying a “Made with AI” badge to deepfakes (aka synthetic media). Additional contextual information may be shown when content has been manipulated in other ways that pose a high risk of deceiving the public on an important issue.

According to TechCrunch, the move could lead the social networking giant to label more pieces of content that have the potential to be misleading — a step that could be important in a year of many elections taking place around the world. However, for deepfakes, Meta is only going to apply labels where the content in question has “industry standard AI-generated content.”

AI-generated content that falls outside those bounds will, presumably, escape unlabeled.

ArsTechnica reported Meta announced policy updates to stop censoring harmless AI-generated content and instead begin “labeling a wider range of audio, video, and image content as ‘Made with AI.’”

Previously, Meta would only remove “videos that are created or altered by AI to make a person appear to say something they didn’t say.” The Oversight Board warned that this policy failed to address other manipulated media, including “cheap fakes,” manipulated audio, or content showing people doing things they’d never done.

In my opinion, it is a good idea for Meta to start adding “Made with AI” labels to content that was detected as AI-generated. Doing so might reduce confusion on Meta’s sites.


Google Phases Out Early Nest and Dropcam Models #1734



Google announced the discontinuation of support for early Dropcam, Dropcam Pro, and Nest Secure models, rendering these devices largely inoperable on April 8. To assist users in transitioning, Google offers a complimentary Nest camera for Nest Aware subscribers or a 50% discount on a new Nest camera for non-subscribers. Dropcam and Dropcam Pro users will lose the ability to save new clips, and they will have limited time to access existing ones. Nest Secure users will find their devices, such as the Nest x Yale door locks, unable to connect to WiFi. Google offers a free Nest Connect to extend the life of existing locks, with a reminder to contact support for details if not already received.



Blue Check Marks Reappear On Some Large X Accounts



Almost a year since blue check marks disappeared from some large, influential accounts on X (formerly Twitter), they have started to return, NBC News reported. On Wednesday, some individuals with large followings started receiving notifications that they had received complimentary premium features, including the return of the iconic blue check mark to their accounts.

In late March, platform owner Elon Musk posted that accounts with more than 2,500 verified subscriber followers would get X Premium features for free, and accounts with more than 5,000 verified subscriber followers would get X Premium+ features. One of those features is the blue check mark icon that previously served as an indicator of real accounts belonging to celebrities, journalists, influencers and other public figures.

In April 2023, following Musk’s takeover of the company, check marks were pulled from “legacy” verified accounts — which had been verified through a tightly controlled process meant to designate accounts as “notable.” Under Musk, verification became a paid feature for members of X Premium, in which virtually anyone could enroll. Politicians were given gray check marks, and organizations could pay for gold check marks. At the same time, Musk said he gifted some premium features to some influential accounts, like author Stephen King, who posted that he had received the blue check against his will.

The Verge reported just as Elon Musk said, X is doling out the free Premium and Premium Plus memberships to accounts with a high number of verified followers. 

Multiple X users on Wednesday reported seeing the familiar blue “Verified” checkmark next to their handles despite not paying for either paid X subscription tier. Musk last week announced that X accounts with over 2,500 “verified subscriber followers” would receive a free Premium membership, while accounts with over 5,000 would receive a free Premium Plus membership.

Now, it appears that many influential X accounts with already large followings in the tens of thousands (which may translate to verified followings that cross the benchmark) are once again checkmarked, or will be, whether they like it or not.

TechCrunch reported X is giving free blue checks to users who have more than 2,500 “verified” followers, which are people who subscribe to X Premium. Popular posters will get a blue check, but not everyone is happy about it: People are now frantically posting to make it clear that they didn’t buy a blue check, but rather the blue check was foisted upon them.

According to TechCrunch, back in the days of yore, Twitter’s blue check indicated that a user was influential in some way. Back then, blue checks actually helped users confirm that public figures were who they said they were. So if someone was popular on Twitter, perhaps because they were a celebrity, an influencer or a journalist, they would get a blue check, which could also help reduce the spread of misinformation.

In my opinion, it appears that some large X/Twitter accounts that have received blue checkmarks are not entirely thrilled about having them. That said, the checkmark might have at least one use: preventing other people from pretending to be a celebrity.


Google Podcasts is Dead



What a sad ending to yet another Google endeavor, something that has happened so many times. Honestly, I am not surprised, as Google has a track record of experimenting at the user’s expense. Killing off products that did not meet revenue expectations makes sense from a business perspective, I get it, but from a user experience perspective, I hate it.

The much-loved Google Reader of the past was canceled when Google could not earn ad dollars from it, as millions of people could consume content there rather than on a platform Google could monetize.

When Google Podcasts rolled out, I had high hopes after many years of working to help Android users find a decent podcast app through my company’s SubscribeOnAndroid.com product, which gained a lot of traction with shows encouraging Android listeners to follow and subscribe.

We are now making sure that the product is again viable for Android users, as Google has essentially created an extinction event for a percentage of Android podcast listeners. Google has adopted a scorched-earth policy and is brainwashing people into thinking that YouTube is the end-all for podcast consumption. Sure, some big shows are being discovered on its platform, but for the majority of podcasters, that could not be further from the truth.

Podcasters, by and large, hate the YouTube channel integration and the bevy of rules they must follow, which they do not have to follow in the traditional podcast space. The everyday challenges of being on that platform exist due to cancel culture and strict content monitoring.

Google will monetize all of your content, while the majority of podcasters will get nothing: no new audience, and certainly no money. It is nearly impossible to meet YouTube’s extensive watch-hour requirements to qualify for podcast monetization unless you make the break to a YouTube-first strategy and put in the work to build a YouTube channel.

The fabled “podcast” menu item on YouTube only has a handful of successful so-called podcasts; everyone else is nowhere to be found. The average podcaster has little chance of breaking out, as we learned from Spotify. They don’t care about the podcast space. All they care about is monetizing on the back of creators.

Google Podcasts, good riddance; we don’t need you anymore. Podcasters are taking back the podcasting space we created and expanding it through the Podcasting 2.0 initiative and great new apps at PodcastApps.com. We must rely on something other than gatekeepers to help the podcasting space grow and thrive. We have to do it on our own.

At the recent Podcast Movement Evolutions, YouTube presented to about 500 podcasters, and it was the most “I love me speech” I’ve heard in many years, and from the post-presentation reaction, they did not win many hearts and minds.

Meanwhile, Google Podcasts has abandoned millions of listeners, creating an extinction event in which podcasters lost 4% of their audience almost overnight.

We are already hearing screams of bloody murder from podcasters who had their heads down, as many do not follow the day-to-day news. On the one hand, it is a bad day for podcasting; on the other, we know whom we can trust going forward.


FCC To Vote To Restore Net Neutrality Rules, Reversing Trump



The U.S. Federal Communications Commission will vote to reinstate landmark net neutrality rules and assume new regulatory oversight of broadband internet that was rescinded under former President Donald Trump, the agency’s chair said.

According to Reuters, the FCC told advocates on Tuesday of the plan to vote on the final rule at its April 25 meeting. The commission voted 3-2 in October on the proposal to reinstate open internet rules adopted in 2015 and re-establish the commission’s authority over broadband internet.

Net neutrality refers to the principle that internet providers should enable access to all content and applications regardless of the source, and without favoring or blocking particular products or websites.

FCC Chair Jessica Rosenworcel confirmed the planned commission vote in an interview with Reuters. “The pandemic made it clear that broadband is an essential service, that every one of us – no matter who we are or where we live – needs it to have a fair shot at success in the digital age.”

Engadget reported that the Federal Communications Commission (FCC) plans to vote to restore net neutrality later this month. With Democrats finally holding an FCC majority in the final year of President Biden’s first term, the agency can fulfill a 2021 executive order from the President and bring back the Obama-era rules that the Trump administration’s FCC gutted in 2017.

The FCC plans to hold the vote during a meeting on April 25. Net neutrality treats broadband services as an essential resource under Title II of the Communications Act, giving the FCC greater authority to regulate the industry. It lets the agency prevent ISPs from engaging in anti-consumer behavior like unfair pricing, blocking or throttling content, and providing pay-to-play “fast lanes” to internet access.

Democrats had to wait three years to enact Biden’s 2021 executive order to reinstate the net neutrality rules passed in 2015 by President Obama’s FCC. The confirmation process of Biden FCC nominee Gigi Sohn for telecommunications regulator played no small part. She withdrew her nomination in March 2023 following what she called “unrelenting, dishonest and cruel attacks.”

ArsTechnica also reported that the Federal Communications Commission has scheduled an April 25 vote to restore net neutrality rules similar to the ones it introduced during the Obama era and repealed under former President Trump.

“After the prior administration abdicated authority over broadband services, the FCC has been handcuffed from acting to fully secure broadband networks, protect consumer data, and ensure the Internet remains fast, open, and fair,” FCC Chairwoman Jessica Rosenworcel said today. “A return to the FCC’s overwhelmingly popular and court-approved standard of net neutrality will allow the agency to serve once again as a strong consumer advocate of an open internet.”

According to ArsTechnica, while there hasn’t been a national standard since then-Chairman Ajit Pai led a repeal in 2017, Internet service providers still have to follow the net neutrality rules because California and other states impose their own similar regulations.

In my opinion, the FCC’s decision to vote on restoring net neutrality is an excellent idea. Anything that keeps ISPs from blocking or throttling content, or charging extra for access to certain websites, will make people less frustrated when using the internet.