Category Archives: Privacy

Clearview AI Settles Lawsuit Brought By ACLU



You’ve probably heard of Clearview AI, a company that unethically captured more than 10 billion “faceprints” from people’s online photos across the globe – without consent to do so. The American Civil Liberties Union (ACLU) filed a lawsuit against Clearview AI that recently resulted in a settlement.

The lawsuit was filed by the ACLU in Illinois state court in January 2020, after The New York Times revealed that Clearview was building a secretive surveillance tool using biometric identifiers. Face recognition technology helped Clearview capture more than three billion faceprints, and counting, from images available online.

Illinois has a law called the Illinois Biometric Information Privacy Act (BIPA), which was adopted in 2008 to ensure that Illinois residents would not have their biometric identifiers, including faceprints, captured and used without their knowledge and permission.

The groups represented by the ACLU in the lawsuit – including survivors of domestic violence and sexual assault, undocumented immigrants, communities of color, and members of other vulnerable communities – asked the court to order Clearview to delete faceprints gathered from Illinois residents without their consent, and to stop capturing new faceprints unless it complies with BIPA’s consent procedures.

The New York Times reported that Clearview AI agreed to settle the lawsuit brought by the ACLU. The settlement bars Clearview from selling its database – which it said contains more than 20 billion facial photos – to most private individuals and businesses in the country. It can still sell that database to federal and state agencies.

The New York Times reported the following:

The agreement is the latest blow to the New York-based start-up, which built its facial recognition software by scraping photos from the web and popular sites, such as Facebook, LinkedIn, and Instagram. Clearview then sold its software to local police departments and government agencies, including the F.B.I. and Immigration and Customs Enforcement.

According to The New York Times, Clearview’s technology has been deemed illegal in Canada, Australia, and parts of Europe for violating privacy laws. Clearview also faces a provisional $22.6 million fine in Britain, as well as a 20 million-euro fine from Italy’s data protection agency.

Nathan Freed Wessler, a deputy director with the ACLU’s Speech, Privacy, and Technology Project, said in a statement to The New York Times: “Clearview can no longer treat people’s unique biometric identifiers as an unrestricted source of profits. Other companies would be wise to take note, and other states should follow Illinois’ lead in enacting strong biometric privacy laws.”

I find it extremely troubling that Clearview appeared to think it was acceptable to secretly gather photos of people’s faces – without their permission. It is even worse that Clearview was selling those photos in an effort to enhance its own profits.


Brave Browser Is Rolling Out a De-AMP Feature To Enhance Privacy



Brave announced that they are rolling out a new feature called De-AMP, which allows Brave users to bypass Google-hosted AMP pages and instead visit the content publisher’s site directly. Brave states that AMP harms users’ privacy, security, and internet experience – and, just as bad, helps Google further monopolize and control the direction of the Web.

Brave will protect users from AMP in several ways. Where possible, De-AMP will rewrite links and URLs to prevent users from visiting AMP pages altogether. And in cases where that is not possible, Brave will watch as pages are being fetched and redirect users away from AMP pages before the page is even rendered, preventing AMP/Google code from being loaded and executed.
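The link-rewriting step can be illustrated with a small sketch. This is not Brave’s actual implementation – just a hedged example of how a Google AMP cache URL (the common `google.com/amp/s/<publisher>/<path>` pattern) can be rewritten back to the publisher’s own address; a real de-AMPing feature would also inspect the fetched page for a `rel="canonical"` link when the URL alone is not enough.

```python
from urllib.parse import urlparse, urlunparse

def de_amp_url(url: str) -> str:
    """Rewrite a Google AMP cache URL to the publisher's own URL.

    Only handles the google.com/amp/s/<publisher>/<path> pattern;
    anything else is returned unchanged.
    """
    parsed = urlparse(url)
    if parsed.netloc.endswith("google.com") and parsed.path.startswith("/amp/"):
        # Strip the "/amp/" prefix; a leading "s/" marks an HTTPS publisher.
        rest = parsed.path[len("/amp/"):]
        scheme = "https" if rest.startswith("s/") else "http"
        if rest.startswith("s/"):
            rest = rest[len("s/"):]
        host, _, path = rest.partition("/")
        return urlunparse((scheme, host, "/" + path, "", parsed.query, ""))
    return url  # not an AMP cache URL; leave it alone
```

So `de_amp_url("https://www.google.com/amp/s/example.com/article")` yields `https://example.com/article`, sending the visitor to the publisher rather than to Google’s cache.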

The Verge reported that Brave stated: “In practice, AMP is harmful to users and to the Web at large”. Brave also explained that AMP gives Google even more knowledge of users’ browsing habits, confuses users, and can often be slower than normal webpages. It also warned that the next version of AMP – so far called AMP 2.0 – will be even worse.

Brave pointed out why AMP is harmful:

AMP is harmful to privacy: AMP gives Google an even broader view of which pages people view on the Web, and how people interact with them. AMP encourages developers to more tightly integrate with Google servers and systems, and penalizes publishers with decreased search rankings and placements if they don’t, further allowing Google to track and profile users.

AMP is bad for security: By design, AMP confuses users about what site they’re interacting with. Users think they’re interacting with the publisher, when in actuality the user is still within Google’s control.

AMP furthers the monopolization of the Web: AMP encourages more of the Web to be served from Google’s servers, under Google’s control and arbitrary non-standards. It also allows Google to require pages to be built in ways that benefit Google’s advertising systems.

AMP is bad for performance and usability: Though Google touts AMP as better for performance, internally, Google knows that “AMP only improves the ‘median of performance’ and AMP pages can actually load slower than other publisher speed optimization techniques”.

The Verge explained that AMP was controversial from the beginning and smelled to some like Google trying to exert even more control over the web. Over time, more companies and users grew concerned about that control and chafed at the idea that Google would prioritize AMP pages in search results.

DuckDuckGo tweeted: “NEW: our apps & extensions now protect against AMP tracking. When you load or share a Google AMP page anywhere from DuckDuckGo apps (iOS/Android/Mac) or extensions (Firefox/Chrome) the original publisher’s webpage will be used in place of the Google AMP version.”

Personally, I think that the more privacy online, the better the internet will be for all of us. It is great that Brave and DuckDuckGo are offering people simple solutions to prevent Google from tracking them all over the web. It is very sketchy of Google to trick users into thinking they are on the website they searched for by swapping it with an AMP page.


DuckDuckGo Introduces Beta Launch of DuckDuckGo for Mac



DuckDuckGo announced the beta launch of DuckDuckGo for Mac. Like their mobile app, DuckDuckGo for Mac is an all-in-one privacy solution for everyday browsing with no complicated settings, just a seamless experience. DuckDuckGo for Windows is coming soon.

Using an app designed to protect your privacy by default not only reduces invasive tracking, it also speeds up browsing and eliminates many everyday annoyances like cookie consent pop-ups.

DuckDuckGo for Mac gives you privacy by default. With one download you get their built-in private search engine, powerful tracker blocker, “new” cookie pop-up protection on approximately 50% of sites (with that % growing significantly throughout beta), Fire Button (one-click data clearing), email protection and more – all for free. No complicated privacy settings, just simple privacy protection that works by default.

DuckDuckGo for Mac is really fast! By using your computer’s built-in website rendering engine (the same one Safari uses), and by blocking trackers before they load (unlike all the major browsers), you’ll get really fast browsing. It is already faster than Chrome on some graphics benchmarks (using MotionMark 1.2), and as an additional benefit, by blocking trackers, DuckDuckGo uses about 60% less data than Chrome.

DuckDuckGo for Mac is built for security. Their built-in Smarter Encryption ensures you navigate to the encrypted (HTTPS) version of a website more often, and their tracking blocker means less exposure to third-party scripts that could try to access your data. And they design their product so that all in-app data, like history, bookmarks, and passwords, by default are only stored locally on your device and aren’t accessible to DuckDuckGo.
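The “Smarter Encryption” idea – steering you to the HTTPS version of a site when it is known to work – can be sketched in a few lines. The allow-list below is a hypothetical stand-in for DuckDuckGo’s actual dataset, which covers millions of domains verified to serve HTTPS correctly; this is just an illustration of the upgrade logic, not DuckDuckGo’s code.

```python
from urllib.parse import urlparse, urlunparse

# Hypothetical allow-list standing in for the real Smarter Encryption
# dataset of domains known to serve HTTPS correctly.
HTTPS_UPGRADABLE = {"example.com", "www.example.com"}

def upgrade_to_https(url: str) -> str:
    """Rewrite an http:// URL to https:// if the host is on the allow-list."""
    parsed = urlparse(url)
    if parsed.scheme == "http" and parsed.netloc in HTTPS_UPGRADABLE:
        return urlunparse(parsed._replace(scheme="https"))
    return url  # already HTTPS, or host not known to support it
```

Upgrading only hosts on a vetted list avoids breaking the minority of sites that still serve different (or no) content over HTTPS.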

The beta of DuckDuckGo for Mac has a waitlist. The company is letting new people off the waitlist, and says the sooner you join, the sooner you’ll get it. You won’t need to share any personal information to join. Instead you’ll secure your place in line with a date and time that exists solely on your device. DuckDuckGo will notify you when they are ready for you to join the beta.

Right now, the beta is Mac only. The Windows beta is coming. DuckDuckGo recommends Windows users follow them on Twitter for updates.

Personally, I would like to check out DuckDuckGo for Mac after it launches. I’m not a fan of having my data taken without my permission by websites that want to use it for their own financial benefit.

Doing that is very likely illegal for them, because I live in California, which passed the California Privacy Rights Act (CPRA) in 2020. There have been countless times when a website intentionally makes it extremely difficult to shut off its tracking and cookies. I would love to see DuckDuckGo for Mac shut all that garbage off for me!


WhatsApp Lets Users Control How Long Messages Stick Around



WhatsApp announced that they are providing users with more options to control their messages and how long they stick around, with default disappearing messages and multiple durations. This makes sense considering that WhatsApp introduced disappearing messages last year, and a way for photos and videos to immediately disappear after being viewed once.

WhatsApp users will now have the option to turn on disappearing messages by default for all new chats. When enabled, all new one-on-one chats you or another person start will be set to disappear at your chosen duration, and WhatsApp has added a new option when creating a group chat that lets you turn it on for groups you create. This new feature does not change or delete any of your existing chats.

Two new durations for disappearing messages have been added: 24 hours and 90 days, as well as the existing option of 7 days. If you want to enable the new durations, or change the ones you have, go to Privacy and select “Default Message Timer”.

Based on WhatsApp’s blog post, it appears these decisions may have been influenced by the pandemic. The company wrote, “Living apart from family and friends for over a year has made it clearer than ever that just because we can’t physically talk in person, it doesn’t mean we should have to sacrifice the privacy of our personal conversations.”

The WhatsApp blog also states: “We believe disappearing messages along with end-to-end encryption are two crucial features that define what it means to be a private messaging service today, and bring us one step closer to the feeling of an in-person conversation.”

The Verge pointed out that WhatsApp is owned by Meta (along with Facebook and Instagram). If you don’t currently use Facebook products, you might want to consider if you trust Meta enough to do the right thing with your data.


Google’s FLoC is Unpopular with Other Browser Creators



Google introduced a new piece of technology called “Federated Learning of Cohorts” (FLoC). According to Google, FLoC “protects your privacy” because it “allows you to remain anonymous as you browse across websites and also improves privacy by allowing publishers to present relevant ads to large groups (called Cohorts)”.

EFF has launched “Am I FLoCed?” It is a new site that will tell you whether your Chrome browser has been “turned into a guinea pig for Federated Learning of Cohorts or FLoC, Google’s latest targeted advertising experiment.”

Google’s FLoC is unpopular with other browser creators. Brave posted a blog titled “Why Brave Disables FLoC”:

“Brave opposes FLoC, along with any other feature designed to share information about you and your interests without your fully informed consent. To protect Brave users, Brave has removed FLoC in the Nightly version of both Brave for desktop and Android. The privacy-affecting aspects of FLoC have never been enabled in Brave releases; the additional implementation details of FLoC will be removed from all Brave releases with this week’s stable release. Brave is also disabling FLoC on our websites, to protect Chrome users learning about Brave.”

Vivaldi posted a blog post titled: “No, Google! Vivaldi users will not get FLoC’ed.” In the blog post, Vivaldi makes it clear it does not support FLoC, which they call “a privacy-invasive tracking technology”. From the blog post:

“The FLoC experiment does not work in Vivaldi. It relies on some hidden settings that are not enabled in Vivaldi… Although Vivaldi uses the Chromium engine, we modify the engine in many ways to keep the good parts but make it safe for users; we do not allow Vivaldi to make that sort of call to Google.”

DuckDuckGo posted a blog in which it points out that you can use the DuckDuckGo Chrome extension to block FLoC’s tracking.

Mozilla gave The Verge a statement that included: “We are currently evaluating many of the privacy preserving advertising proposals, including those put forward by Google, but have no current plans to implement any of them at this time. We don’t buy into the assumption that the industry needs billions of data points about people, that are collected and shared without their understanding, to serve relevant advertising.”

Opera gave The Verge a statement that included: “While we and other browsers are discussing new and better privacy-preserving advertising alternatives to cookies including FLoC, we have no current plans to enable features like this in the Opera browsers in their current form”.

The fact that so many browser creators have decided against enabling Google’s FLoC is significant. It suggests that FLoC is really bad for users, and that Google should not impose it on people who use Chrome.


Clubhouse Introduces Payments



Clubhouse, a new social media app that allows people to have live audio-chats with friends and strangers, has introduced “Payments”. This does not mean that people who use Clubhouse will have to pay a fee in order to keep using it. Instead, it gives users the ability to send money to someone else through Clubhouse.

In its announcement, Clubhouse wrote: “Today, we’re thrilled to begin rolling out Payments – our first monetization feature for creators on Clubhouse. All users will be able to send payments today, and we’ll be rolling out the ability to receive payments in waves, starting with a small test group today. Our hope is to collect feedback, fine-tune the feature, and roll it out to everyone soon.”


Here is how Clubhouse payments will work:

  • To send a payment in Clubhouse, just tap the profile of a creator (who has the feature enabled) and tap “Send Money.”
  • Enter the amount you would like to send them. The first time you do this, you’ll be asked to register a credit card or debit card.
  • 100% of the payment will go to the creator. The person sending the money will also be charged a small card processing fee, which will go directly to our payment processing partner, Stripe. Clubhouse will take nothing.

Clubhouse makes it clear that this is the “first of many features that allow creators to get paid directly on Clubhouse”. In other words, if this works, Clubhouse might add more payment features. What will people pay for? I suppose Clubhouse is hoping to find that out.

Stripe is a well known payment provider. Creators who post their work on Medium, and make money from doing so, are paid through Stripe. Substack also uses it. I have no problem with Clubhouse’s choice of payment provider.

My concern is that Clubhouse has a history of not respecting users’ privacy. Users are pushed to upload their entire contact list from their phone.

Doing so gives Clubhouse information about who you are connected to. It will use that information to try to connect you to your contacts that are on Clubhouse. Will Oremus pointed out on Medium that if you have an ex or harasser who has you in their contacts, Clubhouse will know you are connected to that person and make recommendations on that basis.

What will Clubhouse do with your credit card information? Users will be giving it to Stripe – but they have to go through Clubhouse to do that.


Clubhouse Chats have been Breached and Streamed Online



A Clubhouse user was able to find a way to share Clubhouse chats outside of the iOS app. According to Bloomberg, Clubhouse “permanently banned” that user, and has installed new “safeguards”. It is unclear what those safeguards are, or how effective they will be, given what is known about Clubhouse.

Stanford Internet Observatory reported that Agora, a Shanghai-based startup, with U.S. headquarters in Silicon Valley, created a platform for other software companies to build upon. Clubhouse is one of the apps using Agora’s platform. According to the Stanford Internet Observatory, “If an app operates on Agora’s infrastructure, the end-user might have no idea.” In short, Agora hosts Clubhouse’s traffic.

Stanford Internet Observatory’s analysts observed Clubhouse’s web traffic using publicly available network analysis tools, such as Wireshark. Their analysis revealed that outgoing web traffic is directed to servers operated by Agora. Joining a channel generates a packet directed to Agora’s back-end infrastructure.

The packet contains metadata about each user, including their unique Clubhouse ID number and the room ID they are joining. That metadata is sent over the internet in plaintext (not encrypted), meaning that any third-party with access to a user’s network traffic can access it. In this manner, an eavesdropper might learn whether two users are talking to each other, for instance, by detecting whether those users are joining the same channel.
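To make the eavesdropping risk concrete, here is a minimal sketch of what a passive observer could do with captured payloads. The field names (`uid`, `channel`) and the JSON framing are hypothetical, not Agora’s actual wire format – the point is that when metadata travels in plaintext, the observer needs nothing more than a parser to infer which users are in the same room.

```python
import json

def extract_join_metadata(payload: bytes):
    """Parse user/room identifiers from a captured plaintext payload.

    Field names are hypothetical; returns None for anything unparseable.
    """
    try:
        record = json.loads(payload.decode("utf-8"))
    except (UnicodeDecodeError, ValueError):
        return None
    if isinstance(record, dict) and "uid" in record and "channel" in record:
        return {"user_id": record["uid"], "room_id": record["channel"]}
    return None

def users_by_room(payloads):
    """Group observed user IDs by room ID to infer who is talking to whom."""
    rooms = {}
    for payload in payloads:
        meta = extract_join_metadata(payload)
        if meta:
            rooms.setdefault(meta["room_id"], set()).add(meta["user_id"])
    return rooms
```

If the same metadata were sent over an encrypted channel, the captured payloads would be ciphertext and this kind of grouping would be impossible for a network observer.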

Stanford Internet Observatory made it clear why Agora’s hosting of Clubhouse matters:

Because Agora is based jointly in the U.S. and China, it is subject to People’s Republic of China (PRC) cybersecurity law. In a filing to the U.S. Securities and Exchange Commission, the company acknowledged that it would be required to “provide assistance and support in accordance with [PRC] law,” including protecting national security and criminal investigations. If the Chinese government determined that an audio message jeopardized national security, Agora would be legally required to assist the government in locating and storing it.

Chief Executive Officer of Internet 2.0, Robert Potter, posted an interesting thread about the Clubhouse situation on Twitter. He points out that it was not a “hack”. “A user set up a way to remotely share his login with the rest of the world. The real problem was that folks thought these conversations were ever private.”

In that thread, Robert Potter tweeted: “The end result of this whole clubhouse experience is that folks have put a lot of data online without considering the privacy implications. I’d strongly recommend people to build more encryption fenced communities for these sorts of conversations in the future.”

The more I learn about Clubhouse, the more I think it is a bad idea. I am aware that there are people who enjoy checking out the newest apps, especially if there is a social aspect to them. In my opinion, joining Clubhouse comes at too high a cost to people’s privacy.