Category Archives: Privacy

Clubhouse Introduces Payments



Clubhouse, a new social media app that lets people join live audio chats with friends and strangers, has introduced “Payments”. This does not mean that people who use Clubhouse will have to pay a fee to keep using it. Instead, it gives users the ability to send money to someone else through Clubhouse.

Today, we’re thrilled to begin rolling out Payments – our first monetization feature for creators on Clubhouse. All users will be able to send payments today, and we’ll be rolling out the ability to receive payments in waves, starting with a small test group today. Our hope is to collect feedback, fine-tune the feature, and roll it out to everyone soon.


Here is how Clubhouse payments will work:

  • To send a payment in Clubhouse, just tap the profile of a creator (who has the feature enabled) and tap “Send Money.”
  • Enter the amount you would like to send them. The first time you do this, you’ll be asked to register a credit card or debit card.
  • 100% of the payment will go to the creator. The person sending the money will also be charged a small card processing fee, which will go directly to our payment processing partner, Stripe. Clubhouse will take nothing.
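The fee arithmetic in those steps is simple: the sender pays the tip amount plus the card processing fee, and the creator receives the full amount. A minimal sketch, assuming Stripe's standard U.S. card rate of 2.9% + $0.30 (Clubhouse has not published the exact rate it negotiated, so treat these numbers as illustrative):

```python
def total_charge(amount_to_creator: float,
                 fee_rate: float = 0.029,    # assumed standard Stripe rate
                 fee_fixed: float = 0.30) -> float:
    """Return what the sender pays: the full amount goes to the creator,
    and the card processing fee is added on top and goes to Stripe."""
    processing_fee = amount_to_creator * fee_rate + fee_fixed
    return round(amount_to_creator + processing_fee, 2)

print(total_charge(10.00))  # sender pays 10.59; creator receives 10.00
```

In other words, unlike many platforms, the fee is added on top of the payment rather than deducted from what the creator receives.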

Clubhouse makes it clear that this is the “first of many features that allow creators to get paid directly on Clubhouse”. In other words, if this works, Clubhouse might add more payment features. What will people pay for? I suppose Clubhouse is hoping to find that out.

Stripe is a well-known payment provider. Creators who earn money posting their work on Medium are paid through Stripe, and Substack uses it as well. I have no problem with Clubhouse’s choice of payment provider.

My concern is that Clubhouse has a history of not respecting users’ privacy. Users are pushed to upload the entire contact list from their phone.

Doing so gives Clubhouse information about who you are connected to. It will use that information to try to connect you with your contacts who are on Clubhouse. Will Oremus pointed out on Medium that if an ex or a harasser has you in their contacts, Clubhouse will know you are connected to that person and make recommendations on that basis.

What will Clubhouse do with your credit card information? Users will be giving it to Stripe – but they have to go through Clubhouse to do that.


Clubhouse Chats have been Breached and Streamed Online



A Clubhouse user was able to find a way to share Clubhouse chats outside of the iOS app. According to Bloomberg, Clubhouse “permanently banned” that user, and has installed new “safeguards”. It is unclear what those safeguards are, or how effective they will be, given what is known about Clubhouse.

Stanford Internet Observatory reported that Agora, a Shanghai-based startup, with U.S. headquarters in Silicon Valley, created a platform for other software companies to build upon. Clubhouse is one of the apps using Agora’s platform. According to the Stanford Internet Observatory, “If an app operates on Agora’s infrastructure, the end-user might have no idea.” In short, Agora hosts Clubhouse’s traffic.

Stanford Internet Observatory’s analysts observed Clubhouse’s web traffic using publicly available network analysis tools, such as Wireshark. Their analysis revealed that outgoing web traffic is directed to servers operated by Agora. Joining a channel generates a packet directed to Agora’s back-end infrastructure.

The packet contains metadata about each user, including their unique Clubhouse ID number and the room ID they are joining. That metadata is sent over the internet in plaintext (not encrypted), meaning that any third-party with access to a user’s network traffic can access it. In this manner, an eavesdropper might learn whether two users are talking to each other, for instance, by detecting whether those users are joining the same channel.
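To make the risk concrete: a passive observer who has captured those join-channel payloads (for example with Wireshark) could correlate users without breaking any encryption, because the metadata travels in plaintext. The sketch below is illustrative only; the field names ("uid", "channel") are hypothetical stand-ins for whatever the real packets contain.

```python
import json
from collections import defaultdict

# Hypothetical plaintext payloads recovered from captured traffic.
captured_payloads = [
    '{"uid": 1111, "channel": "room-42"}',
    '{"uid": 2222, "channel": "room-42"}',
    '{"uid": 3333, "channel": "room-7"}',
]

# Group the observed Clubhouse IDs by the room they joined.
rooms = defaultdict(set)
for raw in captured_payloads:
    packet = json.loads(raw)
    rooms[packet["channel"]].add(packet["uid"])

# Any room with more than one observed ID reveals that those
# users were very likely talking to each other.
for channel, uids in rooms.items():
    if len(uids) > 1:
        print(channel, sorted(uids))
```

The point is that no decryption step appears anywhere: the linkage falls out of simple bookkeeping over metadata the app sends in the clear.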

Stanford Internet Observatory made it clear why Agora’s hosting of Clubhouse matters:

Because Agora is based jointly in the U.S. and China, it is subject to People’s Republic of China (PRC) cybersecurity law. In a filing to the U.S. Securities and Exchange Commission, the company acknowledged that it would be required to “provide assistance and support in accordance with [PRC] law,” including protecting national security and criminal investigations. If the Chinese government determined that an audio message jeopardized national security, Agora would be legally required to assist the government in locating and storing it.

Robert Potter, Chief Executive Officer of Internet 2.0, posted an interesting thread about the Clubhouse situation on Twitter. He points out that it was not a “hack”. “A user set up a way to remotely share his login with the rest of the world. The real problem was that folks thought these conversations were ever private.”

In that thread, Robert Potter tweeted: “The end result of this whole clubhouse experience is that folks have put a lot of data online without considering the privacy implications. I’d strongly recommend people to build more encryption fenced communities for these sorts of conversations in the future.”

The more I learn about Clubhouse, the more I think it is a bad idea. I am aware that there are people who enjoy checking out the newest apps, especially if there is a social aspect to them. In my opinion, joining Clubhouse comes at too high a cost to people’s privacy.


Clubhouse Does Not Respect Your Privacy



Those of you who have started using Clubhouse may want to reconsider that decision. It turns out that Clubhouse does not respect your privacy at all. It also appears to be grabbing up people’s contact lists, which is not only an intrusion on their privacy but could also put some people in danger.

The Guardian reported that the live audio chats held in conversation rooms disappear. That said, Clubhouse doesn’t have any features that would prevent someone from live-blogging a conversation, or from recording it and uploading it to YouTube.

Will Oremus posted on Medium about his experience with Clubhouse: “When I granted the app access to my contacts, within hours it was nudging me to invite my former pediatrician, barber, and a health worker who once cared for my dying father to join Clubhouse – and sending me push notifications every time someone from my contacts signed up so I could welcome them via a private chat and ‘walk them in.’”

He pointed out that the contact list in your phone can include “old acquaintances, business associates, doctors, bosses, and people you went on a bad date with”:

“…When you upload those numbers, not only are you telling the app developer that you are connected to those people, but you’re also telling it that those people are connected to you – which they might or might not have wanted the app to know. For example, say you have an ex or even a harasser you’ve tried to block from your life, but they still have your number in their phone; if they upload their contacts, Clubhouse will know you’re connected to them and make recommendations on that basis…”

Mashable reported that it is difficult to delete a Clubhouse account. To do it, you have to email Clubhouse to request deletion, and it is unclear how long Clubhouse takes to process those requests. Mashable also reported that Clubhouse requires access to your entire contact list for the purpose of sending invites.

Personally, I’m going to stay far away from Clubhouse. To me, it feels very sketchy to push users to give Clubhouse access to the contact list on their phone. I find it impossible to trust an app that demands to use the information on something that, for many, is extremely personal.


Google Faces $5 Billion Lawsuit for Invading Privacy of Users



Google users might be surprised to learn that “private” mode doesn’t actually mean that Google won’t track your internet use. Reuters reported that there is a proposed class action lawsuit against Alphabet Inc. that is seeking at least $5 billion. The lawsuit alleges that Google has been illegally invading the privacy of millions by tracking their internet use through browsers set in “private” mode.

The case is Brown et al v Google LLC et al, which was filed in the U.S. District Court, Northern District of California. I tried to find more information about this lawsuit, but could not find anything. Typically, a controversial lawsuit, that has the potential to affect many people, is embedded somewhere online. This one does not appear to be.

Google calls its private mode “Incognito mode”. It would be reasonable to presume that a private mode would let users look up information that they would not be comfortable having Google know about. For example, people might choose to look up “intimate and potentially embarrassing things” (as the lawsuit states) in Incognito mode, believing that Google would not track it.

According to the complaint filed in the federal court in San Jose, California, Google gathers data through Google Analytics, Google Ad Manager, and other applications and website plug-ins, including smartphone apps, regardless of whether users click on Google-supported ads.

Google spokesman Jose Castaneda told Reuters: “As we clearly state each time you open a new incognito tab, websites might be able to collect information about your browsing activity”. As you may have guessed, Google intends to defend itself vigorously against the claims in the lawsuit.

If this situation troubles you, there are other options. Some people prefer to use more ethical search engines such as DuckDuckGo, or Ecosia (which plants a tree for every search). Mozilla’s Firefox can block certain types of trackers. Keep in mind, though, that nothing on the internet is 100% private.

The lawsuit seeks at least $5,000 in damages per user for violations of federal wiretapping and California privacy laws. It will be very interesting to see if this case gets anywhere. Whenever a gigantic company is the defendant in a lawsuit, I have concerns that the case will disappear before a court can hear it.


Zoom Limits End-To-End Encryption to Paid Users



Those of you who are using Zoom on a free account might want to stop doing that. According to The Next Web, Zoom calls made by people who have free accounts won’t be end-to-end encrypted. End-to-end encryption is only for paid users.

Bloomberg reported that Zoom’s sales “soared” in the three months that ended on April 30, 2020. This happened due to a wave of stay-at-home orders put in place to prevent the spread of COVID-19. Those who suddenly found themselves working from home, and students whose schools shifted to virtual learning, started using Zoom. Clearly, Zoom has the money to add end-to-end encryption for all users.

Choosing not to do that is strange, especially since children use Zoom to access education. Churches and groups that focus on therapy and/or addiction have also used Zoom for meetings.

We’ve all heard about “Zoom-bombing”, which got so bad that the U.S. Department of Justice warned that “Zoom-bombing” can result in fines or imprisonment. That is a problem, but I don’t see how cutting off free users from end-to-end encryption will solve it.

The Next Web reported a quote from Zoom CEO Eric Yuan. “Free users, for sure, we don’t want to give that [end-to-end encryption]. Because we want to work it together with the FBI and local law enforcement, in case some people use Zoom for bad purpose.”

Alex Stamos, whom The Next Web identified as a security consultant for Zoom, tweeted: “Zoom is dealing with some serious issues. When people disrupt meetings (sometimes with hate speech, CSAM, exposure to children and other illegal behaviors) that can be reported by the host. Zoom is working with law enforcement on the worst repeat offenders.”

From this, it sounds like Zoom believes that free users cause shenanigans. But, that paints all free users with the same brush, and that’s not acceptable. I think Zoom will lose customers over this decision. I don’t think parents of kids who use Zoom for school, people who attend church through Zoom, or those who access self-help meetings on Zoom, will feel comfortable having law enforcement monitoring their Zoom calls.


Tech Companies Urge Congress to Protect Search and Browsing Data



Several tech companies are asking the U.S. House of Representatives to pass legislation that would prevent the FBI from obtaining people’s browser history without a warrant. The tech companies include: Mozilla, Reddit, Twitter, and Patreon.

Mozilla Corporation, Engine, Reddit, Inc., Reform Government Surveillance, Twitter, i2Coalition, and Patreon sent a letter to Speaker Nancy Pelosi, Minority Leader Kevin McCarthy, Chairman of the U.S. House Committee on the Judiciary Jerry Nadler, and Ranking Member of the U.S. House Committee on the Judiciary. From the letter:

We urge you to explicitly prohibit the warrantless collection of internet search and browsing history when you consider the USA FREEDOM Reauthorization Act (H.R. 6172) next week. As leading internet businesses and organizations, we believe privacy and security are essential to our economy, our businesses, and the continued growth of the free and open internet. By clearly reaffirming these protections, Congress can help preserve user trust and facilitate the continued use of the internet as a powerful contributing force for our recovery.

This comes after the U.S. Senate voted down an amendment to the USA Patriot Act that would have created a tougher standard for government investigators to collect the web search and browsing histories of people in the United States.

It was a bipartisan amendment that would have required the Department of Justice to show probable cause when requesting approval from the Foreign Intelligence Surveillance Court to collect the data for counterterrorism or counterintelligence investigations.


Apple and Google Released a FAQ About their Coronavirus Tracker



Earlier this month, Google and Apple announced a joint effort to enable the use of Bluetooth technology to help governments and health agencies reduce the spread of the COVID-19 virus. As you may have expected, people had questions about how that contact tracing technology would work.

In response, Apple and Google released a Frequently Asked Questions PDF with more information. Some of it explains what contact tracing is, how it works, and how it can help slow the spread of COVID-19. It also covers how their contact tracing system will protect user privacy.

Here are some key points about user privacy:

  • Each user will have to make an explicit choice to turn on the technology. It can also be turned off by the user at any time by uninstalling the contact tracing application or turning off exposure notification in Settings.
  • This system does not collect location data from your device, and does not share the identities of other users with each other, or with Google or Apple. The user controls all data they want to share, and the decision to share it.
  • Bluetooth privacy-preserving beacons rotate every 10-20 minutes, to help prevent tracking.
  • Exposure notification is only done on device and under the user’s control. In addition people who test positive are not identified by the system to other users, or to Apple or Google.
  • The system is only used for contact tracing by public health authority apps.
  • Google and Apple can disable the exposure notification system on a regional basis when it is no longer needed.
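The beacon rotation in the list above is the core privacy mechanism. As a rough sketch of the idea (the actual Exposure Notification design derives Rolling Proximity Identifiers with HKDF and AES-128; the HMAC construction below is a simplified stand-in, not the real algorithm):

```python
import hashlib
import hmac
import os

ROTATION_SECONDS = 15 * 60   # beacons rotate roughly every 10-20 minutes

def beacon(daily_key: bytes, timestamp: int) -> bytes:
    """Derive the 16-byte beacon broadcast during one rotation interval.

    Without the daily key, successive beacons look like unrelated
    random values, so a passive listener cannot link broadcasts from
    different intervals back to the same phone.
    """
    interval = timestamp // ROTATION_SECONDS
    return hmac.new(daily_key, interval.to_bytes(8, "big"),
                    hashlib.sha256).digest()[:16]

key = os.urandom(16)                # the daily key stays on the device
b1 = beacon(key, 0)
b2 = beacon(key, ROTATION_SECONDS)  # next interval
print(b1 != b2)                     # beacons differ across intervals
```

Per the FAQ, only when a user reports a positive diagnosis are their daily keys published; other phones then re-derive the beacons locally and compare them against what they overheard, which is how matching can stay on the device.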

However, the FAQ also makes it clear that government health authorities will have access to the information facilitated by the app. “Access to the technology will be granted only to public health authorities. Their apps must meet specific criteria around privacy, security, and data control. The public health authority app will be able to access a list of beacons provided by users confirmed as positive for COVID-19 who have opted into sharing them. The system was also designed so that Apple and Google do not have access to information related to any specific individual.”

The FAQ states a user can choose to report a positive diagnosis of COVID-19 to their contact tracing app. The user’s most recent privacy-preserving beacons will be added to the positive diagnosis list shared by the public health authority so that others who came in contact with those beacons can be alerted. I don’t see how that can be done without the app being able to identify one individual user from another.

It comes down to how much you trust your government to use the information from the app to help people. This sort of health information could be used to deny people eligibility for health insurance coverage, or to discriminate against them in other ways. Personally, I am not going to use this app.