Category Archives: Security

Zoom Expands to Smart Displays at Home



Zoom announced that they are rolling out support for Portal from Facebook, Amazon Echo Show, and Google Nest Hub Max. This will make interactive video meetings as easy as the touch of a button or the sound of your voice. Zoom also points out that this feature can be used to connect by video to family and friends.

I can see where this could be useful for people who have disabilities that make it difficult for them to use their hands. Being able to attend a Zoom meeting by using voice controls would make the experience more accessible. It could also be good for people who need help setting up Zoom on their computer or laptop, and who may find it difficult to log in when they need to.

There are many reasons not to trust Zoom. The company has a history of security failures, including a flaw that allowed Zoom to enable a user’s camera without the user’s permission. At the time, uninstalling Zoom did not fix the problem. In June of this year, Zoom decided to limit end-to-end encryption to paid users – a decision it later reversed for free accounts after backlash.

The reality is that there are many people who are working from home and who are required to use Zoom for work meetings. One advantage of using Zoom on a smart display is the option to take Zoom off your computer or laptop. A Zoom Meetings user could log into one of the smart devices that are supported by Zoom, and integrate their calendar, status, and meeting settings.

Zoom will be rolling out to Portal from Facebook in select regions in September. It will roll out to Amazon Echo Show devices in the United States later this year, beginning with the Echo Show 8. Zoom will roll out to the Google Nest Hub Max later this year as well.


Zoom will add End-to-End Encryption to Free Accounts



As you may recall, earlier this month Zoom revealed that it would only enable end-to-end encryption on paid accounts. The free accounts were not going to get that protection. After public outcry (and, I suspect, loss of customers), Zoom now says it will add end-to-end encryption for all users starting in July of 2020.

Since releasing the draft design of Zoom’s end-to-end encryption (E2EE) on May 22, we have engaged with civil liberties organizations, our CISO council, child safety advocates, encryption experts, government representatives, our own users, and others to gather feedback on this feature. We have also explored technologies to enable us to offer E2EE to all tiers of users.

Zoom has released an updated E2EE design on GitHub.

In its blog post, Zoom states that the updated E2EE design “balances the legitimate right of all users to privacy and the safety of users on our platform.” In addition, Zoom says the new design will enable them to “maintain the ability to prevent and fight abuse” on their platform.

There is a bit of a “catch”, however. Free/Basic users will not automatically have E2EE applied. In order to get it, these users must verify a phone number with Zoom via a text message.

In other words, users have to give Zoom more information before they can get E2EE protections. I’m not sure how many people trust Zoom with their phone number, considering that (as TechCrunch reported in April) Zoom routed some calls made in North America through China – along with encryption keys.

Zoom says the early beta of the E2EE feature will begin in July of 2020. Betas are known to be a bit wonky, as users discover “bugs” and other problems. I wouldn’t consider a beta of E2EE to offer much protection.

Hosts of Zoom calls will be able to toggle E2EE on or off on a per-meeting basis. Account administrators will also be able to enable and disable E2EE at the account and group level. To me, it sounds like people using a free Zoom account will be told they have E2EE protection (sometime after the beta ends). But they won’t really have it if their employer can turn it off.
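To make that concern concrete, here is a minimal sketch of how such a settings hierarchy could behave, assuming a simple “every level must allow it” rule. The class and function names are my own inventions for illustration – this is not Zoom’s actual settings API.

    from dataclasses import dataclass

    @dataclass
    class E2EEPolicy:
        account_allows_e2ee: bool  # set by the account administrator
        group_allows_e2ee: bool    # set at the group level
        host_enabled_e2ee: bool    # per-meeting toggle chosen by the host

    def meeting_is_e2ee(policy: E2EEPolicy) -> bool:
        """Hypothetical rule: a meeting is E2EE only if every level permits it."""
        return (policy.account_allows_e2ee
                and policy.group_allows_e2ee
                and policy.host_enabled_e2ee)

    # The host toggles E2EE on, but the account admin has disabled it:
    print(meeting_is_e2ee(E2EEPolicy(False, True, True)))  # False

Under that rule, the host’s toggle only matters if nothing above it has switched E2EE off.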


BBC Omits Central Database in Contact Tracing App Story



With the UK’s NHS Contact Tracing app being tested on the Isle of Wight this week, the BBC ran a story in the evening news today on how the app works. While the lovely graphics illustrated how the app worked, the story conveniently forgot to mention that all the contact data collected goes back to a central database.

Unlike much of the free world, which is adopting the Google-Apple decentralised approach, the NHS has gone ahead with its plans to base its tracking on a central database – there’s more at The Register and The Guardian. Simplistically, while both versions use Bluetooth proximity to detect others nearby, in the Google-Apple model only the phones know with whom you have been in contact. In the NHS version, the contact data is passed back to a central server for contact matching. This is manna from heaven for a UK government which has a reputation for increasing levels of privacy abuse.
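A toy sketch may help make the difference concrete. This is illustrative only – neither the Google-Apple protocol nor the NHS app works exactly like this, and the identifiers and function names are invented for the example.

    # Ephemeral IDs this phone has heard from nearby phones via Bluetooth.
    heard_nearby_ids = {"b7", "c4", "d9"}

    # --- Decentralised (Google-Apple style) ---
    # The server only publishes IDs of users who reported testing positive;
    # matching happens ON THE PHONE, so the server never learns who met whom.
    published_positive_ids = {"c4", "e2"}
    exposed = bool(heard_nearby_ids & published_positive_ids)
    print("Exposure detected locally:", exposed)  # True

    # --- Centralised (NHS style) ---
    # The phone uploads everything it heard, and the SERVER does the matching,
    # so the central database accumulates users' contact graphs.
    central_db: dict[str, set[str]] = {}

    def upload_contacts(user_id: str, heard_ids: set[str]) -> None:
        central_db[user_id] = heard_ids  # contact data now lives on the server

    upload_contacts("user123", heard_nearby_ids)

The privacy difference is all in where that matching step runs: on your handset, or on a server someone else controls.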

So it’s all very handy then that the BBC omitted to mention that all the app users’ contact tracing information, which will likely include location data, will be neatly shuffled back to a central server for review and matching by the NHS. Yes, it’s anonymised, but it doesn’t take much to figure out who someone is if, night after night, they go back to the same address.

The programme is here, but I’m not sure how long it will stay online or if it’s available worldwide. Look around the 7-minute-45-second mark. There’s no mention of the central database in either the narrative or the infographics.

Sorry, NHS, I’ll not be downloading your app. BBC, stop lying by omission.

Update 4/5/20: The BBC has produced a more balanced article here.


Zoom Apologizes for Security Failures



Zoom, the company that makes the software that so many people are using now that they have to work from home, posted A Message to Our Users. In it, Zoom Founder and CEO, Eric S. Yuan, apologizes for security failures and provides details about the things they are doing to fix the problems.

Zoom starts by pointing out that usage of Zoom “ballooned overnight”. This includes over 90,000 schools across 20 countries that have taken Zoom up on its offer to help children continue their education remotely. According to Zoom, at the end of December 2019 the platform hosted a maximum of approximately 10 million daily meeting participants, both free and paid. In March of 2020, it reached more than 200 million daily meeting participants.

For the past several weeks, supporting this influx of users has been a tremendous undertaking and our sole focus. We have strived to provide you with uninterrupted service and the same user-friendly experience that has made Zoom the video-conferencing platform of choice for enterprises around the world, while also ensuring platform safety, privacy, and security. However, we recognize that we have fallen short of the community’s – and our own – privacy and security expectations. For that, I am deeply sorry, and I want to share what we are doing about it.

Here is a quick look at what Zoom has done to fix things:

  •  Offering training sessions and tutorials, as well as interactive daily webinars to users. The goal is to help familiarize users with Zoom.
  •  On March 20, Zoom posted a blog post to help users address incidents of harassment (or so-called “Zoombombing”) on the platform by clarifying the protective features that can help prevent this.
  •  On March 27, Zoom removed the Facebook SDK from their iOS client and reconfigured it to prevent it from collecting unnecessary device information from Zoom users.
  •  On March 29, Zoom updated their Privacy Policy to be more clear and transparent about what data they collect and how it is used.
  •  On April 1, Zoom permanently removed the attendee attention tracker feature. They also permanently removed the LinkedIn Sales Navigator app after identifying unnecessary data disclosure by the feature.

These changes are a very good thing for Zoom to be doing. After unexpectedly gaining so many new users, the last thing the company would want is for people to leave Zoom over concerns about its problematic handling of privacy. It seems to me that the apology offered by Zoom Founder and CEO Eric S. Yuan is genuine, because the company did take action to improve Zoom for users.


The Sale of Corp.com Could Be Dangerous



Mike O’Connor bought the domain name corp.com in 1994. He is now interested in selling it, but has concerns that someone working with organized cybercriminals, or state-funded hacking groups, will buy it. If that happens, it could be devastating to corporations that failed to update the name of their Active Directory path.

Krebs on Security has a detailed blog about exactly what the problem is. In short, the issue is a problem known as “namespace collision”. It is a situation where domain names intended to be used exclusively on an internal company network end up overlapping with domains that can resolve normally on the open Internet.

If I’m understanding this correctly, it appears that the instructions that Microsoft gave years ago were not entirely understood by people who set up a corporation’s IT system.

Krebs states that, for early versions of Windows that supported Active Directory, the instructions gave “corp” as the default example of an Active Directory path. Unfortunately, many corporations quite literally named their Active Directory “corp” – and never bothered to change it to a domain name that they controlled. Then, these corporations built upon it – without renaming “corp” to something more secure.
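To see how an unqualified internal name can escape to the public Internet, here is a minimal sketch of DNS suffix expansion. It is a deliberate simplification of Windows name-devolution behaviour, not an exact reproduction, and the host and suffix names are made up.

    def candidate_names(short_name: str, search_suffixes: list[str]) -> list[str]:
        """Expand an unqualified name using the machine's DNS search list."""
        return [f"{short_name}.{suffix}" for suffix in search_suffixes]

    # Inside the office, "fileserver" resolves against the internal AD domain:
    print(candidate_names("fileserver", ["corp"]))      # ['fileserver.corp']

    # On a roaming laptop, suffix handling can append a public suffix, so the
    # same lookup lands on whoever owns corp.com -- which may then receive the
    # machine's traffic, and even its credentials:
    print(candidate_names("fileserver", ["corp.com"]))  # ['fileserver.corp.com']

The danger Krebs describes is exactly this fall-through: machines built around the bare “corp” path keep trying to reach internal resources even when they are outside the corporate network.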

I recommend that you read the article on Krebs on Security for full details. Tests were done to see what kind of traffic corp.com would receive. More than 375,000 Windows PCs tried to send corp.com information that it “had no business receiving”. Another test allowed corp.com to receive email, and the result of the experiment showed it was soon “raining credentials”.

The big concern right now is that “the bad guys” could buy corp.com and start harvesting the data that countless corporations unwittingly send to it.


U.S. Navy Bans TikTok from Government-Issued Mobile Devices



Reuters reported that the United States Navy banned TikTok from government-issued mobile devices because the app represented a “cybersecurity threat”.

A bulletin issued by the Navy on Tuesday showed up on a Facebook page serving military members, saying that users of government-issued mobile devices who had TikTok and did not remove the app would be blocked from the Navy Marine Corps Intranet.

Reuters reported that the Navy would not provide details on what dangers TikTok presents. Pentagon spokesman Lieutenant Colonel Uriah Orland said in a statement that the order was part of an effort to “address existing and emerging threats.”

This comes after two senior members of Congress, Senate Minority Leader Charles E. Schumer (D-N.Y.) and Senator Tom Cotton (R-Ark.), asked U.S. intelligence officials to determine whether TikTok posed “national security risks”. The two Senators sent a letter to Acting Director of National Intelligence Joseph Maguire, questioning TikTok’s data collection practices and whether the Chinese-owned social-networking app could be used to limit what U.S. users could see.

Reuters reported that last month U.S. Army cadets were instructed not to use TikTok, after Senators Schumer and Cotton raised security concerns about the Army using TikTok in their recruiting.

I find this interesting because, at a glance, TikTok appears to be an app designed to encourage creativity. People make short videos that are intended to be humorous. Many people find the videos to be amusing, and they pass them around on social media.

Now, it seems that TikTok could actually be a security threat, and one serious enough that various branches of the U.S. military are banning it from government-issued mobile devices. There appears to be concern about TikTok’s data collection practices. It is troubling that an app that appears to be lighthearted could potentially be dangerous.


Facebook and Twitter Removed Accounts Engaging in Inauthentic Behavior



Both Facebook and Twitter have announced that they have removed networks of accounts that were engaging in inauthentic behavior. The New York Times reported that the accounts used fake profile photos that were generated with artificial intelligence. The use of AI-generated fake photos appears to be a new tactic.

Facebook announced that they removed two unconnected networks of accounts, Pages, and Groups for engaging in foreign and government interference. The first operation originated in the country of Georgia and targeted domestic audiences. Facebook removed 39 Facebook accounts, 344 Pages, 13 Groups, and 22 Instagram accounts that were part of this operation.

The second operation originated in Vietnam and the US, and focused primarily on the US, with some activity aimed at Vietnam and at Spanish- and Chinese-speaking audiences globally. Facebook removed 610 accounts, 89 Facebook Pages, 156 Groups, and 72 Instagram accounts that were part of this network.

Some of these accounts used profile photos generated by artificial intelligence and masqueraded as Americans to join Groups and post the BL content. To evade our enforcement, they used a combination of fake and inauthentic accounts of local individuals in the US to manage Pages and Groups. The page admins and account owners typically posted memes and other content about US political news and issues including impeachment, conservative ideology, political candidates, elections, trade, family values, and freedom of religion.

Facebook said its investigation linked this coordinated group to Epoch Media Group. The New York Times reported that Epoch Media Group is the parent company of the Falun Gong-related publication and conservative news outlet The Epoch Times. The Epoch Media Group has denied that it is linked to the network.

Twitter announced it removed 5,929 accounts for violating Twitter’s platform manipulation policies. Their investigation attributed these accounts to “a significant state-backed information operation” originating in Saudi Arabia.

The accounts represent the core portion of a larger network of more than 88,000 accounts engaged in spammy behavior across a wide range of topics. Twitter’s investigations traced the source of the coordinated activity to Smaat, a social media marketing and management company based in Saudi Arabia.

It is very important to realize that you cannot believe everything you see on social media. An account that appears to have a realistic profile photo could actually be one that was generated by AI. Do some fact-checking before sharing things posted by accounts that are run by people you don’t know.