Changes in WhatsApp’s Privacy Policy

If you’re a WhatsApp user, pay attention to the changes in the privacy policy that you’re being forced to agree to.

In 2016, WhatsApp gave users a one-time ability to opt out of having account data turned over to Facebook. Now, an updated privacy policy is changing that. Come next month, users will no longer have that choice. Some of the data that WhatsApp collects includes:

  • User phone numbers
  • Other people’s phone numbers stored in address books
  • Profile names
  • Profile pictures
  • Status messages, including when a user was last online
  • Diagnostic data collected from app logs

Under the new terms, Facebook reserves the right to share collected data with its family of companies.

Digital thought clones manipulate real-time online behavior

In The Social Dilemma, the Netflix documentary that has been in the news recently for its radical revelations, former executives at major technology companies like Facebook, Twitter, and Instagram, among others, share how their ex-employers have developed sophisticated algorithms that not only predict users’ actions but also know which content will keep them hooked on their platforms.


It is well known that technology companies prey on their users’ digital activities without their consent or awareness. But Associate Professor Jon Truby and Clinical Assistant Professor Rafael Brown at the Centre for Law and Development at Qatar University have pulled back the curtain on another element that technology companies are pursuing to the detriment of people’s lives, and investigated what we can do about it.

“We had been working on the digital thought clone paper a year before the Netflix documentary aired. So, we were not surprised to see the story revealed by the documentary, which affirms what our research has found,” says Prof Brown, one of the co-authors.

Their paper identifies “digital thought clones,” which act as digital twins that constantly collect personal data in real time, then analyze it to predict and manipulate people’s decisions.

Activity from apps, social media accounts, gadgets, GPS tracking, online and offline behavior and activities, and public records are all used to formulate what they call a “digital thought clone”.

Processing personalized data to test strategies in real-time

The paper defines a digital thought clone as “a personalized digital twin consisting of a replica of all known data and behavior on a specific living person, recording in real-time their choices, preferences, behavioral trends, and decision making processes.”

“Currently existing or future artificial intelligence (AI) algorithms can then process this personalized data to test strategies in real-time to predict, influence, and manipulate a person’s consumer or online decisions using extremely precise behavioral patterns, and determine which factors are necessary for a different decision to emerge and run all kinds of simulations before testing it in the real world,” says Prof Truby, a co-author of the study.

An example is predicting whether a person will make the effort to compare online prices for a purchase, and if they do not, charging a premium for their chosen purchase. This digital manipulation reduces a person’s ability to make choices freely.

Outside of consumer marketing, imagine if financial institutions used digital thought clones to make financial decisions, such as whether a person would repay a loan.

What if insurance companies judged medical insurance applications by predicting the likelihood of future illnesses based on diet, gym membership, the distance applicants walk in a day–based on their phone’s location history–and their social circle, as generated by their phone contacts and social media groups, and other variables?

The authors suggest that the current views on privacy, where information is treated either as a public or private matter or viewed in contextual relationships of who the information concerns and impacts, are outmoded.

A human-centered framework is needed

A human-centered framework is needed, in which a person can decide at the very beginning of their relationship with digital services whether their data should be protected forever or only until they freely waive that protection. This rests on two principles: the ownership principle, which holds that data belongs to the person and that certain data is inherently protected; and the control principle, which requires that individuals be allowed to change what type of data is collected and whether it should be stored. In this framework, people are asked beforehand whether their data can be shared with an unauthorized entity.

The European Union’s landmark GDPR and California’s CCPA of 2018 can serve as a foundation for governments everywhere to legislate on digital thought clones and all that they entail.

But the authors also raise critical moral and legal questions over the status of these digital thought clones. “Does privacy for humans mean their digital clones are protected as well? Are users giving informed consent to companies if their terms and conditions are couched in misleading language?” asks Prof Truby.

A legal distinction must be made between the digital clone and the biological source. Whether the digital clone can be said to have attained consciousness will be relevant to the inquiry but far more important would be to determine whether the digital clone’s consciousness is the same as that of the biological source.

The world is at a crossroads: should it continue to do nothing and allow for total manipulation by the technology industry or take control through much-needed legislation to ensure that people are in charge of their digital data? It’s not quite a social dilemma.

Manipulating Systems Using Remote Lasers

Many systems are vulnerable:

Researchers at the time said that they were able to launch inaudible commands by shining lasers — from as far as 360 feet — at the microphones on various popular voice assistants, including Amazon Alexa, Apple Siri, Facebook Portal, and Google Assistant.

[…]

They broadened their research to show how light can be used to manipulate not only a wider range of digital assistants — including the Amazon Echo 3 — but also sensing systems found in medical devices, autonomous vehicles, industrial systems and even space systems.

The researchers also delved into how the ecosystem of devices connected to voice-activated assistants — such as smart locks, home switches and even cars — suffers from common security vulnerabilities that can make these attacks even more dangerous. The paper shows how using a digital assistant as the gateway can allow attackers to take control of other devices in the home: once an attacker takes control of a digital assistant, he or she can have the run of any device connected to it that also responds to voice commands. Indeed, these attacks can get even more interesting if these devices are connected to other aspects of the smart home, such as smart door locks, garage doors, computers and even people’s cars, they said.

Another article covers the research as well. The researchers will present their findings at Black Hat Europe — which, of course, will be happening virtually — on December 10.

Holiday gifts getting smarter, but creepier when it comes to privacy and security

A Hamilton Beach Smart Coffee Maker that could eavesdrop, an Amazon Halo fitness tracker that measures the tone of your voice, and a robot-building kit that puts your kid’s privacy at risk are among the 37 creepiest holiday gifts of 2020 according to Mozilla.


Researchers reviewed 136 popular connected gifts available for purchase in the United States across seven categories: toys & games; smart home; entertainment; wearables; health & exercise; pets; and home office.

They combed through privacy policies, pored over product and app features, and quizzed companies in order to answer questions like: Can this product’s camera, microphone, or GPS snoop on me? What data does the device collect and where does it go? What is the company’s known track record for protecting users’ data?

The guide includes a “Best Of” category, which singles out products that get privacy and security right, while a “Privacy Not Included” warning icon alerts consumers when a product has especially problematic privacy practices.

Meeting minimum security standards

It also identifies which products meet Mozilla’s Minimum Security Standards, such as using encryption and requiring users to change the default password if a password is needed. For the first time, Mozilla also notes which products use AI to make decisions about consumers.

“Holiday gifts are getting ‘smarter’ each year: from watches that collect more and more health data, to drones with GPS, to home security cameras connected to the cloud,” said Ashley Boyd, Mozilla’s Vice President of Advocacy.

“Unfortunately, these gifts are often getting creepier, too. Poor security standards and privacy practices can mean that your connected gift isn’t bringing joy, but rather prying eyes and security vulnerabilities.”

Boyd added: “Privacy Not Included helps consumers prioritize privacy and security when shopping. The guide also keeps companies on their toes, calling out privacy flaws and applauding privacy features.”

What are the products?

37 products were branded with a “Privacy Not Included” warning label, including: Amazon Halo, Dyson Pure Cool, Facebook Portal, Hamilton Beach Smart Coffee Maker, Livescribe Smartpens, NordicTrack T Series Treadmills, Oculus Quest 2 VR Sets, Schlage Encode Smart WiFi Deadbolt, Whistle Go Dog Trackers, Ubtech Jimu Robot Kits, Roku Streaming Sticks, and The Mirror.

22 products were awarded “Best Of” for exceptional privacy and security practices, including: Apple Homepod, Apple iPad, Apple TV 4K, Apple Watch 6, Apple Air Pods & Air Pods Pro, Arlo Security Cams, Arlo Video Doorbell, Eufy Security Cams, Eufy Video Doorbell, iRobot Roomba i Series, iRobot Roomba s Series, Garmin Forerunner Series, Garmin Venu watch, Garmin Index Smart Scale, Garmin Vivo Series, Jabra Elite Active 85T, Kano Coding Kits, Withings Thermo, Withings Body Smart Scales, Petcube Play 2 & Bites 2, Sonos SL One, and Findster Duo+ GPS pet tracker.

A handful of leading brands, like Apple, Garmin, and Eufy, are excelling at improving privacy across their product lines, while other top companies, like Amazon, Huawei, and Roku, are consistently failing to protect consumers.

Apple products don’t share or sell your data. They take special care to make sure your Siri requests aren’t associated with you. And after facing backlash in 2019, Apple no longer automatically opts users in to human voice review.

Eufy Security Cameras are especially trustworthy. Footage is stored locally rather than in the cloud, and is protected by military-grade encryption. Further, Eufy doesn’t sell their customer lists.

Roku is a privacy nightmare. The company tracks just about everything you do — and then shares it widely. Roku shares your personal data with advertisers and other third parties, it targets you with ads, it builds profiles about you, and more.

Amazon’s Halo Fitness Tracker is especially troubling. It’s packed full of sensors and microphones. It uses machine learning to measure the tone, energy, and positivity of your voice. And it asks you to take pictures of yourself in your underwear so it can track your body fat.

Tech companies want a monopoly on your smart products

Big companies like Amazon and Google are offering a family of networked devices, pushing consumers to buy into one company. For instance: Nest users now have to migrate over to a Google-only platform. Google is acquiring Fitbit.

And Amazon recently announced it’s moving into the wearable technology space. These companies realize that the more data they have on people’s lives, the more lucrative their products can be.

Products are getting creepier, even as they get more secure

Many companies — especially big ones like Google and Facebook — are improving security. But that doesn’t mean those products aren’t invasive. Smart speakers, watches, and other devices are reaching farther into our lives, monitoring our homes, bodies, and travel. And often, consumers don’t have insight or control over the data that’s collected.

Connected toys and pet products are particularly creepy. Amazon’s KidKraft Kitchen & Market is made for kids as young as three — but there’s no transparency into what data it collects. Meanwhile, devices like the Dogness iPet Robot put a mobile, internet-connected camera and microphone in your house — without using encryption.

The pandemic is reshaping some data sharing for the better. Products like the Oura Ring and Kinsa smart thermometer can share anonymized data with researchers and scientists to help track public health and coronavirus outbreaks. This is a positive development — data sharing for the public interest, not just profit.

Study shows which messengers leak your data, drain your battery, and more


Link previews are a ubiquitous feature found in just about every chat and messaging app, and with good reason. They make online conversations easier by providing images and text associated with the file that’s being linked.

Unfortunately, they can also leak our sensitive data, consume our limited bandwidth, drain our batteries, and, in one case, expose links in chats that are supposed to be end-to-end encrypted. Among the worst offenders, according to research published on Monday, were messengers from Facebook, Instagram, LinkedIn, and Line. More about that shortly. First a brief discussion of previews.

When a sender includes a link in a message, the app will display it in the conversation along with text (usually a headline) and images that accompany the link.

For this to happen, the app itself—or a proxy designated by the app—has to visit the link, open the file there, and survey what’s in it. This can open users to attacks. The most severe are those that can download malware. Other forms of malice might be forcing an app to download files so big they cause the app to crash, drain batteries, or consume limited amounts of bandwidth. And in the event the link leads to private materials—say, a tax return posted to a private OneDrive or DropBox account—the app server has an opportunity to view and store it indefinitely.
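
To make the trade-off concrete, here is a minimal sketch (not taken from the study) of how a server-side preview generator could behave more safely: it caps how many bytes it downloads, never executes JavaScript, and extracts only static Open Graph metadata. The 1 MB cap and the use of the requests library are illustrative assumptions.

```python
import re
import requests  # assumed third-party HTTP client, for illustration only

MAX_PREVIEW_BYTES = 1_000_000  # illustrative 1 MB cap, unlike apps that copy whole files

def fetch_preview(url: str) -> dict:
    """Download at most MAX_PREVIEW_BYTES of a page and pull out Open Graph metadata.

    No JavaScript is executed; only static HTML meta tags are inspected.
    """
    resp = requests.get(url, stream=True, timeout=5)
    body = b""
    for chunk in resp.iter_content(chunk_size=8192):
        body += chunk
        if len(body) >= MAX_PREVIEW_BYTES:
            break  # stop early instead of copying a multi-gigabyte file
    html = body.decode(resp.encoding or "utf-8", errors="replace")

    def og(prop: str) -> str | None:
        # naive lookup for <meta property="og:..." content="...">
        match = re.search(
            rf'<meta[^>]+property=["\']og:{prop}["\'][^>]+content=["\']([^"\']+)["\']',
            html,
            re.IGNORECASE,
        )
        return match.group(1) if match else None

    return {"title": og("title"), "description": og("description"), "image": og("image")}
```

A preview built this way still reveals to the server that the link was fetched, but it avoids the unbounded downloads and script execution described above.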

The researchers behind Monday’s report, Talal Haj Bakry and Tommy Mysk, found that Facebook Messenger and Instagram were the worst offenders. As the chart below shows, both apps download and copy a linked file in its entirety—even if it’s gigabytes in size. Again, this may be a concern if the file is something the users want to keep private.

Link Previews: Instagram servers download any link sent in Direct Messages even if it’s 2.6GB.

It’s also problematic because the apps can consume vast amounts of bandwidth and battery reserves. Both apps also run any JavaScript contained in the link. That’s a problem because users have no way of vetting the security of JavaScript and can’t expect messengers to have the same exploit protections modern browsers have.

Link Previews: How hackers can run any JavaScript code on Instagram servers.

Haj Bakry and Mysk reported their findings to Facebook, and the company said that both apps work as intended. LinkedIn performed only slightly better. Its only difference was that, rather than copying files of any size, it copied only the first 50 megabytes.

Meanwhile, when the Line app opens an encrypted message and finds a link, it appears to send the link to the Line server to generate a preview. “We believe that this defeats the purpose of end-to-end encryption, since LINE servers know all about the links that are being sent through the app, and who’s sharing which links to whom,” Haj Bakry and Mysk wrote.

Discord, Google Hangouts, Slack, Twitter, and Zoom also copy files, but they cap the amount of data at anywhere from 15MB to 50MB. The chart below provides a comparison of each app in the study.

Chart: Talal Haj Bakry and Tommy Mysk

All in all, the study is good news because it shows that most messaging apps are doing things right. For instance, Signal, Threema, TikTok, and WeChat all give users the option of receiving no link preview. For truly sensitive messages and users who want as much privacy as possible, this is the best setting. Even when previews are provided, these apps use relatively safe means to render them.

Still, Monday’s post is a good reminder that private messages aren’t always, well, private.

“Whenever you’re building a new feature, always keep in mind what sort of privacy and security implications it may have, especially if this feature is going to be used by thousands or even millions of people around the world,” the researchers wrote. “Link previews are a nice feature that users generally benefit from, but here we’ve showcased the wide range of problems this feature can have when privacy and security concerns aren’t carefully considered.”

Facebook open-sources a static analyzer for Python code

Need a tool to check your Python-based applications for security issues? Facebook has open-sourced Pysa (Python Static Analyzer), a tool that looks at how data flows through the code and helps developers prevent data flowing into places it shouldn’t.


How the Python Static Analyzer works

Pysa is a security-focused tool built on top of Pyre, Facebook’s performant type checker for Python.

“Pysa tracks flows of data through a program. The user defines sources (places where important data originates) as well as sinks (places where the data from the source shouldn’t end up),” Facebook security engineer Graham Bleaney and software engineer Sinan Cepel explained.

“Pysa performs iterative rounds of analysis to build summaries to determine which functions return data from a source and which functions have parameters that eventually reach a sink. If Pysa finds that a source eventually connects to a sink, it reports an issue.”
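
To illustrate the kind of source-to-sink flow a taint analyzer like Pysa looks for, here is a contrived snippet; the request object and function names are hypothetical, and the comments mark what would typically be modeled as the source and the sink.

```python
import sqlite3

def get_username(request) -> str:
    # SOURCE: data originating from an untrusted user (e.g. an HTTP request)
    return request.GET["username"]

def find_user(request):
    username = get_username(request)  # tainted data enters the program here
    conn = sqlite3.connect("users.db")
    # SINK: tainted data is interpolated into a SQL string, a classic injection risk.
    # An analyzer that models request data as a source and string-built queries as a
    # sink would report this flow as an issue.
    return conn.execute(f"SELECT * FROM users WHERE name = '{username}'").fetchall()
```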

Facebook uses Pysa internally to check, quickly, the Python code that powers Instagram’s servers. The tool vets developers’ proposed code changes for security and privacy issues to prevent them from being introduced into the codebase, and it also detects existing issues in a codebase.

Detected issues are flagged and, depending on their type, reported either to the developer or to security engineers for review.

Pysa is open source, and a number of already developed definitions are available to help it find security issues.

“Because we use open source Python server frameworks such as Django and Tornado for our own products, Pysa can start finding security issues in projects using these frameworks from the first run. Using Pysa for frameworks we don’t already have coverage for is generally as simple as adding a few lines of configuration to tell Pysa where data enters the server,” the two engineers added.

The tool’s limitations and stumbling blocks

Pysa can’t detect all security or privacy issues, only data flow-related security issues. What’s more, it can’t detect all data flow-related issues, because the Python programming language is very flexible and dynamic (it allows dynamic code imports, lets code change what a function call does at runtime, and so on).

Finally, those who use it have to make a choice about how many false positives and negatives they will tolerate.

“Because of the importance of catching security issues, we built Pysa to avoid false negatives and catch as many issues as possible. Reducing false negatives, however, may require trade-offs that increase false positives. Too many false positives could in turn cause alert fatigue and risk real issues being missed in the noise,” the engineers explained.

The number of false positives can be reduced by using sanitizers, as well as manually added and automatic features.
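
Conceptually, a sanitizer is a function the analyzer is told to trust: data that passes through it is no longer treated as tainted, which suppresses the corresponding reports. The example below is a generic sketch of the idea, not Pysa’s actual configuration syntax; the path and field names are hypothetical.

```python
import subprocess

def sanitize_filename(user_value: str) -> str:
    # If this function is declared as a sanitizer, the analyzer treats its output
    # as safe. Here it is also made genuinely safe by allow-listing characters.
    cleaned = "".join(c for c in user_value if c.isalnum() or c in "._-")
    return cleaned or "default.log"

def read_log(request):
    filename = sanitize_filename(request.GET["file"])  # taint is removed here
    # Without the sanitizer, user input flowing into a command would typically be
    # reported as a potential remote-code-execution issue (a false positive if the
    # value is in fact validated elsewhere).
    return subprocess.run(
        ["cat", f"/var/log/myapp/{filename}"], capture_output=True
    ).stdout
```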

New technique keeps your online photos safe from facial recognition algorithms

In one second, the human eye can only scan through a few photographs. Computers, on the other hand, are capable of performing billions of calculations in the same amount of time. With the explosion of social media, images have become the new social currency on the internet.


An AI algorithm will identify a cat in the picture on the left but will not detect a cat in the picture on the right

Today, Facebook and Instagram can automatically tag a user in photos, while Google Photos can group one’s photos together via the people present in those photos using Google’s own image recognition technology.

Dealing with threats against digital privacy today therefore extends beyond stopping humans from seeing the photos; it also means preventing machines from harvesting personal data from images. The frontiers of privacy protection need to be extended now to include machines.

Safeguarding sensitive information in photos

Led by Professor Mohan Kankanhalli, Dean of the School of Computing at the National University of Singapore (NUS), the research team from the School’s Department of Computer Science has developed a technique that safeguards sensitive information in photos by making subtle changes that are almost imperceptible to humans but render selected features undetectable by known algorithms.

Visual distortion using currently available technologies will ruin the aesthetics of the photograph as the image needs to be heavily altered to fool the machines. To overcome this limitation, the research team developed a “human sensitivity map” that quantifies how humans react to visual distortion in different parts of an image across a wide variety of scenes.

The development process started with a study involving 234 participants and a set of 860 images. Participants were shown two copies of the same image and they had to pick out the copy that was visually distorted.

After analysing the results, the research team discovered that human sensitivity is influenced by multiple factors. These factors included things like illumination, texture, object sentiment and semantics.

Applying visual distortion with minimal disruption

Using this “human sensitivity map”, the team fine-tuned their technique to apply visual distortion with minimal disruption to the image aesthetics by injecting it into areas with low human sensitivity.
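
The paper’s implementation is not reproduced here, but the general idea of weighting a perturbation by a sensitivity map can be sketched in a few lines of NumPy. The random noise, the epsilon budget and the map itself are placeholders; the actual technique crafts distortions specifically to defeat recognition algorithms.

```python
import numpy as np

def apply_masked_perturbation(image: np.ndarray,
                              sensitivity_map: np.ndarray,
                              epsilon: float = 8.0) -> np.ndarray:
    """Add distortion mostly where humans are least likely to notice it.

    image:           H x W x 3 array of uint8 pixel values
    sensitivity_map: H x W array in [0, 1]; 1 means humans readily notice changes there
    epsilon:         assumed per-pixel distortion budget, in pixel-value units
    """
    # Placeholder perturbation; a real system would optimize this against a
    # specific recognition model instead of using random noise.
    noise = np.random.uniform(-1.0, 1.0, size=image.shape)

    # Scale the distortion down in regions where humans are sensitive to change.
    weight = (1.0 - sensitivity_map)[..., np.newaxis]  # broadcast over color channels
    perturbed = image.astype(np.float32) + epsilon * noise * weight

    return np.clip(perturbed, 0, 255).astype(np.uint8)
```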

The NUS team took six months of research to develop this novel technique.

“It is too late to stop people from posting photos on social media in the interest of digital privacy. However, the reliance on AI is something we can target as the threat from human stalkers pales in comparison to the might of machines. Our solution enables the best of both worlds as users can still post their photos online safe from the prying eye of an algorithm,” said Prof Kankanhalli.

End users can use this technology to help mask vital attributes on their photos before posting them online and there is also the possibility of social media platforms integrating this into their system by default. This will introduce an additional layer of privacy protection and peace of mind.

The team also plans to extend this technology to videos, which is another prominent type of media frequently shared on social media platforms.

How people deal with fake news or misinformation in their social media feeds

Social media platforms, such as Facebook and Twitter, provide people with a lot of information, but it’s getting harder and harder to tell what’s real and what’s not.



Researchers at the University of Washington wanted to know how people investigated potentially suspicious posts on their own feeds. The team watched 25 participants scroll through their Facebook or Twitter feeds while, unbeknownst to them, a Google Chrome extension randomly added debunked content on top of some of the real posts.

Participants had various reactions to encountering a fake post: Some outright ignored it, some took it at face value, some investigated whether it was true, and some were suspicious of it but then chose to ignore it.

The research

“We wanted to understand what people do when they encounter fake news or misinformation in their feeds. Do they notice it? What do they do about it?” said senior author Franziska Roesner, a UW associate professor in the Paul G. Allen School of Computer Science & Engineering.

“There are a lot of people who are trying to be good consumers of information and they’re struggling. If we can understand what these people are doing, we might be able to design tools that can help them.”

Previous research on how people interact with misinformation asked participants to examine content from a researcher-created account, not from someone they chose to follow.

“That might make people automatically suspicious,” said lead author Christine Geeng, a UW doctoral student in the Allen School. “We made sure that all the posts looked like they came from people that our participants followed.”

The researchers recruited participants ages 18 to 74 from across the Seattle area, explaining that the team was interested in seeing how people use social media. Participants used Twitter or Facebook at least once a week and often used the social media platforms on a laptop.

Then the team developed a Chrome extension that would randomly add fake posts or memes that had been debunked by the fact-checking website Snopes.com on top of real posts to make it temporarily appear they were being shared by people on participants’ feeds. So instead of seeing a cousin’s post about a recent vacation, a participant would see their cousin share one of the fake stories instead.

The researchers either installed the extension on the participant’s laptop or the participant logged into their accounts on the researcher’s laptop, which had the extension enabled.

The team told the participants that the extension would modify their feeds – the researchers did not say how – and would track their likes and shares during the study – though, in fact, it wasn’t tracking anything. The extension was removed from participants’ laptops at the end of the study.

“We’d have them scroll through their feeds with the extension active,” Geeng said. “I told them to think aloud about what they were doing or what they would do if they were in a situation without me in the room. So then people would talk about ‘Oh yeah, I would read this article,’ or ‘I would skip this.’ Sometimes I would ask questions like, ‘Why are you skipping this? Why would you like that?’”

Participants could not actually like or share the fake posts. A retweet would share the real content beneath the fake post. The one time a participant did retweet content under the fake post, the researchers helped them undo it after the study was over. On Facebook, the like and share buttons didn’t work at all.

The results

After the participants encountered all the fake posts – nine for Facebook and seven for Twitter – the researchers stopped the study and explained what was going on.

“It wasn’t like we said, ‘Hey, there were some fake posts in there.’ We said, ‘It’s hard to spot misinformation. Here were all the fake posts you just saw. These were fake, and your friends did not really post them,’” Geeng said.

“Our goal was not to trick participants or to make them feel exposed. We wanted to normalize the difficulty of determining what’s fake and what’s not.”

The researchers concluded the interview by asking participants to share what types of strategies they use to detect misinformation.

In general, the researchers found that participants ignored many posts, especially those they deemed too long, overly political or not relevant to them.

But certain types of posts made participants skeptical. For example, people noticed when a post didn’t match someone’s usual content. Sometimes participants investigated suspicious posts – by looking at who posted it, evaluating the content’s source or reading the comments below the post – and other times, people just scrolled past them.

“I am interested in the times that people are skeptical but then choose not to investigate. Do they still incorporate it into their worldviews somehow?” Roesner said.

“At the time someone might say, ‘That’s an ad. I’m going to ignore it.’ But then later do they remember something about the content, and forget that it was from an ad they skipped? That’s something we’re trying to study more now.”

While this study was small, it does provide a framework for how people react to misinformation on social media, the team said. Now researchers can use this as a starting point to seek interventions to help people resist misinformation in their feeds.

“Participants had these strong models of what their feeds and the people in their social network were normally like. They noticed when it was weird. And that surprised me a little,” Roesner said.

“It’s easy to say we need to build these social media platforms so that people don’t get confused by fake posts. But I think there are opportunities for designers to incorporate people and their understanding of their own networks to design better social media platforms.”

Three API security risks in the wake of the Facebook breach

Facebook recently pledged to improve its security following a lawsuit that resulted from a 2018 data breach. The breach, which was left open for more than 20 months, resulted in the theft of 30 million authentication tokens, along with personally identifiable information belonging to nearly as many users. A “View As” feature that enabled developers to render user pages also let attackers obtain the user’s access token.

The theft of access tokens represents a major API security risk moving forward, but it also highlights how API risks can remain undetected for so long. Of course, Facebook is not unique in this risk. As Microsoft CEO Satya Nadella quipped, “all companies are software companies.”

Digital transformation and cloud migration trends have accelerated an agile development cycle known as continuous integration and continuous deployment/delivery (CI/CD), which enables DevOps to constantly push new updates–like that Facebook app in your pocket.

Yet even as the industry embraces this new software model, much of the security has been commodified by infrastructure providers like Amazon and Microsoft, including container protection, authorization, and data encryption. Likewise, the security functionality of first-generation gateways and firewalls, such as DDoS protection and bot mitigation, has also been absorbed into the infrastructure.

However, since this first generation of infrastructure security is more or less as good as it gets, the deeper risk lies in the underlying application transport layer: its APIs. The reason that APIs are so powerful as a communication tool is the same reason that they are so vulnerable: APIs have great flexibility in their parameters. As such, they exist in everything from so-called “single-page” web applications to mobile apps, and even industrial IoT systems.

Traditionally, API traffic has moved from internal to external callers (“north-south”), which is why the first generation of security has been a tolerable band-aid. However, modern application architectures now enable internal application-to-application communication (“east-west”), which represents a critical risk surface because it allows attackers to move laterally. Furthermore, there is little visibility into this traffic.

API risk is rooted in a lack of visibility, not only into its traffic, but also into its flexible and powerful parameters, known as API specifications—or “specs.” DevOps and SecOps attempt to mitigate this risk by creating and maintaining API catalogs, which are a collection of its specs. But, the reality is that this is a highly manual process in a constantly changing environment. Keeping it up-to-date is easier said than done.

OWASP has introduced its API Security Top 10 to help make sense of this new API risk surface, and it is a helpful starting point for a discussion of API risk. We can further simplify API risk into three common categories:

1. Unknown or outdated API specifications.
2. Uninspected APIs.
3. Uncontrolled third-party APIs.

Risks related to unknown or outdated API specifications include a complete absence of an API spec, a loosely-defined API spec, or an out-of-spec API call, which typically result from rapid development changes. Bad actors can exploit out-of-spec API calls by accessing customer data through undocumented “shadow” APIs or even simply elevating permissions through a parameter like “administration=yes.”
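
A minimal sketch of the kind of check that keeps calls in spec: each incoming request is compared against a declared allow-list of endpoints and parameters, so an undocumented field such as “administration=yes” is rejected instead of silently honored. The endpoint path, parameter names and spec format below are hypothetical.

```python
# Hypothetical, hand-maintained spec for one endpoint; in practice this would be
# generated from an OpenAPI document kept in the API catalog.
ALLOWED_PARAMS = {
    "/v1/customers": {"customer_id", "fields", "page"},
}

class OutOfSpecRequest(Exception):
    pass

def validate_request(path: str, params: dict) -> None:
    """Reject calls to unknown endpoints or calls carrying undocumented parameters."""
    allowed = ALLOWED_PARAMS.get(path)
    if allowed is None:
        raise OutOfSpecRequest(f"shadow API call: {path} is not in the spec")
    undocumented = set(params) - allowed
    if undocumented:
        # e.g. an attacker appending administration=yes to elevate permissions
        raise OutOfSpecRequest(f"undocumented parameters {undocumented} on {path}")
```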

The risks related to uninspected APIs include lateral attacks launched through compromised servers, encrypted traffic that goes uninspected, and API parameters set outside their critical range—such as sabotaging an industrial IoT device by setting its temperature high enough to make it break down. Perhaps the most common of these risks is failing to validate the login session against its parameters. These sorts of mismatches can be the source of severe data breaches, as back-end services are unable to validate the credentials used to exfiltrate data.
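
Two of those inspection gaps can be illustrated in the same style: confirming that the object a caller requests actually belongs to their login session, and rejecting parameter values outside a safe operating range. The session structure, endpoint logic and temperature limit are assumptions made for the sketch.

```python
MAX_SAFE_TEMP_C = 80.0  # assumed critical range for the illustrative IoT device

class ApiError(Exception):
    pass

def get_account(session: dict, requested_account_id: str) -> str:
    # Validate the login session against the request parameter: a caller may only
    # read the account bound to their own session, not an arbitrary account ID.
    if requested_account_id != session.get("account_id"):
        raise ApiError("session does not authorize access to this account")
    return requested_account_id

def set_device_temperature(value_c: float) -> float:
    # Reject parameter values outside the critical range instead of forwarding
    # them straight to the device.
    if not 0.0 <= value_c <= MAX_SAFE_TEMP_C:
        raise ApiError(f"temperature {value_c} C is outside the allowed range")
    return value_c
```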

Finally, the risks related to uncontrolled third-party APIs demonstrate that the use of APIs is not limited to incoming calls from enterprise users and partners, but also outgoing calls from business applications to external services. These outgoing calls can be abused to exfiltrate data, such as exposed storage server APIs. Alternatively, public API calls may use compromised credentials to access enterprise services to exfiltrate data. Unlike private data centers, it may not be possible to turn off these public APIs.

No matter how you slice it, the source of these API risks is a lack of visibility, both into their traffic and into their parameters. Next-generation API security solutions offer the promise of automatically discovering and continuously maintaining API catalogs, for further monitoring and alerting. For those that do maintain an up-to-date API catalog, there is benefit not only to security, but also to improving quality assurance and debugging across the DevOps process.

As DevOps moves from development to test environments, and ultimately into production, an API catalog can be used to compare and contrast areas of improvement. In this regard, it is imperative not only for CISOs to gain visibility into their APIs, but also for CIOs. In this way, they will not only secure their APIs, but also accelerate their digital transformation.

The 25 most impersonated brands in phishing attacks

PayPal remains the top brand impersonated in phishing attacks for the second quarter in a row, with Facebook taking the #2 spot and Microsoft coming in third, according to Vade Secure.


Leveraging data from more than 600 million protected mailboxes worldwide, Vade’s machine learning algorithms identify the brands being impersonated as part of its real-time analysis of the URL and page content.

PayPal reigns supreme, again

For the second straight quarter, PayPal was the most impersonated brand in phishing attacks. While PayPal phishing was down 31% compared to Q3, the volume was up 23% year over year. With a daily average of 124 unique URLs, PayPal phishing is a prevalent threat targeting both consumers and SMB employees.

Illegitimate notes and file sharing keep Microsoft phishing in the spotlight

Microsoft remained the primary corporate target in Q4, coming in at #3 on this quarter’s Phishers’ Favorites list. With 200 million active business users and counting, Office 365 continues to be the primary driver for Microsoft phishing.

Cybercriminals seek O365 credentials in order to access sensitive corporate information and use compromised accounts to launch targeted spear phishing attacks on other employees or partners.

In Q4, large volumes of file-sharing phishing were still seen, including fake OneDrive/SharePoint notifications leading directly to a phishing page and legitimate notifications leading to files containing phishing URLs. There was also the emergence of note phishing impersonating services like OneNote and Evernote.

While the campaigns are similar, the key difference is that OneNote or Evernote notes are not files, but rather HTML pages. Thus, the same technology that is used by email security vendors to scan the contents of files doesn’t work with HTML pages, which means these emails have a higher likelihood of reaching users’ inboxes.

Cybercriminals target your money, but impersonate smaller banks

For the second consecutive quarter, financial services companies accounted for the most brands and the most URLs in the Phishers’ Favorites report. A difference in Q4, however, is that there was a shift towards phishing customers of smaller banks.

One reason for this could be that while large banks have invested in building out security operations centers, incident response and takedown procedures to limit phishing campaigns impersonating their brand, smaller banks may not have the same level of controls in place.


Additional key findings

  • Netflix (#4), WhatsApp (#5), Bank of America (#6), CIBC (#7), Desjardins (#8), Apple (#9) and Amazon (#10) rounded out the top 10 most impersonated brands.
  • Despite having only three brands in the top 25, social media increased its share of phishing URLs from 13.1% in Q3 to 24.1% in Q4 2019. This growth was driven by WhatsApp, which shot up 63 spots to #5, and Instagram, which rose 16 spots to #13.
  • Netflix phishing had been a model of consistency, growing for six consecutive quarters, but that trend reversed abruptly in Q4, with a 50.2% drop in unique phishing URLs. In fact, the 6,758 Netflix phishing URLs detected in Q4 was the lowest total since Q2 2018.
  • For the first time in Phishers’ Favorites history, Friday was the top day overall for phishing emails, followed closely by Thursday. Tuesday, Wednesday and Monday took the middle three spots. As usual, Saturday and Sunday were at the bottom.

“When it comes to phishing in particular and cyberattacks in general, change is the only constant,” said Adrien Gendre, Chief Solution Architect at Vade Secure.

“Threats are evolving rapidly and they are becoming more and more credible to end users. This underscores the need for a comprehensive approach to email security combining threat detection, post-delivery remediation and on-the-fly user training as the last line of defense.”

Most impersonated brands in phishing attacks

The complete list of the 25 most impersonated brands in phishing attacks compiled by Vade Secure is available in the full Phishers’ Favorites report.


Facebook users will be notified when their credentials are used for third-party app logins

Facebook will (finally!) explicitly tell users who use Facebook Login to log into third-party apps what information those apps are harvesting from their FB account.


At the same time, users will be able to react quickly if someone managed to compromise their Facebook accounts and is using their credentials to access other apps and websites.

Login Notifications

The new feature, called Login Notifications, will deliver notifications to users via the Facebook app and the user’s associated email address.

The sending of those notifications will be triggered every time a user (or attacker):

  • Logs into a third-party app with Facebook Login and grants the app access to their information
  • Re-uses Facebook Login to log into a third-party app after an app’s access to information has expired.

Each notification will include a list of the information the app/website pulls from the Facebook account to personalize the user’s experience, as well as offer a direct link to Facebook Settings > Apps and Websites, so users can limit the information shared with the app/service or remove the app altogether.

Privacy push

“The design and content of the Login Notifications remind users that they have full control over the information they share with 3rd party apps, with a clear path to edit those settings,” Puxuan Qi, a software engineer at Facebook, explained.

“We will continue to test additional user control features in early 2020, including bringing permissions to the forefront of the user experience when logging into a 3rd party app with Facebook Login.”

This new feature is part of Facebook’s broader attempt to show they care about user privacy and minimize the fallout of incidents such as the massive 2018 Facebook data breach (when attackers managed to steal access tokens of at least 50 million users, potentially allowing them to take over victims’ Facebook accounts and log into accounts the victims opened on third-party websites and apps by using Facebook Login) and the Cambridge Analytica scandal (CA used information collected through third-party apps without users agreeing to their data being used to fuel election campaigns or even knowing about it).

Senate Judiciary committee interrogates Apple, Facebook about crypto


Lindsey Graham doesn’t want people reading his texts. But he’ll make darned sure there are backdoors for law enforcement into encrypted texts and devices, and he will pass a law if he needs to.

In a hearing of the Senate Judiciary Committee yesterday, while their counterparts in the House were busy with articles of impeachment, senators questioned New York District Attorney Cyrus Vance, University of Texas Professor Matt Tait, and experts from Apple and Facebook over the issue of gaining legal access to data in encrypted devices and messages. And committee chairman Sen. Lindsey Graham (R-S.C.) warned the representatives of the tech companies, “You’re gonna find a way to do this or we’re going to do it for you.”

The hearing, entitled “Encryption and Lawful Access: Evaluating Benefits and Risks to Public Safety and Privacy,” was very heavy on the public safety with a few passing words about privacy. Graham said that he appreciated “the fact that people cannot hack into my phone, listen to my phone calls, follow the messages, the texts that I receive. I think all of us want devices that protect our privacy.” However, he said, “no American should want a device that is a safe haven for criminality,” citing “encrypted apps that child molesters use” as an example.

“When they get a warrant or court order, I want the government to be able to look and find all relevant information,” Graham declared. “In American law there is no place that’s immune from inquiry if criminality is involved… I’m not about to create a safe haven for criminals where they can plan their misdeeds and store information in a place that law enforcement can never access it.”

Graham and ranking member Sen. Dianne Feinstein (D-Calif.)—who referenced throughout the hearing the 2015 San Bernardino mass shooting and the confrontation between Apple and the Federal Bureau of Investigation that resulted from mishandling of the shooter’s county-owned iCloud account by administrators directed by the FBI—closed ranks on the issue.

“Everyone agrees that having the ability to safeguard our personal data is important,” Feinstein said. “At the same time, we’ve seen criminals increasingly use technology, including encryption, in an effort to evade prosecution. We cannot let that happen. It is important that all criminals, whether foreign or domestic, be brought to justice.”

Vance, for his part, called Apple’s and Google’s introduction of device encryption “the single most important challenge to law enforcement over the last 10 years… Apple and Google upended centuries of American jurisprudence.” He cited a human trafficking case he could not get evidence for because of encryption, recounting how the suspect in jail told a cellmate that Apple’s encryption was “a gift from God” to him.

That isn’t how any of this works

Vance has been a frequent and long advocate for federal legislation to ensure legal, extraordinary access to data. “I’m not sure state and local law enforcement are going to be able to bridge the gap with technology without congressional intervention,” Vance told the committee in a response to a question from Sen. Feinstein. Explaining that his office’s lab gets about 1,600 devices a year as part of case evidence, Vance said, “About 82 percent are locked—it was 60 percent four years ago,” he said. “About half of those are Apple devices. Using technology, we’re able to unlock about half of the devices—so there are about 300 to 400 phones [a year] that we can’t access with the technology we have. There are many, many serious cases where we can’t access the device in the time period where it is most important.”

Feinstein then told the other witnesses, “You heard a very prominent district attorney from New York explain what the situation is… I’d like to have your response on what you’re going to do about it. That will determine the degree to which we do something about it.”

Apple Manager of User Privacy Erik Neuenschwander responded that Apple will continue to work with law enforcement, citing the 127,000 requests from law enforcement for assistance Apple’s team—which includes former law enforcement officials—has responded to over the past seven years, in addition to thousands of emergency requests that Apple has responded to usually within 20 minutes. “We’re going to continue to work with law enforcement as we have to find ways through this,” Neuenschwander said. “We have a team of dedicated professionals that is working on a daily basis with law enforcement.”

Feinstein interrupted Neuenschwander: “My understanding is that even a court order won’t convince you to open the device.”

Neuenschwander replied, “I don’t think it’s a matter of convincing or a court order. It’s the fact that we don’t have the capability today to give the data off the device to law enforcement.” There had been conversations about making changes to fix that, Neuenschwander said, “But ultimately we believe strong encryption makes us all safer, and we haven’t found a way to provide access to users’ devices that wouldn’t weaken security for everyone.”

Vance said in response that Apple should re-engineer its phones to allow access. “What they created, they can fix,” he said.

Social media platforms leave 95% of reported fake accounts up, study finds

One hundred cardboard cutouts of Facebook founder and CEO Mark Zuckerberg stand outside the US Capitol in Washington, DC, April 10, 2018.

It’s no secret that every major social media platform is chock-full of bad actors, fake accounts, and bots. The big companies continually pledge to do a better job weeding out organized networks of fake accounts, but a new report confirms what many of us have long suspected: they’re pretty terrible at doing so.

The report comes this week from researchers with the NATO Strategic Communication Centre of Excellence (StratCom). Through the four-month period between May and August of this year, the research team conducted an experiment to see just how easy it is to buy your way into a network of fake accounts and how hard it is to get social media platforms to do anything about it.

The research team spent €300 (about $332) to purchase engagement on Facebook, Instagram, Twitter, and YouTube, the report (PDF) explains. That sum bought 3,520 comments, 25,750 likes, 20,000 views, and 5,100 followers. They then used those interactions to work backward to about 19,000 inauthentic accounts that were used for social media manipulation purposes.

About a month after buying all that engagement, the research team looked at the status of all those fake accounts and found that about 80 percent were still active. So they reported a sample selection of those accounts to the platforms as fraudulent. Then came the most damning statistic: three weeks after being reported as fake, 95 percent of the fake accounts were still active.

“Based on this experiment and several other studies we have conducted over the last two years, we assess that Facebook, Instagram, Twitter, and YouTube are still failing to adequately counter inauthentic behavior on their platforms,” the researchers concluded. “Self-regulation is not working.”

Too big to govern

The social media platforms are fighting a distinctly uphill battle. The scale of Facebook’s challenge, in particular, is enormous. The company boasts 2.2 billion daily users of its combined platforms. Broken down by platform, the original big blue Facebook app has about 2.45 billion monthly active users, and Instagram has more than one billion.

Facebook frequently posts status updates about “removing coordinated inauthentic behavior” from its services. Each of those updates, however, tends to snag between a few dozen and a few hundred accounts, pages, and groups, usually sponsored by foreign actors. That’s barely a drop in the bucket just compared to the 19,000 fake accounts that one research study uncovered from one $300 outlay, let alone the vast ocean of other fake accounts out there in the world.

The issue, however, is both serious and pressing. A majority of the accounts found in this study were engaged in commercial behavior rather than political troublemaking. But attempted foreign interference in both a crucial national election on the horizon in the UK this month and the high-stakes US federal election next year is all but guaranteed.

The Senate Intelligence Committee’s report (PDF) on social media interference in the 2016 US election is expansive and thorough. The committee determined Russia’s Internet Research Agency (IRA) used social media to “conduct an information warfare campaign designed to spread disinformation and societal division in the United States,” including targeted ads, fake news articles, and other tactics. The IRA used and uses several different platforms, the committee found, but its primary vectors are Facebook and Instagram.

Facebook has promised to crack down hard on coordinated inauthentic behavior heading into the 2020 US election, but its challenges with content moderation are by now legendary. Working conditions for the company’s legions of contract content moderators are terrible, as repeatedly reported—and it’s hard to imagine the number of humans you’d need to review literally trillions of pieces of content posted every day. Using software tools to recognize and block inauthentic actors is obviously the only way to capture it at any meaningful scale, but the development of those tools is clearly also still a work in progress.

You can migrate your photos from Facebook to Google next year

A wall of user photos forms a Facebook logo at the company’s data center in Lulea, Sweden.

If you feel like you don’t want to spend much time on Facebook anymore but don’t want to lose up to 15 years’ worth of shared photos, good news: the company is rolling out a tool that will let you export your image library directly to Google Photos.

Facebook announced the new tool in a corporate blog post today. The initiative springs from the Data Transfer Project, a collaboration among Apple, Facebook, Google, Microsoft, and Twitter to make some data transferable among those platforms.

The pilot begins this week in Ireland (where Facebook has been under investigation for alleged violations of EU data privacy law). The tool is then expected to roll out to users in the rest of the world sometime in the first half of 2020. Users who have the option available to them will see the tool in Facebook’s settings under Your Facebook Information.

If you’re not keen on putting your images into Google Photos either, there will in theory be other platforms made available in the future. For now, Facebook did not specify what the platforms are, nor did the company provide a timetable.

Ahead of the law

Facebook’s gesture is not borne of pure magnanimity. A bevy of probes and investigations, together with enforcement of new laws, are likely forcing the company’s hand.

You can make the argument that a company hoarding consumers’ data for itself, and refusing to release it to a competitor if the consumer asks, is in violation of antitrust law. Facebook currently is under antitrust investigation by basically every US regulator that has antitrust authority, both state and federal, and may well prefer to avoid antagonizing those regulators at this time.

Lawmakers worldwide are also forcing the issue. Europe’s General Data Protection Regulation (GDPR), which went into effect in 2018, includes a “right to data portability” for consumers. Here in the States, the California Consumer Privacy Act also provides for a right to data portability. That law goes into effect next month.

Federal legislators are also trying to address the issue. A bipartisan group of senators, including Richard Blumenthal (D-Conn.), Josh Hawley (R-Mo.), and Mark Warner (D-Va.), in October proposed a bill that would require US firms to make user data portable. Additionally, two other proposed sweeping privacy and data reform bills—one in the House and one in the Senate—both include explicit rights to data portability.

Data-Enriched Profiles on 1.2B People Exposed in Gigantic Leak
