Apple has released iOS 14, with a bucketload of new and improved functional features and a handful of privacy and security ones.
New privacy and security features in iOS 14
The new iOS will tell you when an app is using your camera or microphone
It will show an indicator dot in the top right part of the device’s screen: green when the camera (or camera and microphone) is in use, orange when only the microphone is.
The downside is that it’s fairly small and you might miss it if other things are happening on the screen. The upside is that you can check which app most recently used your camera or microphone via the Control Center.
Of course, you can deny access to your camera and microphone to any app through the Privacy settings.
You can share your approximate location with apps instead of your precise one
Go to Settings > Privacy > Location Services, where you can configure for each app whether it should access your device’s location “only while the app is in use”, “always”, “never”, or ask you for permission each time you run it (in which case you get the option to grant access “only once”).
When you allow location access for an app, you’ll get the option to provide your precise location or leave it to the app to determine your approximate location (the latter is good enough for apps that show local news or weather).
You can choose to share only some photos with apps
Under Privacy > Photos you can see which apps have requested access to your photos and you can choose to restrict each app’s access just to selected photos or photo albums (or none).
You can limit tracking
Each time you connect to a Wi-Fi network, your phone will present a different MAC address. This prevents ISPs and advertisers from tracking your movements (i.e., seeing when and where you connect to a network), and the option is on by default.
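Apple doesn’t document the exact algorithm it uses, but a randomized Wi-Fi address is simply a locally administered, unicast MAC address. A minimal Python sketch (purely illustrative, not Apple’s implementation) of what such an address looks like:

```python
import random

def random_private_mac() -> str:
    """Generate a random locally administered, unicast MAC address.

    Randomized Wi-Fi addresses (like the per-network ones iOS 14
    presents) set the 'locally administered' bit (0x02) and clear
    the 'multicast' bit (0x01) in the first octet.
    """
    octets = [random.randint(0, 255) for _ in range(6)]
    octets[0] = (octets[0] | 0x02) & 0xFE  # locally administered, unicast
    return ":".join(f"{o:02x}" for o in octets)

# Each call yields a fresh address, e.g. '7a:3f:...:d1'
print(random_private_mac())
```

Because the first octet always carries the locally-administered bit, network operators can tell the address is randomized, but cannot link it to the device’s hardware address.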
In Settings > Privacy > Tracking, you can choose to not allow apps to send you a request to track you. If you do that, “any app that attempts to ask you for your permission will be blocked from asking and automatically informed that you have requested not to be tracked. In addition, all apps, other than those that you have previously given permission to track, will be blocked from accessing the device’s Advertising Identifier.”
If you allow tracking, tracking permissions can also be controlled on a per-app basis.
It has to be pointed out, though, that these app tracking options will start working as intended in early 2021, when these privacy controls become mandatory for developers.
“We want to give developers the time they need to make the necessary changes, and as a result, the requirement to use this tracking permission will go into effect early next year,” Apple explained.
Facebook complained earlier this year that these new privacy requirements would have a significant negative impact on its advertising business.
You will be able to see a summary of an app’s privacy practices before you download it from the App Store
You can’t see these summaries yet because app developers have yet to roll them out, but once they are ready, you’ll be able to peruse them through an “App Privacy” button on the app’s listing in the store.
You’ll be able to see which tracking cookies have been blocked
The Safari mobile browser has been updated to show a Privacy Report, which shows all the cross-site tracking cookies it has blocked in the last 30 days if you turned on Prevent Cross-Site Tracking in Safari’s Privacy and Security Settings.
The report is accessible from the AA menu in the browser’s address bar.
You’ll be notified if a password you stored in the iCloud Keychain has been spotted in a known data breach
To turn this option on, go to Settings > Passwords > Security Recommendations and toggle on Detect Compromised Passwords. For the secure password monitoring to work, iCloud Keychain has to be enabled.
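Apple has not published the protocol behind this check, but the widely used k-anonymity approach (popularized by Have I Been Pwned’s Pwned Passwords service) conveys the idea: the device reveals only a short hash prefix, and the suffix comparison happens locally. The breach set below is a tiny invented stand-in for a real breach corpus:

```python
import hashlib

# Hypothetical stand-in for a breach corpus; real services hold
# hundreds of millions of SHA-1 password hashes server-side.
BREACHED_HASHES = {
    hashlib.sha1(p.encode()).hexdigest().upper()
    for p in ["password", "123456", "letmein"]
}

def is_compromised(password: str) -> bool:
    """k-anonymity-style check: only the 5-character hash prefix
    would ever be sent to a server; the suffix never leaves the
    device."""
    digest = hashlib.sha1(password.encode()).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    # The server would return all suffixes matching the prefix;
    # here we filter the local set the same way.
    candidates = {h[5:] for h in BREACHED_HASHES if h[:5] == prefix}
    return suffix in candidates

print(is_compromised("letmein"))  # True: it is in the breach set
```

The key property is that the server learns only that *some* password hashing to that prefix was queried, never the password itself.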
In iOS 14, Apple has also fixed a number of security vulnerabilities, including:
- A vulnerability in the integrated development environment (IDE) device support component that could allow an attacker in a privileged network position to execute arbitrary code on a paired device during a debug session over the network (CVE-2020-9992)
- A logic issue affecting the sandbox that may allow a malicious application to access restricted files (CVE-2020-9968)
Apple has released Safari 14, which features many functional improvements, a Privacy Report that shows all the trackers the browser has neutralized, and no longer supports Adobe Flash.
Safari 14 sports a redesign of the tab bar, which now displays site favicons by default and previews of the contents of some pages (when the user hovers over a tab), and a customizable start page.
It also features improved extension support: Apple has already put things in motion to allow developers to easily convert an existing extension into a Safari web extension or build a new one.
But on to the Safari 14 privacy and security additions:
The Privacy Report shows the cross-site trackers that Intelligent Tracking Prevention (ITP) prevented from accessing identifying information, and how many and which trackers the visited websites sport. It also shows which entity is behind each tracker.
ITP uses on-device machine learning to identify and block trackers, and known trackers are independently verified by DuckDuckGo. Safari blocks trackers, and can compile the Privacy Report, only if the “Prevent Cross-Site Tracking” option is turned on.
The report is accessible through the “Safari” tab, via the start page, and via the shield-style icon to the left of the browser’s address bar.
Secure password monitoring
Safari 14 will notify users when one of their saved passwords in iCloud Keychain has shown up in a data breach (iCloud Keychain has to be enabled, of course).
It will also allow them to immediately change the password by pointing them to the correct page for each website (provided the site’s admin serves that page’s URL from the web server’s .well-known directory).
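The mechanism Safari relies on here is the proposed well-known URL for changing passwords: a site exposes /.well-known/change-password, which redirects to its actual change-password page. A minimal sketch of constructing that URL for any site (example.com is a placeholder):

```python
from urllib.parse import urlsplit, urlunsplit

def change_password_url(site: str) -> str:
    """Build the well-known change-password URL for a site, per the
    W3C 'A Well-Known URL for Changing Passwords' proposal that
    password managers such as Safari's can use."""
    parts = urlsplit(site)
    # Keep only scheme and host; the well-known path is fixed.
    return urlunsplit((parts.scheme, parts.netloc,
                       "/.well-known/change-password", "", ""))

print(change_password_url("https://example.com/login"))
# → https://example.com/.well-known/change-password
```

A password manager only has to request this one predictable URL; sites that don’t implement the redirect simply return a 404, in which case Safari falls back to the site’s homepage.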
Removed support for Adobe Flash for improved security
Adobe Flash has been a thorn in security-minded users’ and cybersecurity professionals’ side for many years, as its vulnerabilities were often exploited by attackers.
Three years ago, browser makers announced that they would drop Flash support by the end of 2020, and the time for the move has now come: Adobe Flash will reach end-of-life on December 31, 2020.
Apple has fixed four WebKit vulnerabilities in Safari 14. All can be triggered by the browser processing maliciously crafted web content and three could lead to arbitrary code execution.
More information about and a PoC for the one discovered by Marcin “Icewall” Noga of Cisco Talos can be found here.
Browsing histories can be used to compile unique browsing profiles, which can be used to track users, Mozilla researchers have confirmed.
Many third parties are also pervasive enough to gather web histories sufficient to leverage browsing history as an identifier.
This is not the first time that researchers have demonstrated that browsing profiles are distinctive and stable enough to be used as identifiers.
Sarah Bird, Ilana Segall and Martin Lopatka were spurred to reproduce the results of a 2012 paper by Lukasz Olejnik, Claude Castelluccia, and Artur Janc using more refined data, and they’ve extended that work to detail the privacy risk posed by the aggregation of browsing histories.
The Mozillians collected browsing data from ~52,000 Firefox users for 7 calendar days, paused for 7 days, and then resumed for an additional 7 days. After analyzing the collected data, they identified 48,919 distinct browsing profiles, of which 99% are unique. (The original paper observed a set of ~400,000 web history profiles, of which 94% were unique.)
“High uniqueness holds even when histories are truncated to just 100 top sites. We then find that for users who visited 50 or more distinct domains in the two-week data collection period, ~50% can be reidentified using the top 10k sites. Reidentifiability rose to over 80% for users that browsed 150 or more distinct domains,” they noted.
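The uniqueness metric behind these numbers is simple to illustrate: a profile is “unique” if no other user in the pool has the same one. The toy profiles below are invented for illustration, not drawn from the Mozilla dataset:

```python
from collections import Counter

# Toy browsing profiles: each is the set of domains one user visited.
# (The Mozilla study analyzed ~52,000 real Firefox profiles; these
# hand-made ones only illustrate the uniqueness computation.)
profiles = [
    frozenset({"news.example", "weather.example"}),
    frozenset({"news.example", "weather.example"}),   # duplicate profile
    frozenset({"news.example", "forum.example"}),
    frozenset({"shop.example"}),
    frozenset({"shop.example", "bank.example", "mail.example"}),
]

counts = Counter(profiles)
unique = sum(1 for p in profiles if counts[p] == 1)
uniqueness = unique / len(profiles)
print(f"{unique}/{len(profiles)} profiles are unique ({uniqueness:.0%})")
# → 3/5 profiles are unique (60%)
```

With realistic history sizes (dozens to hundreds of distinct domains per user), collisions become vanishingly rare, which is why the study’s uniqueness figure reaches 99%.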
They also confirmed that browsing history profiles are stable over time – a second prerequisite for these profiles being repeatedly tied to specific users/consumers and used for online tracking.
“Our reidentifiability rates in a pool of 1,766 were below 10% for 100 sites despite a >90% profile uniqueness across datasets, but increased to ~80% when we consider 10,000 sites,” they added.
Finally, some corporate entities like Alphabet (Google) and Facebook are able to observe the web to an even greater extent than when the research for the 2012 paper was conducted, which may give them deep visibility into browsing activity that can be used for effective online tracking – even if users browse the internet on different devices.
Other recent research has shown that anonymizing browsing patterns/profiles through generalization does not sufficiently protect users’ anonymity.
Regulation is needed
Privacy researcher Lukasz Olejnik, one of the authors of the 2012 paper, noted that the findings of this newest research are a welcome confirmation that web browsing histories are personal data that can reveal insight about the user or be used to track users.
“In some ways, browsing history resemble biometric-like data due to their uniqueness and stability,” he commented, pointing out that, since this data allows individuals to be singled out of many, it automatically falls under the General Data Protection Regulation (GDPR).
“Web browsing histories are private data, and in certain contexts, they are personal data. Now the state of the art in research indicates this. Technology should follow. So too should the regulations and standards in the data processing. As well as enforcement,” he concluded.
Tracking of our browsing behavior is part of the daily routine of internet use. Companies use it to adapt ads to the personal needs of potential clients or to measure their reach. Many providers of tracking services advertise secure data protection, claiming to anonymize data by generalizing datasets.
Tracking services collect large amounts of data of internet users. These data include the websites accessed, but also information on the end devices used, the time of access (timestamp) or location information.
“As these data are highly sensitive and have a high personal reference, many companies use generalization to apparently anonymize them and to bypass data security regulations,” says Professor Thorsten Strufe, Head of the “Practical IT Security” Research Group of KIT.
By means of generalization, the level of detail of the information is reduced so that identification of individuals is supposed to be impossible. For example, location information is restricted to the region, the time of access is limited to the day, or the IP address is shortened by a few digits.
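A generalization step of the kind described above can be sketched in a few lines. The field names and record layout here are illustrative, not taken from the KIT/INFOnline dataset:

```python
def generalize(record: dict) -> dict:
    """Coarsen a tracking record the way the article describes:
    keep only the domain of the page, truncate the timestamp to the
    day, and zero out the last octet of the IP address.
    (Field names are invented for illustration.)"""
    return {
        "domain": record["url"].split("/")[0],   # full page -> domain only
        "day": record["timestamp"][:10],         # 'YYYY-MM-DD HH:MM' -> day
        "ip_prefix": ".".join(record["ip"].split(".")[:3]) + ".0",
    }

raw = {"url": "news.example/politics/article-42",
       "timestamp": "2020-09-18 14:37",
       "ip": "192.0.2.123"}
print(generalize(raw))
# → {'domain': 'news.example', 'day': '2020-09-18', 'ip_prefix': '192.0.2.0'}
```

The researchers’ point is that even after this coarsening, the *sequence* of such records per user (the click trace) often remains distinctive enough to identify the person.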
Strufe, together with his team and colleagues from TU Dresden (TUD), has now studied whether this method really prevents conclusions from being drawn about individuals.
With the help of a large volume of metadata from German websites, covering 66 million users and over 2 billion page views, the computer scientists succeeded in drawing conclusions not only about the websites accessed, but also about the chains of page views, the so-called click traces. The data were made available by INFOnline, an institution that measures online audience reach in Germany.
The course of page views is of high importance
“To test the effectiveness of generalization, we analyzed two application scenarios,” Strufe says. “First, we checked all click traces for uniqueness. If a click trace, that is the course of several successive page views, can be distinguished clearly from others, it is no longer anonymous.”
It was found that information on the website accessed and the browser used has to be removed completely from the data to prevent conclusions from being drawn about persons.
“The data will only become anonymous, when the sequences of single clicks are shortened, which means that they are stored without any context, or when all information, except for the timestamp, is removed,” Strufe says.
“Even if the domain, the allocation to a subject, such as politics or sports, and the time are stored on a daily basis only, 35 to 40 percent of the data can be assigned to individuals.” For this scenario, the researchers found that generalization does not correspond to the definition of anonymity.
A few observations are sufficient to identify user profiles
In addition, the researchers checked whether even subsets of a click trace allow conclusions to be drawn with respect to individuals.
“We linked the generalized information from the database to other observations, such as links shared on social media or in chats. If, for example, the time is generalized precisely to the minute, one observation is sufficient to clearly assign 20 percent of the click traces to a person,” says Clemens Deusser, doctoral researcher of Strufe’s team, who was largely involved in the study.
“Another two observations increase the success to more than 50 percent. Then, it is easily obvious from the database which other websites were accessed by the person and which contents were viewed.” Even if the timestamp is stored with the precision of a day, only five additional observations are needed to identify the person.
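The linkage attack the researchers describe can be sketched with invented data: an attacker who observes a few of a person’s page views from the outside (e.g. links shared on social media) filters the generalized database for traces containing all of those observations. User IDs, domains, and dates below are made up:

```python
# Generalized click traces per (hypothetical) user: sets of
# (domain, day) pairs, as a tracking database might store them.
traces = {
    "u1": {("news.example", "2020-09-18"), ("shop.example", "2020-09-18")},
    "u2": {("news.example", "2020-09-18"), ("bank.example", "2020-09-19")},
    "u3": {("shop.example", "2020-09-18"), ("bank.example", "2020-09-19")},
}

def candidates(observations: set) -> set:
    """Users whose trace contains every outside observation."""
    return {u for u, t in traces.items() if observations <= t}

# One observation still matches two users ...
print(sorted(candidates({("news.example", "2020-09-18")})))
# → ['u1', 'u2']
# ... a second observation pins the trace to a single person.
print(sorted(candidates({("news.example", "2020-09-18"),
                         ("bank.example", "2020-09-19")})))
# → ['u2']
```

Each additional observation shrinks the candidate set, which is why the study finds that a handful of observations suffice even when timestamps are generalized to the day.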
“Our results suggest that simple generalization is not suited for effectively anonymizing web tracking data. The data remain sharp to the person and anonymization is ineffective. To reach effective data protection, methods extending far beyond have to be applied, such as noise by the random insertion of minor misobservations into the data,” Strufe recommends.
Windows 10 users who upgrade to v2004 will finally be able to switch on a longstanding Windows Defender feature that protects users against potentially unwanted applications (PUAs).
What are PUAs?
Also called PUPs (potentially unwanted programs), PUAs are applications that often cannot be outright classified as malware, but still violate users’ security and privacy interests.
Some examples of PUAs:
- Adware and ad-injectors (software that pushes ads onto users without their permission)
- Software that tracks how users browse the internet (the goal is to sell that information to advertisers)
- Software that pushes premium (paid) services on users and/or saddles them with such services
- Software that installs a root certificate/a proxy server on a user’s device to monitor web traffic passing through it
- Browser hijacking software (e.g., software that modifies users’ browser homepage and search page, steals cookies and hijacks their connections, and performs actions without their knowledge/consent), etc.
Reputation-based Windows protection against PUAs
Windows 10 v2004 (May 2020 Update), which is expected to be available for download later this month, will allow users to block the download and/or opening of potentially unwanted apps by simply switching on a control, which is available via the Windows Start menu:
Start > Settings > Update & Security > Windows Security > App & browser control > Reputation-based protection settings
The Block downloads option will work only for the Microsoft Edge browser, but Block apps will detect already downloaded and installed PUAs, no matter which browser the user uses.
The ability to block PUA downloads was already available to Edge users.
Also, Windows Defender Antivirus has been able to detect and block PUAs for a while now, but only enterprise admins could enable that protection, through Microsoft Intune, Microsoft Endpoint Configuration Manager, Group Policy, or PowerShell cmdlets.