Firewalls aren’t just for corporate networks. Large numbers of security- or privacy-conscious people also use them to filter or redirect traffic flowing in and out of their computers. Apple recently made a major change to macOS that frustrates these efforts.
Beginning with macOS Catalina, released last year, Apple added a list of 50 Apple-specific apps and processes that were to be exempted from firewalls like Little Snitch and Lulu. The undocumented exemption, which didn’t take effect until firewalls were rewritten to implement changes in Big Sur, first came to light in October. Patrick Wardle, a security researcher at Mac and iOS enterprise developer Jamf, further documented the new behavior over the weekend.
In Big Sur Apple decided to exempt many of its apps from being routed thru the frameworks they now require 3rd-party firewalls to use (LuLu, Little Snitch, etc.) 🧐
Q: Could this be (ab)used by malware to also bypass such firewalls? 🤔
A: Apparently yes, and trivially so 😬😱😭 pic.twitter.com/CCNcnGPFIB
— patrick wardle (@patrickwardle) November 14, 2020
To demonstrate the risks that come with this move, Wardle—a former hacker for the NSA—showed how malware developers could exploit the change to make an end run around a tried-and-true security measure. He set Lulu and Little Snitch to block all outgoing traffic on a Mac running Big Sur and then ran a short Python script that had exploit code interact with one of the apps that Apple exempted. The script had no trouble reaching a command-and-control server he set up to simulate one commonly used by malware to exfiltrate sensitive data.
“It kindly asked (coerced?) one of the trusted Apple items to generate network traffic to an attacker-controlled server and could (ab)use this to exfiltrate files,” Wardle, referring to the script, told me. “Basically, ‘Hey, Mr. Apple Item, can you please send this file to Patrick’s remote server?’ And it would kindly agree. And since the traffic was coming from the trusted item, it would never be routed through the firewall… meaning the firewall is 100% blind.”
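The mechanics of the bypass can be modeled in a few lines. The sketch below is a simplified, hypothetical model of a content filter with a vendor exclusion list—not Apple’s actual Network Extension implementation—showing why traffic attributed to an excluded process never reaches the third-party filter’s rules:

```python
# Simplified, hypothetical model of a content filter with a vendor
# exclusion list; not Apple's actual Network Extension logic.

EXCLUSION_LIST = {"com.apple.appstored", "com.apple.maps"}  # example entries

def filter_verdict(process_id: str, rules: dict) -> str:
    """Return 'allow' or 'block' for outbound traffic from process_id."""
    if process_id in EXCLUSION_LIST:
        # Excluded traffic is never handed to the third-party filter,
        # so even a "block everything" rule set is ignored.
        return "allow"
    return rules.get(process_id, rules.get("*", "allow"))

block_all = {"*": "block"}
assert filter_verdict("com.example.malware", block_all) == "block"
# Malware that proxies its traffic through an excluded item sails through:
assert filter_verdict("com.apple.appstored", block_all) == "allow"
```

The model makes Wardle’s point concrete: the firewall’s verdict function is simply never consulted for exempt processes, which is what makes it “100% blind” to traffic laundered through them.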
Wardle tweeted a portion of a bug report he submitted to Apple during the Big Sur beta phase. It specifically warns that “essential security tools such as firewalls are ineffective” under the change.
Apple has yet to explain the reason behind the change. Firewall misconfigurations are often the source of software not working properly. One possibility is that Apple implemented the move to reduce the number of support requests it receives and make the Mac experience better for people not schooled in setting up effective firewall rules. It’s not unusual for firewalls to exempt their own traffic. Apple may be applying the same rationale.
But the inability to override the settings violates a core tenet that people ought to be able to selectively restrict traffic flowing from their own computers. In the event that a Mac does become infected, the change also gives hackers a way to bypass what for many is an effective mitigation against such attacks.
“The issue I see is that it opens the door for doing exactly what Patrick demoed… malware authors can use this to sneak data around a firewall,” Thomas Reed, director of Mac and mobile offerings at security firm Malwarebytes, said. “Plus, there’s always the potential that someone may have a legitimate need to block some Apple traffic for some reason, but this takes away that ability without using some kind of hardware network filter outside the Mac.”
People who want to know which apps and processes are exempt can open the macOS Terminal and enter:

sudo defaults read /System/Library/Frameworks/NetworkExtension.framework/Resources/Info.plist ContentFilterExclusionList
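The same list can also be read programmatically. The sketch below uses Python’s standard plistlib; the in-memory sample stands in for the real Info.plist, and the entries shown are illustrative, not the full list of 50:

```python
import plistlib

def read_exclusion_list(plist_bytes: bytes) -> list:
    """Extract the ContentFilterExclusionList entries from an Info.plist."""
    data = plistlib.loads(plist_bytes)
    return data.get("ContentFilterExclusionList", [])

# On macOS you would read the real file instead:
#   with open("/System/Library/Frameworks/NetworkExtension.framework/"
#             "Resources/Info.plist", "rb") as f:
#       plist_bytes = f.read()
sample = plistlib.dumps(
    {"ContentFilterExclusionList": ["com.apple.appstored", "/usr/libexec/trustd"]}
)
print(read_exclusion_list(sample))
```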
The change came as Apple deprecated macOS kernel extensions, which software developers used to make apps interact directly with the OS. The deprecation included NKEs—short for network kernel extensions—that third-party firewall products used to monitor incoming and outgoing traffic.
In place of NKEs, Apple introduced a new user-mode framework called the Network Extension Framework. To run on Big Sur, all third-party firewalls that used NKEs had to be rewritten to use the new framework.
Apple representatives didn’t respond to emailed questions about this change. This post will be updated if they respond later. In the meantime, people who want to override the new exemption will have to find alternatives. As Reed noted above, one option is to rely on a network filter that runs outside their Mac. Another is to rely on PF, the Packet Filter firewall built into macOS.
Zoom has agreed to upgrade its security practices in a tentative settlement with the Federal Trade Commission, which alleges that Zoom lied to users for years by claiming it offered end-to-end encryption.
“[S]ince at least 2016, Zoom misled users by touting that it offered ‘end-to-end, 256-bit encryption’ to secure users’ communications, when in fact it provided a lower level of security,” the FTC said today in the announcement of its complaint against Zoom and the tentative settlement. Despite promising end-to-end encryption, the FTC said that “Zoom maintained the cryptographic keys that could allow Zoom to access the content of its customers’ meetings, and secured its Zoom Meetings, in part, with a lower level of encryption than promised.”
The FTC complaint says that Zoom claimed it offers end-to-end encryption in its June 2016 and July 2017 HIPAA compliance guides, which were intended for health-care industry users of the video conferencing service. Zoom also claimed it offered end-to-end encryption in a January 2019 white paper, in an April 2017 blog post, and in direct responses to inquiries from customers and potential customers, the complaint said.
“In fact, Zoom did not provide end-to-end encryption for any Zoom Meeting that was conducted outside of Zoom’s ‘Connecter’ product (which are hosted on a customer’s own servers), because Zoom’s servers—including some located in China—maintain the cryptographic keys that would allow Zoom to access the content of its customers’ Zoom Meetings,” the FTC complaint said.
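The distinction the FTC draws is structural, not about cipher strength. In true end-to-end encryption, only the endpoints hold the keys; in the scheme the complaint describes, the relay server holds them too. The toy model below (a stand-in XOR “cipher,” emphatically not real cryptography) illustrates why server-held keys defeat an end-to-end claim regardless of the advertised key length:

```python
import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy stand-in for a real cipher; XOR is NOT secure encryption."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = secrets.token_bytes(16)

# End-to-end: only the two endpoints hold `key`; a relay sees ciphertext.
ciphertext = xor_cipher(b"meeting notes", key)

# The scheme the FTC describes: the server also keeps a copy of the key,
# so it can read any "encrypted" traffic it relays.
server_key_copy = key
assert xor_cipher(ciphertext, server_key_copy) == b"meeting notes"
```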
The FTC announcement said that Zoom also “misled some users who wanted to store recorded meetings on the company’s cloud storage by falsely claiming that those meetings were encrypted immediately after the meeting ended. Instead, some recordings allegedly were stored unencrypted for up to 60 days on Zoom’s servers before being transferred to its secure cloud storage.”
To settle the allegations, “Zoom has agreed to a requirement to establish and implement a comprehensive security program, a prohibition on privacy and security misrepresentations, and other detailed and specific relief to protect its user base, which has skyrocketed from 10 million in December 2019 to 300 million in April 2020 during the COVID-19 pandemic,” the FTC said. (The 10 million and 300 million figures refer to the number of daily participants in Zoom meetings.)
No compensation for affected users
The settlement is supported by the FTC’s Republican majority, but Democrats on the commission objected because the agreement doesn’t provide compensation to users.
“Today, the Federal Trade Commission has voted to propose a settlement with Zoom that follows an unfortunate FTC formula,” FTC Democratic Commissioner Rohit Chopra said. “The settlement provides no help for affected users. It does nothing for small businesses that relied on Zoom’s data protection claims. And it does not require Zoom to pay a dime. The Commission must change course.”
Under the settlement, “Zoom is not required to offer redress, refunds, or even notice to its customers that material claims regarding the security of its services were false,” Democratic Commissioner Rebecca Kelly Slaughter said. “This failure of the proposed settlement does a disservice to Zoom’s customers, and substantially limits the deterrence value of the case.” While the settlement imposes security obligations, Slaughter said it includes no requirements that directly protect user privacy.
The Zoom/FTC settlement doesn’t actually mandate end-to-end encryption, but Zoom last month announced it is rolling out end-to-end encryption in a technical preview to get feedback from users. The settlement does require Zoom to implement measures “(a) requiring Users to secure their accounts with strong, unique passwords; (b) using automated tools to identify non-human login attempts; (c) rate-limiting login attempts to minimize the risk of a brute force attack; and (d) implementing password resets for known compromised Credentials.”
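Item (c)—rate-limiting login attempts—is a standard brute-force mitigation. A minimal sketch of the idea follows; the window and threshold are hypothetical parameters, not values from the settlement or Zoom’s implementation:

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 300   # hypothetical: 5-minute sliding window
MAX_ATTEMPTS = 5       # hypothetical: lockout threshold

_attempts = defaultdict(deque)  # username -> timestamps of recent attempts

def allow_login_attempt(username: str, now: float = None) -> bool:
    """Refuse further attempts once a user exceeds MAX_ATTEMPTS in the window."""
    now = time.time() if now is None else now
    recent = _attempts[username]
    while recent and now - recent[0] > WINDOW_SECONDS:
        recent.popleft()               # drop attempts outside the window
    if len(recent) >= MAX_ATTEMPTS:
        return False                   # rate limit: refuse the attempt
    recent.append(now)
    return True

for _ in range(5):
    assert allow_login_attempt("alice", now=1000.0)
assert not allow_login_attempt("alice", now=1001.0)    # sixth attempt blocked
assert allow_login_attempt("alice", now=1000.0 + 400)  # window has passed
```

A production version would also count only failed attempts and key the limit on IP address as well as username, but the sliding-window structure is the same.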
FTC calls ZoomOpener unfair and deceptive
The FTC complaint and settlement also cover Zoom’s controversial deployment of the ZoomOpener Web server that bypassed Apple security protocols on Mac computers. Zoom “secretly installed” the software as part of an update to Zoom for Mac in July 2018, the FTC said.
“The ZoomOpener Web server allowed Zoom to automatically launch and join a user to a meeting by bypassing an Apple Safari browser safeguard that protected users from a common type of malware,” the FTC said. “Without the ZoomOpener Web server, the Safari browser would have provided users with a warning box, prior to launching the Zoom app, that asked users if they wanted to launch the app.”
The software “increased users’ risk of remote video surveillance by strangers” and “remained on users’ computers even after they deleted the Zoom app, and would automatically reinstall the Zoom app—without any user action—in certain circumstances,” the FTC said. The FTC alleged that Zoom’s deployment of the software without adequate notice or user consent violated US law banning unfair and deceptive business practices.
Amid controversy in July 2019, Zoom issued an update to completely remove the Web server from its Mac application, as we reported at the time.
Zoom agrees to security monitoring
The proposed settlement is subject to public comment for 30 days, after which the FTC will vote on whether to make it final. The 30-day comment period will begin once the settlement is published in the Federal Register. The FTC case and the relevant documents are available on the FTC’s website.
The FTC announcement said Zoom agreed to take the following steps:
- Assess and document on an annual basis any potential internal and external security risks and develop ways to safeguard against such risks;
- Implement a vulnerability management program; and
- Deploy safeguards such as multi-factor authentication to protect against unauthorized access to its network; institute data deletion controls; and take steps to prevent the use of known compromised user credentials.
The data deletion part of the settlement requires that all copies of data identified for deletion be deleted within 31 days.
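A retention rule like that reduces to a simple check: any record flagged for deletion more than 31 days ago is out of compliance. A hedged sketch, with hypothetical field names:

```python
from datetime import datetime, timedelta

DELETION_DEADLINE = timedelta(days=31)  # per the settlement terms

def overdue_deletions(records: list, now: datetime) -> list:
    """Return records flagged for deletion more than 31 days before `now`."""
    return [r for r in records
            if now - r["flagged_for_deletion_at"] > DELETION_DEADLINE]

now = datetime(2020, 12, 1)
records = [
    {"id": 1, "flagged_for_deletion_at": datetime(2020, 11, 25)},  # 6 days: OK
    {"id": 2, "flagged_for_deletion_at": datetime(2020, 10, 1)},   # 61 days: overdue
]
print([r["id"] for r in overdue_deletions(records, now)])  # -> [2]
```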
Zoom will have to notify the FTC of any data breaches and will be prohibited “from making misrepresentations about its privacy and security practices, including about how it collects, uses, maintains, or discloses personal information; its security features; and the extent to which users can control the privacy or security of their personal information,” the FTC announcement said.
Zoom will have to review all software updates for security flaws and make sure that updates don’t hamper third-party security features. The company will also have to get third-party assessments of its security program once the settlement is finalized and once every two years after that. That requirement lasts for 20 years.
Zoom issued the following statement about today’s settlement:
The security of our users is a top priority for Zoom. We take seriously the trust our users place in us every day, particularly as they rely on us to keep them connected through this unprecedented global crisis, and we continuously improve our security and privacy programs. We are proud of the advancements we have made to our platform, and we have already addressed the issues identified by the FTC. Today’s resolution with the FTC is in keeping with our commitment to innovating and enhancing our product as we deliver a secure video communications experience.
A popular smartwatch designed exclusively for children contains an undocumented backdoor that makes it possible for someone to remotely capture camera snapshots, wiretap voice calls, and track locations in real time, a researcher said.
The X4 smartwatch is marketed by Xplora, a Norway-based seller of children’s watches. The device, which sells for about $200, runs on Android and offers a range of capabilities, including the ability to make and receive voice calls to parent-approved numbers and to send an SOS broadcast that alerts emergency contacts to the location of the watch. A separate app that runs on the smartphones of parents allows them to control how the watches are used and receive warnings when a child has strayed beyond a preset geographic boundary.
But that’s not all
It turns out that the X4 contains something else: a backdoor that went undiscovered until some impressive digital sleuthing. The backdoor is activated by sending an encrypted text message. Harrison Sand, a researcher at Norwegian security company Mnemonic, said that commands exist for surreptitiously reporting the watch’s real-time location, taking a snapshot and sending it to an Xplora server, and making a phone call that transmits all sounds within earshot.
Sand also found that 19 of the apps that come pre-installed on the watch are developed by Qihoo 360, a security company and app maker located in China. A Qihoo 360 subsidiary, 360 Kids Guard, also jointly designed the X4 with Xplora and manufactures the watch hardware.
“I wouldn’t want that kind of functionality in a device produced by a company like that,” Sand said, referring to the backdoor and Qihoo 360.
In June, Qihoo 360 was placed on a US Commerce Department sanctions list. The rationale: ties to the Chinese government made the company likely to engage in “activities contrary to the national security or foreign policy interests of the United States.” Qihoo 360 declined to comment for this post.
Patch on the way
The existence of an undocumented backdoor in a watch made in a country with a known record of espionage hacks is concerning. At the same time, this particular backdoor has limited applicability. To make use of the functions, someone would need to know both the phone number assigned to the watch (it has a slot for a SIM card from a mobile phone carrier) and the unique encryption key hardwired into each device.
In a statement, Xplora said obtaining both the key and phone number for a given watch would be difficult. The company also said that even if the backdoor was activated, obtaining any collected data would be hard, too. The statement read:
We want to thank you for bringing a potential risk to our attention. Mnemonic is not providing any information beyond that they sent you the report. We take any potential security flaw extremely seriously.
It is important to note that the scenario the researchers created requires physical access to the X4 watch and specialized tools to secure the watch’s encryption key. It also requires the watch’s private phone number. The phone number for every Xplora watch is determined when it is activated by the parents with a carrier, so no one involved in the manufacturing process would have access to it to duplicate the scenario the researchers created.
As the researchers made clear, even if someone with physical access to the watch and the skill to send an encrypted SMS activates this potential flaw, the snapshot photo is only uploaded to Xplora’s server in Germany and is not accessible to third parties. The server is located in a highly-secure Amazon Web Services environment.
Only two Xplora employees have access to the secure database where customer information is stored and all access to that database is tracked and logged.
This issue the testers identified was based on a remote snapshot feature included in initial internal prototype watches for a potential feature that could be activated by parents after a child pushes an SOS emergency button. We removed the functionality for all commercial models due to privacy concerns. The researcher found some of the code was not completely eliminated from the firmware.
Since being alerted, we have developed a patch for the Xplora 4, which is not available for sale in the US, to address the issue and will push it out prior to 8:00 a.m. CET on October 9. We conducted an extensive audit since we were notified and have found no evidence of the security flaw being used outside of the Mnemonic testing.
An Xplora spokesman said the company has sold about 100,000 X4 smartwatches to date. The company is in the process of rolling out the X5. It’s not yet clear whether it contains similar backdoor functionality.
Sand discovered the backdoor through some impressive reverse engineering. He started with a modified USB cable that he soldered onto pins exposed on the back of the watch. Using an interface for updating the device firmware, he was able to download the existing firmware off the watch. This allowed him to inspect the insides of the watch, including the apps and other various code packages that were installed.
One package that stood out was titled “Persistent Connection Service.” It starts as soon as the device is turned on and iterates through all the installed applications. As it queries each application, it builds a list of intents—or messaging frameworks—it can call to communicate with each app.
Sand’s suspicions were further aroused when he found intents with the following names:
After more poking around, Sand figured out the intents were activated using SMS text messages that were encrypted with the hardwired key. System logs showed him that the key was stored on a flash chip, so he dumped the contents and obtained it—“#hml;Fy/sQ9z5MDI=$” (quotation marks not included). Reverse engineering also allowed the researcher to figure out the syntax required to activate the remote snapshot function.
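The dispatch pattern Sand describes—an incoming SMS authenticated with a device key, then routed to a privileged function—can be sketched as follows. This is a hypothetical reconstruction using an HMAC check; the watch’s actual message format and cipher are not public, and the handler names are invented for illustration:

```python
import hmac
import hashlib

DEVICE_KEY = b"#hml;Fy/sQ9z5MDI=$"  # the key Sand dumped from flash

HANDLERS = {  # hypothetical intent names
    "REMOTE_SNAPSHOT": lambda: "photo uploaded",
    "REPORT_LOCATION": lambda: "location sent",
}

def handle_sms(command: str, tag: bytes) -> str:
    """Run a privileged handler only if the SMS carries a valid MAC."""
    expected = hmac.new(DEVICE_KEY, command.encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        return "ignored"           # unauthenticated SMS is dropped silently
    return HANDLERS.get(command, lambda: "unknown")()

good_tag = hmac.new(DEVICE_KEY, b"REMOTE_SNAPSHOT", hashlib.sha256).digest()
print(handle_sms("REMOTE_SNAPSHOT", good_tag))   # -> photo uploaded
print(handle_sms("REMOTE_SNAPSHOT", b"x" * 32))  # -> ignored
```

The sketch also shows why the scheme hinges entirely on the key staying secret: anyone who extracts it, as Sand did, can construct a valid trigger message.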
“Sending the SMS triggered a picture to be taken on the watch, and it was immediately uploaded to Xplora’s server,” Sand wrote. “There was zero indication on the watch that a photo was taken. The screen remained off the entire time.”
Sand said he didn’t activate the functions for wiretapping or reporting locations, but with additional time, he said, he’s confident he could have.
As both Sand and Xplora note, exploiting this backdoor would be difficult, since it requires knowledge of both the unique factory-set encryption key and the phone number assigned to the watch. For that reason, there’s no reason for people who own a vulnerable device to panic.
Still, it’s not beyond the realm of possibility that the key could be obtained by someone with ties to the manufacturer. And while phone numbers aren’t usually published, they’re not exactly private, either.
The backdoor underscores the kinds of risks posed by the increasing number of everyday devices that run on firmware that can’t be independently inspected without the kinds of heroic measures employed by Sand. While the chances of this particular backdoor being used are low, people who own an X4 would do well to ensure their device installs the patch as soon as practical.
A recently released tool is letting anyone exploit an unusual Mac vulnerability to bypass Apple’s trusted T2 security chip and gain deep system access. The flaw is one researchers have also been using for more than a year to jailbreak older models of iPhones. But the fact that the T2 chip is vulnerable in the same way creates a new host of potential threats. Worst of all, while Apple may be able to slow down potential hackers, the flaw is ultimately unfixable in every Mac that has a T2 inside.
In general, the jailbreak community hasn’t paid as much attention to macOS and OS X as it has iOS, because they don’t have the same restrictions and walled gardens that are built into Apple’s mobile ecosystem. But the T2 chip, launched in 2017, created some limitations and mysteries. Apple added the chip as a trusted mechanism for securing high-value features like encrypted data storage, Touch ID, and Activation Lock, which works with Apple’s “Find My” services. But the T2 also contains a vulnerability, known as Checkm8, that jailbreakers have already been exploiting in Apple’s A5 through A11 (2011 to 2017) mobile chipsets. Now Checkra1n, the same group that developed the tool for iOS, has released support for T2 bypass.
On Macs, the jailbreak allows researchers to probe the T2 chip and explore its security features. It can even be used to run Linux on the T2 or play Doom on a MacBook Pro’s Touch Bar. The jailbreak could also be weaponized by malicious hackers, though, to disable macOS security features like System Integrity Protection and Secure Boot and install malware. Combined with another T2 vulnerability that was publicly disclosed in July by the Chinese security research and jailbreaking group Pangu Team, the jailbreak could also potentially be used to obtain FileVault encryption keys and to decrypt user data. The vulnerability is unpatchable, because the flaw is in low-level, unchangeable code for hardware.
“The T2 is meant to be this little secure black box in Macs—a computer inside your computer, handling things like Lost Mode enforcement, integrity checking, and other privileged duties,” says Will Strafach, a longtime iOS researcher and creator of the Guardian Firewall app for iOS. “So the significance is that this chip was supposed to be harder to compromise—but now it’s been done.”
Apple did not respond to WIRED’s requests for comment.
There are a few important limitations of the jailbreak, though, that keep this from being a full-blown security crisis. The first is that an attacker would need physical access to target devices in order to exploit them. The tool can only run off of another device over USB. This means hackers can’t remotely mass-infect every Mac that has a T2 chip. An attacker could jailbreak a target device and then disappear, but the compromise isn’t “persistent”; it ends when the T2 chip is rebooted. The Checkra1n researchers do caution, though, that the T2 chip itself doesn’t reboot every time the device does. To be certain that a Mac hasn’t been compromised by the jailbreak, the T2 chip must be fully restored to Apple’s defaults. Finally, the jailbreak doesn’t give an attacker instant access to a target’s encrypted data. It could allow hackers to install keyloggers or other malware that could later grab the decryption keys, or it could make it easier to brute-force them, but Checkra1n isn’t a silver bullet.
“There are plenty of other vulnerabilities, including remote ones that undoubtedly have more impact on security,” a Checkra1n team member tweeted on Tuesday.
In a discussion with WIRED, the Checkra1n researchers added that they see the jailbreak as a necessary tool for transparency about T2. “It’s a unique chip, and it has differences from iPhones, so having open access is useful to understand it at a deeper level,” a group member said. “It was a complete black box before, and we are now able to look into it and figure out how it works for security research.”
The exploit also comes as little surprise; it’s been apparent since the original Checkm8 discovery last year that the T2 chip was also vulnerable in the same way. And researchers point out that while the T2 chip debuted in 2017 in top-tier iMacs, it only recently rolled out across the entire Mac line. Older Macs with a T1 chip are unaffected. Still, the finding is significant because it undermines a crucial security feature of newer Macs.
Jailbreaking has long been a gray area because of this tension. It gives users freedom to install and modify whatever they want on their devices, but it is achieved by exploiting vulnerabilities in Apple’s code. Hobbyists and researchers use jailbreaks in constructive ways, including to conduct more security testing and potentially help Apple fix more bugs, but there’s always the chance that attackers could weaponize jailbreaks for harm.
“I had already assumed that since T2 was vulnerable to Checkm8, it was toast,” says Patrick Wardle, an Apple security researcher at the enterprise management firm Jamf and a former NSA researcher. “There really isn’t much that Apple can do to fix it. It’s not the end of the world, but this chip, which was supposed to provide all this extra security, is now pretty much moot.”
Wardle points out that for companies that manage their devices using Apple’s Activation Lock and Find My features, the jailbreak could be particularly problematic both in terms of possible device theft and other insider threats. And he notes that the jailbreak tool could be a valuable jumping off point for attackers looking to take a shortcut to developing potentially powerful attacks. “You likely could weaponize this and create a lovely in-memory implant that, by design, disappears on reboot,” he says. This means that the malware would run without leaving a trace on the hard drive and would be difficult for victims to track down.
The situation raises much deeper issues, though, with the basic approach of using a special, trusted chip to secure other processes. Beyond Apple’s T2, numerous other tech vendors have tried this approach and had their secure enclaves defeated, including Intel, Cisco, and Samsung.
“Building in hardware ‘security’ mechanisms is just always a double-edged sword,” says Ang Cui, founder of the embedded device security firm Red Balloon. “If an attacker is able to own the secure hardware mechanism, the defender usually loses more than they would have if they had built no hardware. It’s a smart design in theory, but in the real world it usually backfires.”
In this case, you’d likely have to be a very high-value target to register any real alarm. But hardware-based security measures do create a single point of failure that the most important data and systems rely on. Even if the Checkra1n jailbreak doesn’t provide unlimited access for attackers, it gives them more than anyone would want.
This story originally appeared on wired.com.
The hackers behind this month’s epic Twitter breach targeted a small number of employees through a “phone spear phishing attack,” the social media site said on Thursday night. When the pilfered employee credentials failed to give access to account support tools, the hackers targeted additional workers who had the permissions needed to access the tools.
“This attack relied on a significant and concerted attempt to mislead certain employees and exploit human vulnerabilities to gain access to our internal systems,” Twitter officials wrote in a post. “This was a striking reminder of how important each person on our team is in protecting our service. We take that responsibility seriously and everyone at Twitter is committed to keeping your information safe.”
Thursday’s update also disclosed that the hackers downloaded personal data from seven of the accounts, but didn’t say which ones.
The post was the latest update in the investigation into the July 15 hack that hijacked accounts belonging to some of the world’s best-known celebrities, politicians, and executives and caused them to tweet links to Bitcoin scams. A small sampling of the account holders included former Vice President Joe Biden, philanthropist and Microsoft founder and former CEO Bill Gates, Tesla founder Elon Musk, and pop star Kanye West.
It took hours for Twitter to return control of the accounts to their rightful owners. In some cases, the hackers regained control of accounts even after they had been recovered, resulting in a tug of war between the intruders and company employees.
Hours after containing the breach, Twitter said the incident was the result of it losing control of its internal administrative systems to hackers who either paid, tricked, or coerced one or more company employees. Company officials have provided regular updates since then. The most recent one came last week, when Twitter said the hackers used their access to read private messages from 36 hijacked accounts and that phone numbers and other private information from 130 affected users were viewable.
Free rein for employees
Critics said the incident showed that Twitter hasn’t implemented proper controls to prevent sensitive user information from falling into the hands of company insiders or people who target them. Twitter has vowed to investigate how the outsiders gained access to sensitive internal systems and take steps to prevent similar attacks in the future.
Thursday’s update provided more color about how internal systems and account tools work. It said:
A successful attack required the attackers to obtain access to both our internal network as well as specific employee credentials that granted them access to our internal support tools. Not all of the employees that were initially targeted had permissions to use account management tools, but the attackers used their credentials to access our internal systems and gain information about our processes. This knowledge then enabled them to target additional employees who did have access to our account support tools. Using the credentials of employees with access to these tools, the attackers targeted 130 Twitter accounts, ultimately Tweeting from 45, accessing the DM inbox of 36, and downloading the Twitter Data of 7.
The update said that since the attack, the company has “significantly” limited employees’ access to internal tools and systems while the investigation continues. The restrictions are primarily affecting a feature that lets users download their Twitter data, but other services will also be temporarily limited.
“We will be slower to respond to account support needs, reported Tweets, and applications to our developer platform,” the update said. “We’re sorry for any delays this causes, but we believe it’s a necessary precaution as we make durable changes to our processes and tooling as a result of this incident. We will gradually resume our normal response times when we’re confident it’s safe to do so. Thank you for your patience as we work through this.”
Thursday night’s post also said that the company is accelerating unspecified and “pre-existing security workstreams and improvements to our tools” and prioritizing security work across various teams. Twitter is also improving ways to detect and prevent “inappropriate” access to internal systems.
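Detecting “inappropriate” access typically means comparing each internal-tool invocation against the employee’s assigned role. The sketch below is a minimal, hypothetical version of such an audit check—role and tool names are invented for illustration and say nothing about Twitter’s actual systems:

```python
# Hypothetical role-based audit of internal tool access; not Twitter's system.
ROLE_PERMISSIONS = {
    "support_tier2": {"account_tools", "reported_tweets"},
    "engineer": {"deploy_tools"},
}

def audit_access(log_entries: list) -> list:
    """Return entries where an employee used a tool their role doesn't grant."""
    return [e for e in log_entries
            if e["tool"] not in ROLE_PERMISSIONS.get(e["role"], set())]

log = [
    {"user": "a", "role": "engineer", "tool": "deploy_tools"},
    {"user": "b", "role": "engineer", "tool": "account_tools"},  # flagged
]
print([e["user"] for e in audit_access(log)])  # -> ['b']
```

The harder problem, as the July incident showed, is that a check like this is useless when the credentials themselves belong to someone who legitimately holds the role—which is why Twitter also restricted how many employees hold it at all.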
The Android version of DJI Go 4—an app that lets users control drones—has until recently been covertly collecting sensitive user data and can download and execute code of the developers’ choice, researchers said in two reports that question the security and trustworthiness of a program with more than 1 million Google Play downloads.
The app is used to control and collect near real-time video and flight data from drones made by China-based DJI, the world’s biggest maker of commercial drones. The Play Store shows that it has more than 1 million downloads, but because of the way Google discloses numbers, the true number could be as high as 5 million. The app has a rating of three-and-a-half stars out of a possible total of five from more than 52,000 users.
Wide array of sensitive user data
Two weeks ago, security firm Synacktiv reverse-engineered the app. On Thursday, fellow security firm Grimm published the results of its own independent analysis. At a minimum, both found that the app skirted Google terms and that, until recently, the app covertly collected a wide array of sensitive user data and sent it to servers located in mainland China. A worst-case scenario is that developers are abusing hard-to-identify features to spy on users.
According to the reports, the suspicious behaviors include:
- The ability to download and install any application of the developers’ choice through either a self-update feature or a dedicated installer in a software development kit provided by China-based social media platform Weibo. Both features could download code outside of Play, in violation of Google’s terms.
- A recently removed component that collected a wealth of phone data including IMEI, IMSI, carrier name, SIM serial number, SD card information, OS language, kernel version, screen size and brightness, wireless network name, address and MAC, and Bluetooth addresses. These details and more were sent to MobTech, maker of a software development kit used until the most recent release of the app.
- Automatic restarts whenever a user swiped the app to close it. The restarts cause the app to run in the background and continue to make network requests.
- Advanced obfuscation techniques that make third-party analysis of the app time-consuming.
This month’s reports come three years after the US Army banned the use of DJI drones for reasons that remain classified. In January, the Interior Department grounded drones from DJI and other Chinese manufacturers out of concerns data could be sent back to the mainland.
DJI officials said the researchers found “hypothetical vulnerabilities” and that neither report provided any evidence that they were ever exploited.
“The app update function described in these reports serves the very important safety goal of mitigating the use of hacked apps that seek to override our geofencing or altitude limitation features,” they wrote in a statement. A geofence is a virtual barrier that the Federal Aviation Administration or other authorities bar drones from crossing. Drones use GPS, Bluetooth, and other technologies to enforce the restrictions.
A Google spokesman said the company is looking into the reports. The researchers said the iOS version of the app contained no obfuscation or update mechanisms.
Obfuscated, acquisitive, and always on
In several respects, the researchers said, DJI Go 4 for Android mimicked the behavior of botnets and malware. Both the self-update and auto-install components, for instance, call a developer-designated server and await commands to download and install code or apps. The obfuscation techniques closely resembled those used by malware to prevent researchers from discovering its true purpose. Other similarities were an always-on status and the collection of sensitive data that wasn’t relevant or necessary for the stated purpose of flying drones.
Making the behavior more concerning is the breadth of permissions required to use the app, which include access to contacts, microphone, camera, location, storage, and the ability to change network connectivity. Such sprawling permissions meant that the servers of DJI or Weibo, both located in a country known for its government-sponsored espionage hacking, had almost full control over users’ devices, the researchers said.
Both research teams said they saw no evidence the app installer was ever actually used, but they did see the automatic update mechanism trigger and download a new version from the DJI server and install it. The download URLs for both features are dynamically generated, meaning they are provided by a remote server and can be changed at any time.
The researchers from both firms conducted experiments that showed how both mechanisms could be used to install arbitrary apps. While the programs were delivered automatically, the researchers still had to click their approval before the programs could be installed.
Both research reports stopped short of saying the app actually targeted individuals, and both noted that the collection of IMSIs and other data had ended with the release of current version 4.3.36. The teams, however, didn’t rule out the possibility of nefarious uses. Grimm researchers wrote:
In the best case scenario, these features are only used to install legitimate versions of applications that may be of interest to the user, such as suggesting additional DJI or Weibo applications. In this case, the much more common technique is to display the additional application in the Google Play Store app by linking to it from within your application. Then, if the user chooses to, they can install the application directly from the Google Play Store. Similarly, the self-updating components may only be used to provide users with the most up-to-date version of the application. However, this can be more easily accomplished through the Google Play Store.
In the worst case, these features can be used to target specific users with malicious updates or applications that could be used to exploit the user’s phone. Given the amount of user’s information retrieved from their device, DJI or Weibo would easily be able to identify specific targets of interest. The next step in exploiting these targets would be to suggest a new application (via the Weibo SDK) or update the DJI application with a customized version built specifically to exploit their device. Once their device has been exploited, it could be used to gather additional information from the phone, track the user via the phone’s various sensors, or be used as a springboard to attack other devices on the phone’s WiFi network. This targeting system would allow an attacker to be much stealthier with their exploitation, rather than much noisier techniques, such as exploiting all devices visiting a website.
DJI officials have published an exhaustive and vigorous response saying that all the features and components detailed in the reports either served legitimate purposes or were unilaterally removed and weren’t used maliciously.
“We design our systems so DJI customers have full control over how or whether to share their photos, videos and flight logs, and we support the creation of industry standards for drone data security that will provide protection and confidence for all drone users,” the statement said. It provided the following point-by-point discussion:
- When our systems detect that a DJI app is not the official version – for example, if it has been modified to remove critical flight safety features like geofencing or altitude restrictions – we notify the user and require them to download the most recent official version of the app from our website. In future versions, users will also be able to download the official version from Google Play if it is available in their country. If users do not consent to doing so, their unauthorized (hacked) version of the app will be disabled for safety reasons.
- Unauthorized modifications to DJI control apps have raised concerns in the past, and this technique is designed to help ensure that our comprehensive airspace safety measures are applied consistently.
- Because our recreational customers often want to share their photos and videos with friends and family on social media, DJI integrates our consumer apps with the leading social media sites via their native SDKs. We must direct questions about the security of these SDKs to their respective social media services. However, please note that the SDK is only used when our users proactively turn it on.
- DJI GO 4 is not able to restart itself without input from the user, and we are investigating why these researchers claim it did so. We have not been able to replicate this behavior in our tests so far.
- The hypothetical vulnerabilities outlined in these reports are best characterized as potential bugs, which we have proactively tried to identify through our Bug Bounty Program, where security researchers responsibly disclose security issues they discover in exchange for payments of up to $30,000. Since all DJI flight control apps are designed to work in any country, we have been able to improve our software thanks to contributions from researchers all over the world, as seen on this list.
- The MobTech and Bugly components identified in these reports were previously removed from DJI flight control apps after earlier researchers identified potential security flaws in them. Again, there is no evidence they were ever exploited, and they were not used in DJI’s flight control systems for government and professional customers.
- The DJI GO4 app is primarily used to control our recreational drone products. DJI’s drone products designed for government agencies do not transmit data to DJI and are compatible only with a non-commercially available version of the DJI Pilot app. The software for these drones is only updated via an offline process, meaning this report is irrelevant to drones intended for sensitive government use. A recent security report from Booz Allen Hamilton audited these systems and found no evidence that the data or information collected by these drones is being transmitted to DJI, China, or any other unexpected party.
- This is only the latest independent validation of the security of DJI products following reviews by the U.S. National Oceanic and Atmospheric Administration, U.S. cybersecurity firm Kivu Consulting, the U.S. Department of Interior and the U.S. Department of Homeland Security.
- DJI has long called for the creation of industry standards for drone data security, a process which we hope will continue to provide appropriate protections for drone users with security concerns. If this type of feature, intended to assure safety, is a concern, it should be addressed in objective standards that can be specified by customers. DJI is committed to protecting drone user data, which is why we design our systems so drone users have control of whether they share any data with us. We also are committed to safety, trying to contribute technology solutions to keep the airspace safe.
Don’t forget the Android app mess
The research and DJI’s response underscore the disarray of Google’s current app procurement system. Ineffective vetting, the lack of permission granularity in older versions of Android, and the openness of the operating system make it easy to publish malicious apps in the Play Store. Those same things also make it easy to mistake legitimate functions for malicious ones.
People who have DJI Go 4 for Android installed may want to remove it at least until Google announces the results of its investigation (the reported automatic restart behavior means it’s not sufficient to simply curtail use of the app for the time being). Ultimately, users of the app find themselves in a position similar to that of TikTok users, since TikTok has also aroused suspicions, both because of behavior some consider sketchy and because of its ownership by China-based ByteDance.
There’s little doubt that plenty of Android apps with no ties to China commit similar or worse infractions than those attributed to DJI Go 4 and TikTok. People who want to err on the side of security should steer clear of a large majority of them.
Twitter accounts of the rich and famous—including Elon Musk, Bill Gates, Jeff Bezos, and Joe Biden—were simultaneously hijacked on Wednesday and used to push cryptocurrency scams.
As of 3:58pm California time, one wallet address used to receive victims’ digital coins had received more than $118,000, though it wasn’t clear that all of it came from people who fell for the scam. The bitcoin came from 356 transactions that all occurred over about a four-hour span on Wednesday. The wallet address appeared in tweets from at least 15 accounts—some with tens of millions of followers—that promoted fraudulent incentives to transfer money. At least one other Bitcoin wallet was used in the mass scam.
“I’m giving back to all my followers,” one now-deleted tweet from Musk’s account said. “I am doubling all payments sent to the Bitcoin address below. You send 0.1 BTC, I send 0.2 BTC back!” A tweet from the Bezos account said the same thing. “Everyone is asking me to give back, and now is the time,” a Gates tweet said. “I am doubling all payments sent to my BTC address for the next 30 minutes. You send $1,000, I send you back $2,000.”
Another variation of the scam promoted a partnered initiative that pledged to donate 5,000 BTC to the community and included a domain link to send money. The domain was quickly suspended. This variation came early in the hijacking spree and appeared to affect only cryptocurrency-related businesses, including Binance and Gemini.
Other hijacked accounts belonged to Barack Obama, Mike Bloomberg, Apple, Kanye West, Kim Kardashian West, Wiz Khalifa, Warren Buffett, YouTube personality MrBeast, Wendy’s, Uber, CashApp, and a raft of cryptocurrency entrepreneurs. Here’s a sampling of some of the scammy tweets:
At 2:58pm California time, Musk’s account was still pumping out fraudulent tweets, despite the mass account hijackings being two hours old. What’s more, a screenshot tweeted by a security researcher showed that attackers had changed the email addresses associated with some of the hijacked accounts.
That so many social media accounts were taken over in such a short time and remained hijacked for so long is extraordinary if not unprecedented. Previous hijackings that happened to one or two high-profile accounts to promote scams were the result of phishing attacks or the accounts being protected by weak passwords. And in almost all cases, the rightful account holders quickly regained control.
The attackers’ ability to retain control of the accounts was also highly unusual. The compromise of so many accounts—many belonging to people well versed in good security hygiene—raised serious concerns that the compromises were the result of a breach of Twitter’s infrastructure.
A Twitter spokeswoman said company personnel are looking into the cause and would respond soon.
A statement Binance issued said its personnel “confirmed that this Twitter breach was not caused by a vulnerability of Binance’s platform or team members.” The statement didn’t provide any other details about the cause of the hijacking. Binance went on to say: “Our security team has verified that there are zero Binance accounts/users who have sent funds to the hacker’s wallet addresses. The hacker’s wallets are not associated with Binance, and we have prevented all Binance wallet addresses from depositing assets into the hacker’s addresses.”
Emails to some of the other affected account holders weren’t immediately returned.
A spokeswoman for security firm RiskIQ said company researchers were able to track the infrastructure belonging to the party behind Wednesday’s large-scale hack. So far, they have compiled a list of more than 400 associated domains that included cryptoforhealth.com, the site included in the fraudulent tweets from Binance and other cryptocurrency businesses. Many of the domains didn’t respond, while others led to browser warnings like the one below.
As the hijackings continued, Twitter said that while it investigated, it was suspending the ability of many but not all Twitter users to tweet or respond to tweets. Accounts belonging to verified users were unable to tweet or reply to other tweets. Instead they got a message that said: “This request looks like it might be automated. To protect our users from spam and other malicious activity, we can’t complete this action right now. Please try again later.” The suspension didn’t apply to retweets or direct messages. Unverified accounts worked normally.
This is a developing story. This post will be updated as more details become available.
Microsoft is moving forward with its promise to extend enterprise security protections to non-Windows platforms with the general release of a Linux version and a preview of one for Android. The software maker is also beefing up Windows security protections to scan for malicious firmware.
The Linux and Android moves—detailed in posts published on Tuesday here, here, and here—follow a move last year to ship antivirus protections to macOS. Microsoft disclosed the firmware feature last week.
All the new protections are available to users of Microsoft Defender Advanced Threat Protection and require Windows 10 Enterprise Edition. Public pricing from Microsoft is either non-existent or difficult to find, but according to this site, costs range from $30 to $72 per machine per year for enterprise customers.
In February, when the Linux preview became available, Microsoft said it included antivirus alerts and “preventive capabilities.” Using a command line, admins can manage user machines, initiate and configure antivirus scans, monitor network events, and manage various threats.
“We are just at the beginning of our Linux journey and we are not stopping here!” Tuesday’s post announcing the Linux general availability said. “We are committed to continuous expansion of our capabilities for Linux and will be bringing you enhancements in the coming months.”
The Android preview, meanwhile, provides several protections, including:
- The blocking of phishing sites and other high-risk domains and URLs accessed through SMS/text, WhatsApp, email, browsers, and other apps. The features use the same Microsoft Defender SmartScreen services that are already available for Windows so that decisions to block suspicious sites will apply across all devices on a network.
- Proactive scanning for malicious or potentially unwanted applications and files that may be downloaded to a mobile device.
- Measures to block access to network resources when devices show signs of being compromised with malicious apps or malware.
- Integration to the same Microsoft Defender Security Center that’s already available for Windows, macOS, and Linux.
Last week, Microsoft said it had added firmware protection to the premium Microsoft Defender. The new offering scans the Unified Extensible Firmware Interface (UEFI), the successor to the traditional BIOS that computers use during the boot process to locate and enumerate installed hardware.
The firmware scanner uses a new component added to virus protection already built into Defender. Hacks that infect firmware are particularly pernicious because they survive reinstallations of the operating system and other security measures. And because firmware runs before Windows starts, it has the ability to burrow deep into an infected system. Until now, there have been only limited ways to detect such attacks on large fleets of machines.
It makes sense that the extensions to non-Windows platforms are available only to enterprises and cost extra. I was surprised, however, that Microsoft is charging a premium for the firmware protection and only offering it to enterprises. Plenty of journalists, attorneys, and activists are equally if not more threatened by so-called evil maid attacks, in which a housekeeper or other stranger has the ability to tamper with firmware during brief physical access to a computer.
Microsoft has a strong financial incentive to make Windows secure for all users. Company representatives didn’t respond to an email asking if the firmware scanner will become more widely available.
The history of hacking has largely been a back-and-forth game, with attackers devising a technique to breach a system, defenders constructing a countermeasure that prevents the technique, and hackers devising a new way to bypass system security. On Monday, Intel is announcing its plans to bake a new parry directly into its CPUs that’s designed to thwart software exploits that execute malicious code on vulnerable computers.
Control-Flow Enforcement Technology, or CET, represents a fundamental change in the way processors execute instructions from applications such as Web browsers, email clients, or PDF readers. Jointly developed by Intel and Microsoft, CET is designed to thwart a technique known as return-oriented programming, which hackers use to bypass anti-exploit measures software developers introduced about a decade ago. While Intel first published its implementation of CET in 2016, the company on Monday is saying that its Tiger Lake CPU microarchitecture will be the first to include it.
ROP, as return-oriented programming is usually called, was software exploiters’ response to protections such as Executable Space Protection and address space layout randomization, which made their way into Windows, macOS, and Linux a little less than two decades ago. These defenses were designed to significantly lessen the damage software exploits could inflict by introducing changes to system memory that prevented the execution of malicious code. Even when successfully targeting a buffer overflow or other vulnerability, the exploit resulted only in a system or application crash, rather than a fatal system compromise.
ROP allowed attackers to regain the high ground. Rather than using malicious code written by the attacker, ROP attacks repurpose functions that benign applications or OS routines have already placed into a region of memory known as the stack. The “return” in ROP refers to use of the RET instruction that’s central to reordering the code flow.
Alex Ionescu, a veteran Windows security expert and VP of engineering at security firm CrowdStrike, likes to say that if a benign program is like a building made of Lego bricks that were built in a specific sequence, ROP uses the same Lego pieces but in a different order. In so doing, ROP converts the building into a spaceship. The technique is able to bypass the anti-malware defenses because it uses memory-resident code that’s already permitted to be executed.
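The reordering idea can be sketched in a toy Python model. This is illustrative only: real ROP chains are sequences of machine-code addresses of short instruction runs ending in RET, and the gadget names and addresses below are invented. The point it demonstrates is that the attacker injects no code at all, only a list of addresses of fragments that already exist, and the order of that list determines what the chained fragments compute.

```python
# Toy model of return-oriented programming. The attacker writes no new
# code; they supply only return addresses of "gadgets" (code fragments
# already resident in the benign program), chained in a chosen order.
# Gadget names and addresses are hypothetical, for illustration.

def inc(state):   # imaginary gadget at 0x1000: increment a register
    state["reg"] += 1

def dbl(state):   # imaginary gadget at 0x2000: double a register
    state["reg"] *= 2

def emit(state):  # imaginary gadget at 0x3000: copy register to output
    state["out"] = state["reg"]

GADGETS = {0x1000: inc, 0x2000: dbl, 0x3000: emit}

def run_rop_chain(chain, state):
    """Simulate the CPU repeatedly RET-ing into attacker-chosen addresses."""
    for addr in chain:
        GADGETS[addr](state)
    return state

# The "exploit" is nothing but a list of addresses: computes (1 + 1) * 2
result = run_rop_chain([0x1000, 0x2000, 0x3000], {"reg": 1, "out": None})
print(result["out"])  # -> 4
```

Reordering the same three addresses yields a different computation from the same benign building blocks, which is the Lego analogy in miniature.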
CET introduces changes in the CPU that create a new stack called the control stack. This stack can’t be modified by attackers and stores no data; it holds only the return addresses of the Lego bricks already on the data stack. Because of this, even if an attacker has corrupted a return address on the data stack, the control stack retains the correct return address. The processor detects the mismatch and halts execution.
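The detection logic amounts to a simple comparison at return time, sketched here as a toy Python model (the class and addresses are invented for illustration; real CET does this in silicon, with a hardware-protected shadow-stack pointer and a control-protection fault):

```python
# Toy model of CET's shadow (control) stack: a CALL pushes the return
# address to both the ordinary data stack and a protected shadow stack;
# a RET pops both and halts execution on any mismatch.

class ShadowStackCPU:
    def __init__(self):
        self.data_stack = []     # attacker-writable via memory corruption
        self.shadow_stack = []   # hardware-protected, return addresses only

    def call(self, return_addr):
        self.data_stack.append(return_addr)
        self.shadow_stack.append(return_addr)

    def ret(self):
        addr = self.data_stack.pop()
        if addr != self.shadow_stack.pop():
            # models the control-protection (#CP) fault raised by CET
            raise RuntimeError("control-flow violation: ROP attempt detected")
        return addr

cpu = ShadowStackCPU()
cpu.call(0x401000)              # legitimate call (hypothetical address)
cpu.data_stack[-1] = 0x1000     # attacker overwrites the return address
try:
    cpu.ret()
except RuntimeError as err:
    print(err)                  # mismatch caught; execution halted
```

The key property is that the attacker's write primitive reaches only the data stack; the shadow copy is untouched, so the corruption is caught the moment the RET executes.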
“Because there is no effective software mitigation against ROP, CET will be very effective at detecting and stopping this class of vulnerability,” Ionescu told me. “Previously, operating systems and security solutions had to guess or infer that ROP had happened, or perform forensic analysis, or detect the second stage payloads/effect of the exploit.”
Not that CET is limited to defenses against ROP. CET provides a host of additional protections, some of which thwart exploitation techniques known as jump-oriented programming and call-oriented programming, to name just two. ROP, however, is among the most interesting aspects of CET.
Those who do not remember the past
Intel has built other security functions into its CPUs with less-than-stellar results. One is Intel’s SGX, short for Software Guard eXtensions, which is supposed to carve out impenetrable chunks of protected memory for security-sensitive functions such as the creation of cryptographic keys. Another security add-on from Intel is known as the Converged Security and Management Engine, or simply the Management Engine. It’s a subsystem inside Intel CPUs and chipsets that implements a host of sensitive functions, among them the firmware-based Trusted Platform Module used for silicon-based encryption, authentication of UEFI BIOS firmware, and Microsoft’s System Guard and BitLocker.
A steady stream of security flaws discovered in both CPU-resident features, however, has made them vulnerable to a variety of attacks over the years. The most recent SGX vulnerabilities were disclosed just last week.
It’s tempting to think that CET will be similarly easy to defeat, or worse, will expose users to hacks that wouldn’t be possible if the protection hadn’t been added. But Joseph Fitzpatrick, a hardware hacker and a researcher at SecuringHardware.com, says he’s optimistic CET will perform better. He explained:
One distinct difference that makes me less skeptical of this type of feature versus something like SGX or ME is that both of those are “adding on” security features, as opposed to hardening existing features. ME basically added a management layer outside the operating system. SGX adds operating modes that theoretically shouldn’t be able to be manipulated by a malicious or compromised operating system. CET merely adds mechanisms to prevent normal operation—returning to addresses off the stack and jumping in and out of the wrong places in code—from completing successfully. Failure of CET to do its job only allows normal operation. It doesn’t grant the attacker access to more capabilities.
Once CET-capable CPUs are available, the protection will work only when the processor is running an operating system with the necessary support. Windows 10 Version 2004 released last month provides that support. Intel still isn’t saying when Tiger Lake CPUs will be released. While the protection could give defenders an important new tool, Ionescu and fellow researcher Yarden Shafir have already devised bypasses for it. Expect them to end up in real-world attacks within the decade.
Google has shipped security patches for dozens of vulnerabilities in its Android mobile operating system, two of which could allow hackers to remotely execute malicious code with extremely high system rights.
In some cases, the malware could run with highly elevated privileges, a possibility that raises the severity of the bugs. That’s because the bugs, located in the Android System component, could enable a specially crafted transmission to execute arbitrary code within the context of a privileged process. In all, Google released patches for at least 34 security flaws, although some of the vulnerabilities were present only in devices that use chips from Qualcomm.
Anyone with a mobile device should check to see if fixes are available for their device. Methods differ by device model, but one common method involves either checking the notification screen or clicking Settings > Security > Security update. Unfortunately, patches aren’t available for many devices.
Two vulnerabilities ranked as critical in Google’s June security bulletin are indexed as CVE-2020-0117 and CVE-2020-8597. They’re among four flaws located in the Android System component (the other two are rated with a severity of high). The critical vulnerabilities reside in Android versions 8 through the most recent release of 11.
“These vulnerabilities could be exploited through multiple methods such as email, web browsing, and MMS when processing media files,” an advisory from the Department of Homeland Security-funded Multi-State Information Sharing and Analysis Center said. “Depending on the privileges associated with the application, an attacker could then install programs; view, change, or delete data; or create new accounts with full user rights.”
Vulnerabilities with a severity rating of high affected the Android media framework, the Android framework, and the Android kernel. Other vulnerabilities were contained in components shipped in devices from Qualcomm. The two Qualcomm-specific critical flaws reside in closed-source components. The severity of the other Qualcomm flaws was rated as high.
The US Senate has become the latest organization to tell its members not to use Zoom because of concerns about data security on the video conferencing platform that has boomed in popularity during the coronavirus crisis.
The Senate sergeant at arms has warned all senators against using the service, according to three people briefed on the advice.
One person who had seen the Senate warning said it told each senator’s office to find an alternative platform to use for remote working while many parts of the US remain in lockdown. But the person added it had stopped short of officially banning the company’s products.
Zoom is battling to stem a public and regulatory backlash over lax privacy practices and rising harassment on the platform that has sent its stock plummeting. The company’s shares have fallen more than 25 per cent from highs just two weeks ago, to trade at $118.91.
Zoom was forced to apologize publicly last week for making misleading statements about the strength of its encryption technology, which is intended to stop outside parties from seeing users’ data.
The company also admitted to “mistakenly” routing user data through China over the past month to cope with a dramatic rise in traffic. Zoom has two servers and a 700-strong research and development arm in China. It had stated that users’ meeting information would stay in the country in which it originated.
The revelations triggered complaints from US senators, several of whom urged the Federal Trade Commission to investigate whether the company had broken consumer protection laws. It also prompted the Taiwanese government to ban Zoom for official business.
The FBI warned last month that it had received reports that teleconferences were being hacked by people sharing pornographic messages or using abusive language — a practice that has become known as “Zoombombing.”
A spokesperson for the company said: “Zoom is working around-the-clock to ensure that universities, schools, and other businesses around the world can stay connected and operational during this pandemic, and we take user privacy, security and trust extremely seriously.
“We appreciate the outreach we have received on these issues from various elected officials and look forward to engaging with them.”
However, the US Department of Homeland Security said in a memo to government cyber security officials that the company was actively responding to concerns and understood how grave they were, according to Reuters. The Pentagon told the Financial Times it would continue to allow its personnel to use Zoom.
The Senate move follows similar decisions by companies including Google, which last week decided to stop employees from downloading the app for work.
“Recently, our security team informed employees using Zoom Desktop Client that it will no longer run on corporate computers as it does not meet our security standards for apps used by our employees,” Jose Castaneda, a Google spokesperson, said. However, he added that employees wanting to use Zoom to stay in touch with family and friends on their mobiles or via a web browser could do so.
The Google decision was first reported by BuzzFeed.
Zoom has tried to stem the tide of criticism in recent days. The company said on Wednesday it had hired Alex Stamos, the former Facebook security chief, as an outside security consultant, days after saying it would redirect its engineering resources to tackle security and privacy issues.
With the coronavirus pandemic forcing millions of people to work, learn, and socialize from home, Zoom conferences are becoming a default method to connect. And with popularity comes abuse. Enter Zoom-bombing, the phenomenon of trolls intruding into other people’s meetings for the sole purpose of harassing attendees, usually by bombarding them with racist or sexually explicit images or statements. A small sample of the events over the past few days:
- An attendee who disrupted an Alcoholics Anonymous meeting by shouting misogynistic and anti-Semitic slurs, along with the statement “Alcohol is soooo good,” according to Business Insider. Meeting organizers eventually muted and removed the intruder, but only after more than half of the participants had left.
- A Zoom conference hosting students from the Orange County Public Schools system in Florida that was disrupted after an uninvited participant exposed himself to the class.
- An online meeting of black students at the University of Texas that was cut short when it was interrupted by visitors using racial slurs.
As disruptive and offensive as it is, Zoom-bombing is a useful reminder of just how fragile privacy can be in the world of online conferencing. Whereas ordinary meetings among faculty members, boards of directors, and employees are protected by physical barriers such as walls and closed doors, Zoom conferences can be secured only through other means that many users are unfamiliar with. What follows are tips for avoiding the most common Zoom conference pitfalls.
Make sure meetings are password protected. The best way to ensure meetings can be accessed only when someone has the password is to ensure that Require a password for instant meetings is turned on in the user settings. Even when the setting is turned off, there’s the ability to require a password when scheduling a meeting. It may not be practical to password protect every meeting, but conference organizers should use this measure as often as possible.
When possible, don’t announce meetings on social media or other public outlets. Instead, send messages only to the participants, using email or group settings in Signal, WhatsApp, or other messenger programs. This advice is especially important if you’re a world leader, as UK Prime Minister Boris Johnson demonstrated when he tweeted a screenshot of a Cabinet meeting held over Zoom. (Fortunately, Johnson had password-protected the meeting and was prudent enough not to include the passphrase in his tweet. Even then, the tweet divulged the IDs of multiple participants.)
Carefully inspect the list of participants periodically, whenever possible. This can be done by the organizer or trusted participants. Any users who are unauthorized can be booted. (More about how to do that later.)
Carefully control screen sharing. The user settings allow organizers to set sharing defaults. Organizers who rarely need sharing should turn it off altogether. When participants do require screen sharing, turn the setting on and restrict sharing to the host only. Organizers should allow all participants to share screens only when the host knows and fully trusts everyone in a meeting.
And while you’re at it
The four measures above are cardinal. Here are a few other suggestions for securing Zoom meetings:
Disable the Join Before Host setting so that organizers can control the meeting from its very start.
Use the Waiting Room option to admit participants. This will prevent admittance of trolls should they have slipped past the cardinal defenses above.
Lock a meeting, when possible, once it’s underway. This will prevent unauthorized people from joining later. Locking a meeting can be accomplished by clicking Manage Participants and using the controls that appear on the right of the meeting window. Manage Participants also allows an organizer to mute all participants, eject select participants, or stop select participants from appearing by video.
Be aware of everything that’s within view of your camera. Whether working from home or an office, there may be diagrams, drawings, notes, or other things you don’t want other participants to see. Remove these from view of the camera before the meeting starts.
Beyond the above advice, Zoom users should consider using a browser to connect to meetings rather than the dedicated Zoom app. I prefer this setting because I believe the attack surface on my system—that is, the number of vulnerabilities a hacker can exploit to breach my security—grows with each app I install. In 2020, most browsers are hardened against attacks. Other types of software are less so.
Zoom makes the Web option difficult to find after clicking on the Join a Meeting link. In my testing on a Windows 10 machine, the option appeared only after I uninstalled the Zoom client. Even then, Zoom pushed an installation file after I tried to join a meeting. I was able to use the browser only after refusing the download and choosing Join from your browser. On a Mac, I was able to find the option, even with the Zoom client installed, by clicking cancel on the app installation dialog box. A Chrome extension called Zoom Redirector will also make it easy to find the link (Firefox and Edge versions of the open-source add-on are here). The permissions required by the extension suggest it’s not much of a privacy or security threat.
Users opting for the browser option will have the best results if they use Chrome. Firefox and other browsers will prevent some key features, such as audio and video, from working at all. As a courtesy, meeting organizers can choose a setting that can make it easier for participants to find the option.
Fortunately, Zoom has disabled an attention-tracking feature that allowed organizers to tell when a participant didn’t have the meeting in focus for more than 30 seconds, for instance, because the participant switched to a different browser tab. This capability was intrusive. It’s great that Zoom removed it.
In October, Ars chronicled the story of a man who was able to remotely start, stop, lock, unlock, and track a Ford Explorer he rented and returned five months earlier. Now, something almost identical has happened again to the same Enterprise Rent-A-Car customer. Four days after returning a Ford Mustang, the FordPass app installed on the phone of Masamba Sinclair continues to give him control of the car.
Like the last time, Sinclair could track the car’s location at any given time. He could start and stop the engine and lock and unlock its doors. Enterprise only removed Sinclair’s access to the car on Wednesday, more than three hours after I informed the rental agency of the error.
“It looks like someone else has rented it and it’s currently at a golf resort,” Sinclair wrote on Tuesday in an email. “This car is LOUD so starting the engine will definitely start people asking a lot of questions.” On Wednesday, before his access was removed, he added: “Looks like the previous rental is over and it’s back at the Enterprise parking lot.” Below is a video demonstrating the control he had until then.
We take security and privacy seriously
In October, both Enterprise and Ford said they had mechanisms in place to ensure that FordPass, and other remote apps provided by Ford, were unpaired before vehicles were sold or rented to new customers. The responses were problematic for several reasons. Enterprise, for instance, said rental agreements that customers sign remind them to wipe their data from cars upon their return. The problem is that the reminder doesn’t warn renters of the risks that come when a previous customer’s app remains paired to the vehicle they are renting.
What’s more, customers have little incentive to unpair the app from a car they’re returning. Customers are often scrambling to catch flights and may not want to be bothered searching through menus they’ve never seen before. And since the privacy and security risks fall solely on the new customer, nefarious people returning the car may want to maintain remote access. Unpairing the app by rental agency employees should be standard practice when cars are returned, one that’s no different from vacuuming the car’s carpet or checking its engine.
Ford, meanwhile, maintained that there are several ways drivers can detect when an app has access to their vehicle. The car maker also said it reminds dealerships to unpair cars before being resold.
None of those measures appears to adequately address the risk stemming from people continuing to have control over vehicles after the vehicles have been rented or sold to new customers. Sinclair acknowledges that he had the ability to unpair his device himself. He said he didn’t do that because he wanted to test the safety procedures put in place by the companies that use and develop the app. An article published last week by KrebsOnSecurity—recounting a man who continued to have remote access to a Ford Focus four years after his lease expired—suggests the problem isn’t isolated.
The problem is not a lack of ways to detect when a previous renter or owner retains access to a paired vehicle. Ford vehicles, for instance, display a label on a dashboard screen whenever location sharing, remote start/stop, and remote lock/unlock are active. Popups will also appear at each ignition when location services are active and no known paired Bluetooth devices are detected. But the messages can solve the problem only if they’re prominent and clear enough that users recognize the risk. Asked for comment, a Ford spokesman said that the notifications he described in October remained in effect.
Enterprise officials, meanwhile, provided the following statement:
The safety and privacy of our customers is an important priority for us as a company. We appreciate this being brought to our attention and we are actively working to follow up on the issue related to this specific rental that took place last week.
Following the outreach last fall, we updated our car cleaning guidelines related to our master reset procedure. Additionally, we instituted a frequent secondary audit process in coordination with Ford. We also started working with Ford and are very near the completion of testing software with them that will automate the prevention of FordPass pairing by rental customers.
We will use this latest experience as we continue evolving our processes to ensure they best address features and technologies that are continually being added to vehicles.
Vehicles from other manufacturers are likely to have similar features, and like the features provided by Ford, they’re probably easy for many drivers to miss. People renting or buying new cars would do well to read the manuals carefully to learn precisely how remote access works and how to ensure that previous customers’ access has been removed.
The Federal Bureau of Investigation is in many ways on the front lines of the fight against both cybercrime and cyber-espionage in the US. These days, the organization responds to everything from ransomware attacks to data thefts by foreign government-sponsored hackers. But the FBI has begun to play a role in the defense of networks before attacks have been carried out as well, forming partnerships with some companies to help prevent the loss of critical data.
Sometimes, that involves field agents proactively contacting companies when they have information of a threat—as two FBI agents did when they caught wind of researchers trying to alert casinos of vulnerabilities they said they had found in casino kiosk systems. “We have agents in every field office spending a large amount of time going out to companies in their area of responsibility establishing relationships,” Long T. Chu, acting assistant section chief for the FBI’s Cyber Engagement and Intelligence Section, told Ars. “And this is really key right now—before there’s a problem, providing information to help these companies prepare their defenses. And we try to provide as specific information as we can.”
But the FBI is not stopping its consultative role at simply alerting companies to threats. An FBI flyer shown to Ars by a source broadly outlined a new program aimed at helping companies fight data theft “caused by an insider with illicit access (or systems administrator), or by a remote cyber actor.” The program, called IDLE (Illicit Data Loss Exploitation), does this by creating “decoy data that is used to confuse illicit… collection and end use of stolen data.” It’s a form of defensive deception—or as officials would prefer to refer to it, obfuscation—that the FBI hopes will derail all types of attackers, particularly advanced threats from outside and inside the network.
In a discussion about the FBI’s overall philosophy on fighting cybercrime, Chu told Ars that the FBI is “taking more of a holistic approach” these days. Instead of reacting to specific events or criminal actors, he said, “we’re looking at cyber crime from a key services aspect”—aka, what are the things that cybercriminals target?—”and how that affects the entire cyber criminal ecosystem. What are the centers of gravity, what are the key services that play into that?”
In the past, the FBI got involved only when a crime was reported. But today, the new approach means playing more of a consultative role to prevent cybercrime through partnerships with both other government agencies and the private sector. “If you ever have the opportunity to go to the courtyard at FBI Headquarters, there’s a quote there. ‘The most effective weapon against crime is cooperation, the efforts of all law enforcement and the support and understanding of the American people.’ That can not be more true today, but it expands from beyond just law enforcement to the private sector,” Chu said. “That’s because we’re facing one of the greatest threats that our nation has ever faced, arguably, and that’s the cyber threat.”
An example of that sort of outreach was visible in a case Ars reported on in March—that of the casino kiosk vendor Atrient. FBI Las Vegas field office and FBI Cyber Division agents picked up on Twitter posts about an alleged vulnerability in Atrient’s infrastructure, and the agents connected the company and an affected customer with the researchers to resolve the issue (which, in Atrient’s case at least, went somewhat awry). But in these situations, the FBI now also shares information it gathers from other sources, including data gathered from ongoing investigations.
Sharing happens a lot faster, Chu said, when there’s a “preexisting relationship with our partners, so we know exactly who we need to call and vice versa.” And information flows faster when it goes both ways. “Just as we’re trying hard to get the private industry information as fast as possible, it’d be a lot more effective if we’re getting information from the private industry as well,” he said. Exchanging information about IP addresses, indicators of compromise, and other threat data allows the FBI to aggregate the data, “run that against our databases and all our resources, and come up with a much stronger case, so to speak, against our adversaries,” Chu noted, “along with trying to attribute or identify who did it will prevent further attacks from happening.”
Some information sharing takes the form of collaboration with industry information sharing and analysis centers (ISACs) and “Flash” and “Private Industry Notice” (PIN) alerts on cybercrime issues. And to build more direct relationships with companies’ security executives, the FBI also offers a “CISO Academy” for chief information security officers twice a year at the FBI Academy in Quantico, Virginia. Attendees are briefed on the FBI’s investigative approach, and they learn what kind of evidence needs to be preserved to help spur investigations forward.
But for some sectors of particular interest, the FBI is now trying to get a deeper level of collaboration going—especially with companies in the defense industrial base (DIB) and other critical infrastructure industries. The FBI sees these areas as crucial industry-spanning networks, and it hopes to build a defense in depth against cyber-espionage, intellectual property theft, and exposure of other data that other nations could use in ways that impact national security or the economy.
That’s precisely where IDLE comes in.
US convenience store Wawa said on Thursday that it recently discovered malware that skimmed customers’ payment card data at just about all of its 850 stores.
The infection began rolling out to the chain’s payment-processing systems on March 4, 2019, and wasn’t discovered until December 10, an advisory published on the company’s website said. It took two more days for the malware to be fully contained. Most locations’ point-of-sale systems were affected by April 22, 2019, although the advisory said some locations may not have been affected at all.
The malware collected payment card numbers, expiration dates, and cardholder names from payment cards used at “potentially all Wawa in-store payment terminals and fuel dispensers.” The advisory didn’t say how many customers or cards were affected. The malware didn’t access debit card PINs, credit card CVV2 numbers, or driver license data used to verify age-restricted purchases. Information processed by in-store ATMs was also not affected. The company has hired an outside forensics firm to investigate the infection.
Thursday’s disclosure came after Visa issued two security alerts—one in November and another this month—warning of payment-card-skimming malware at North American gasoline pumps. Card readers at self-service fuel pumps are particularly vulnerable to skimming because they continue to read payment data from cards’ magnetic stripes rather than card chips, which are much less susceptible to skimmers.
In the November advisory, Visa officials wrote:
The recent attacks are attributed to two sophisticated criminal groups with a history of large-scale, successful compromises against merchants in various industries. The groups gain access to the targeted merchant’s network, move laterally within the network using malware toolsets, and ultimately target the merchant’s POS environment to scrape payment card data. The groups also have close ties with the cybercrime underground and are able to easily monetize the accounts obtained in these attacks by selling the accounts to the top tier cybercrime underground carding shops.
The December advisory said that two of three attacks bore the hallmarks of Fin8, an organized cybercrime group that has targeted retailers since 2016. There’s no indication the Wawa infections have any connection to the ones in the Visa advisories.
People who have used payment cards at a Wawa location should pay close attention to billing statements covering the past nine months. It’s always a good idea to regularly review credit reports as well. Wawa said it will provide one year of identity-theft protection and credit monitoring from credit-reporting service Experian at no charge. Thursday’s disclosure lists other steps card holders can take.
Federal authorities say they have taken custody of a UK man who was a member of The Dark Overlord, a group that has taken credit for hacking into more than a dozen companies, stealing valuable data, and then demanding ransoms for its return. Stolen material included then-unreleased episodes of popular television shows and millions of patient records.
Nathan Wyatt, 39, was extradited from the United Kingdom to St. Louis, Missouri, after losing a year-long legal fight to block the transfer. Wyatt was arraigned in US District Court for the Eastern District of Missouri on Wednesday. He pleaded not guilty.
An indictment unsealed in the case alleged Wyatt participated in hacks on three healthcare providers, a medical records company, and an accounting firm. The indictment said Wyatt conspired with other members of The Dark Overlord to hack into the companies, steal their valuable data, and threaten to publish it unless they received payments in bitcoin.
The hackers allegedly contacted executives of the hacked companies by email and SMS text messages. When hacked companies were slow to pay the ransoms, the messages often contained threats or taunts. In July 2017, a member of the group sent a text to the daughter of an owner of one of the healthcare providers. One text read: “Hi… you look peaceful… by the way did your daddy tell you he refused to pay us when we stole his company files in 4 days we will be releasing for sale thousands of patient info. Including yours…”
Prosecutors said that Wyatt registered two phone numbers used in the crimes. One number registered a VPN account and Twitter account used in the scheme. The other number sent threatening and extortionate messages to hacked parties.
The indictment made no mention of more than a dozen other hacks that matched the same mode of operation and for which the group took credit. Among them was the release of nine episodes of Orange Is the New Black in April 2017. At the time, the episodes were unavailable on Netflix. According to DataBreaches.net, which has extensively covered The Dark Overlord hacks, the group managed to breach Larson Studios, a post-production facility, and make off with the following TV shows or movies:
- A Midsummer’s Nightmare – TV Movie
- Above Suspicion – Film
- Bill Nye Saves the World – TV Series
- Breakthrough – TV Series
- Brockmire – TV Series
- Bunkd – TV Series
- Celebrity Apprentice (The Apprentice) – TV Series
- Food Fact or Fiction – TV Series
- Handsome – Film
- Hopefuls – TV Series
- Hum – Short
- It’s Always Sunny in Philadelphia – TV Series
- Jason Alexander Project – TV Series
- Liza Koshy Special – YouTube Red
- Lucha Underground – TV Series
- Lucky Roll – TV Series
- Making History – TV Series
- Man Seeking Woman – TV Series
- Max and Shred – TV Series
- Mega Park – TV Series
- NCIS Los Angeles – TV Series
- New Girl – TV Series
DataBreaches.net reported that The Dark Overlord was behind hacks on more than a dozen other companies, including ABC Networks, an insurance firm, a plastic surgery clinic, the maker of Gorilla Glue, a real estate company, and a human resources firm, to name just a few.
The Daily Beast reported in late 2017 that members of The Dark Overlord sent texts to parents in Iowa threatening to kill their kids. The Courier Journal reported death threats made to middle and high schools in the name of The Dark Overlord.
Last year, according to Bleeping Computer, authorities in Serbia arrested another alleged member of The Dark Overlord. The suspect was identified only by initials, making it hard to track the outcome of the arrest.
Wyatt was reportedly arrested in 2016 in connection to the theft of intimate and nude photos from the iCloud account of Pippa Middleton, sister of Kate Middleton, the Duchess of Cambridge. He was eventually released with no charges filed. In 2017, according to DataBreaches.net, Wyatt pleaded guilty to 20 counts of fraud by false representation, two counts of blackmail, and one count of possession of an identity document with intent to deceive (a false passport).
In January, while Wyatt was in custody in the UK, prosecutors unearthed evidence that he was involved in extortions carried out by The Dark Overlord. He had been fighting extradition ever since.
Many IT workers worry their positions will become obsolete as changes in hardware, software, and computing tasks outstrip their skills. A former contractor for Siemens concocted a remedy for that—plant logic bombs in projects he designed that caused them to periodically malfunction. Then wait for a call to come fix things.
On Monday, David A. Tinley, a 62-year-old from Harrison City, Pennsylvania, was sentenced to six months in prison and a $7,500 fine for the scheme. The sentence came five months after he pleaded guilty to a charge of intentional damage to a protected computer. Tinley was a contract employee for Siemens Corporation at its Monroeville, Pennsylvania, location.
According to a charging document filed in US District Court for the Western District of Pennsylvania, the logic bombs Tinley surreptitiously planted into his projects caused them to malfunction after a certain preset amount of time. Because Siemens managers were unaware of the logic bombs and didn’t know the cause of the malfunctions, they would call Tinley and ask him to fix the misbehaving projects. The scheme ran from 2014 to 2016.
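The mechanics of such a trigger are simple. Below is a minimal, purely illustrative sketch of the kind of time-based trigger the charging document describes; the function name and date are invented, and the filing does not disclose Tinley's actual code.

```python
from datetime import date

# Hypothetical sketch of a time-based "logic bomb" trigger: the code
# behaves normally until a preset date passes, then manufactures a
# failure that looks like an ordinary malfunction, prompting a support
# call. Illustrative only; not from any court filing.

TRIGGER_DATE = date(2016, 5, 1)  # arbitrary preset expiry

def run_job(today):
    if today >= TRIGGER_DATE:
        # After the preset date, fail in a way that resembles a bug.
        raise RuntimeError("unexpected internal error")
    return "ok"
```

Because nothing distinguishes the manufactured failure from a genuine bug, only a review of the source (or of who keeps getting called to fix it) reveals the scheme.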
Tinley will be under supervised release for two years following his prison term. He will also pay restitution. The parties in the case stipulated a total loss amount of $42,262.50. Tinley faced as much as 10 years in prison and a $250,000 fine.
Canada’s biggest provider of specialty laboratory testing services said it paid hackers an undisclosed amount for the return of personal data they stole belonging to as many as 15 million customers.
Toronto, Ontario-based LifeLabs notified Canadian authorities of the attack on November 1. The company said a cyberattack struck computer systems that stored data for about 15 million customers. The stolen information included names, addresses, email addresses, customer logins and passwords, health card numbers, and lab test results.
The incident response, company President and CEO Charles Brown said in a statement, included “retrieving the data by making a payment.” The executive added: “We did this in collaboration with experts familiar with cyber-attacks and negotiations with cyber criminals.” The statement didn’t say how much LifeLabs paid for the return of the data. Representatives didn’t immediately respond to an email seeking the amount.
According to an advisory issued by the Office of the Information and Privacy Commissioner of Ontario and the Office of the Information and Privacy Commissioner for British Columbia: “LifeLabs advised our offices that cyber criminals penetrated the company’s systems, extracting data and demanding a ransom. LifeLabs retained outside cybersecurity consultants to investigate and assist with restoring the security of the data.”
LifeLabs said that its investigation so far indicates that the accessed test results were from 2016 or earlier and belonged to about 85,000 customers. Accessed health card information was also from 2016 or earlier. So far, there’s no indication any of the stolen data has been distributed to parties other than LifeLabs.
The LifeLabs statement said that company officials have fixed the system that led to the breach. The company is providing a year of free identity theft monitoring and identity theft insurance. Affected customers can sign up for the help here.
It’s true that inorganic users don’t yell at customer-service reps or trash-talk companies on Twitter. But connected devices can also benefit from some less-obvious upgrades that 5G should deliver—and we, their organic overlords, could profit in the long run.
You may have heard about 5G’s Internet-of-Things potential yourself in such gauzy statements as “5G will make every industry and every part of our lives better” (spoken by Meredith Attwell Baker, president of the wireless trade group CTIA, at the MWC Americas trade show in 2017) and “It’s a wholly new technology ushering in a new era of transformation” (from Ronan Dunne, executive vice president and CEO of Verizon’s consumer group, at 2019’s Web Summit conference).
But as with 5G in the smartphone and home-broadband contexts, the ripple effects alluded to in those statements are potentially huge—and they will take years to land on our shores. Yes, you’ve heard this before: the news is big, but it’s still early days.
Massively multiplayer mobile bandwidth
The long-term map for 5G IoT promises to support a density of devices far beyond what current-generation LTE can deliver—up to a million “things” per square kilometer, versus almost 61,000 under today’s 4G. That density will open up possibilities that today would require a horrendous amount of wired connectivity.
For example, precision-controlled factories could take advantage of the space in the airwaves to implement extremely granular monitoring, and 5G IoT promises to do that job for less. “You can put tons of environmental sensors everywhere,” said Recon Analytics founder Roger Entner. “You can put a tag on every piece of equipment.”
“Either I upgrade this to fiber to connect the machines, or I use millimeter-wave 5G in the factory,” echoes Rüdiger Schicht, a senior partner with the Boston Consulting Group. “Everything we hear on reliability and manageability of that infrastructure indicates that 5G is superior.”
Millimeter-wave 5G runs on bands of frequencies starting at 24GHz, far above the frequencies employed for LTE. The enormous amounts of free spectrum up there allow for gigabit speeds—at the cost of range, which would be limited to a thousand feet or so. That still exceeds Wi-Fi’s reach, though.
Low-band 5G on the same frequencies today used for 4G doesn’t allow for a massive speed boost but should at least cover far more ground, while mid-band 5G should offer a good mix of speed and coverage—at least, once carriers have more free spectrum on which to provide that coverage. (If you’d like a quick refresher on the various flavors of 5G, our story from a couple of weeks ago has you covered!)
In the United States, fixing those spectrum issues hinges on the Federal Communications Commission’s recently announced plan to auction off 280MHz of so-called C-band spectrum, between 3.7GHz and 3.98GHz, on a sped-up timetable that could see those bands in service in two to three years.
And that means there’s some time to figure things out. Companies aren’t lighting up connected devices by the millions just yet.
The current 5G standard—formally speaking, 3GPP Release 15—does not include support for the enormous device density we’re talking about. That will have to wait until Release 16, now in its final stages of approval, although Entner warns that we won’t see compatible hardware for at least another year or two.
The Federal Communications Commission is giving $87.1 million in rural-broadband funding to satellite operator Viasat to help the company lower prices and raise data caps.
The FCC’s Connect America Fund generally pays ISPs to expand their networks into rural areas that lack decent home Internet access. Viasat’s satellite service already provides coverage of 98 percent of the US population in 50 states, so it doesn’t need government funding to expand its network the same way that wireline operators do. But Viasat will use the money to offer Internet service “at lower cost to consumers, while also permitting higher usage allowances, than it typically provides in areas where it is not receiving Connect America Fund support,” the FCC said in its announcement yesterday.
Viasat’s $87.1 million is to be used over the next 10 years “to offer service to more than 121,700 remote and rural homes and businesses in 17 states.” Viasat must provide speeds of at least 25Mbps for downloads and 3Mbps for uploads.
While the funding for Viasat could certainly improve access for some people, the project helps illustrate how dire the broadband shortage is in rural parts of many states. Viasat’s service is generally a last-ditch option for people in areas where there’s no fiber or cable and where DSL isn’t good enough to provide a reasonably fast and stable connection. Viasat customers have to pay high prices for slow speeds and onerous data limits.
Future services relying on low-Earth-orbit satellites from companies such as SpaceX and OneWeb could dramatically boost speeds and data caps while lowering latency. But Viasat’s service still relies on satellites in geostationary orbits about 22,000 miles above the planet and suffers from latency of nearly 600ms, much worse than the 10ms to 20ms of fiber services (as measured in customer homes by the FCC in September 2017). Viasat’s service falls into the FCC Connect America Fund’s “high latency” category, defined as latency of 750ms or less.
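The ~600ms figure is mostly physics. A back-of-the-envelope sketch (assuming the signal traverses the roughly 35,786km orbital altitude four times per round trip—user to satellite to ground station, then the reply along the same path) shows that propagation delay alone approaches 480ms before any processing overhead:

```python
# Rough propagation-delay estimate for a geostationary satellite link.
# Assumes four traversals of the orbital altitude per round trip:
# user -> satellite -> ground station, and the reply along the same path.

SPEED_OF_LIGHT_KM_S = 299_792
GEO_ALTITUDE_KM = 35_786  # geostationary orbit, roughly 22,000 miles up

round_trip_km = 4 * GEO_ALTITUDE_KM
propagation_ms = round_trip_km / SPEED_OF_LIGHT_KM_S * 1_000
print(f"propagation alone: {propagation_ms:.0f} ms")  # ~477 ms
```

Processing, queuing, and terrestrial backhaul add the remaining 100ms or more, which is why measured latencies land near 600ms. Low-Earth-orbit constellations fly at altitudes hundreds of times lower, which is why they can promise latencies closer to terrestrial broadband.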
The Connect America Fund is paid for by Americans through fees on their phone bills.
Prices and data caps not revealed
A Viasat spokesperson would not tell us what prices and data caps will be applied to the company’s FCC-subsidized plans. Viasat said it will provide the required 25Mbps service “along with an evolving usage allowance, and at FCC-defined prices, to certain areas, where we will be subject to a new range of federal and state regulations.”
The materials released by the FCC yesterday don’t provide price and data-cap information, either. We contacted the FCC and will update this article if we get any answers.
Viasat’s current prices and data allotments are pretty bad, so hopefully there will be a significant improvement. Plans and pricing vary by ZIP code; offers listed on BroadbandNow include $50 a month for download speeds of up to 12Mbps and only 12GB of “priority data” each month. The price rises after a two-year contract expires.
“Once priority data is used up, speeds will be reduced to up to 1 to 5Mbps during the day and possibly below 1Mbps after 5pm,” BroadbandNow’s summary says. Customers can use data without affecting the limit between 3am and 6am.
Other plans include $75 a month for speeds of 12Mbps and 25GB of priority data; $100 a month for 12Mbps and 50GB; and $150 a month for 25Mbps and “unlimited” data. Even on the so-called unlimited plan, speeds “may be prioritized behind other customers during network congestion” after you use 100GB in a month. Because of these onerous limits, Viasat lowers streaming video quality to reduce data usage. Viasat says it provides speeds of up to 100Mbps but only “in select areas.”
Viasat also charges installation fees, a $10-per-month equipment lease fee, and taxes and surcharges. Viasat offers a two-year price lock, but this does not apply to the taxes and surcharges. In order to avoid signing a two-year contract, you have to pay a $300 “No Long-Term Contract” fee.