
Apple lets some Big Sur network traffic bypass firewalls


Firewalls aren’t just for corporate networks. Large numbers of security- or privacy-conscious people also use them to filter or redirect traffic flowing in and out of their computers. Apple recently made a major change to macOS that frustrates these efforts.

Beginning with macOS Catalina released last year, Apple added a list of 50 Apple-specific apps and processes that were to be exempted from firewalls like Little Snitch and Lulu. The undocumented exemption, which didn’t take effect until firewalls were rewritten to implement changes in Big Sur, first came to light in October. Patrick Wardle, a security researcher at Mac and iOS enterprise developer Jamf, further documented the new behavior over the weekend.

“100% blind”

To demonstrate the risks that come with this move, Wardle—a former hacker for the NSA—showed how malware developers could exploit the change to make an end-run around a tried-and-true security measure. He set Lulu and Little Snitch to block all outgoing traffic on a Mac running Big Sur and then ran a small Python script whose exploit code interacted with one of the apps Apple exempted. The script had no trouble reaching a command and control server he set up to simulate one commonly used by malware to exfiltrate sensitive data.

“It kindly asked (coerced?) one of the trusted Apple items to generate network traffic to an attacker-controlled server and could (ab)use this to exfiltrate files,” Wardle, referring to the script, told me. “Basically, ‘Hey, Mr. Apple Item, can you please send this file to Patrick’s remote server?’ And it would kindly agree. And since the traffic was coming from the trusted item, it would never be routed through the firewall… meaning the firewall is 100% blind.”
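
For a sense of what sits on the receiving end of such a test, here is a minimal Python stand-in for the kind of command and control receiver Wardle describes—purely illustrative, not his actual setup:

# Minimal test "C2" receiver; a hypothetical stand-in, not Wardle's setup.
from http.server import BaseHTTPRequestHandler, HTTPServer

class ExfilReceiver(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read whatever payload a coerced "trusted" process sends our way.
        size = int(self.headers.get("Content-Length", 0))
        data = self.rfile.read(size)
        print(f"received {len(data)} bytes from {self.client_address[0]}")
        self.send_response(200)
        self.end_headers()

HTTPServer(("0.0.0.0", 8080), ExfilReceiver).serve_forever()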

Wardle tweeted a portion of a bug report he submitted to Apple during the Big Sur beta phase. It specifically warns that “essential security tools such as firewalls are ineffective” under the change.

Apple has yet to explain the reason behind the change. Firewall misconfigurations are often the source of software not working properly. One possibility is that Apple implemented the move to reduce the number of support requests it receives and make the Mac experience better for people not schooled in setting up effective firewall rules. It’s not unusual for firewalls to exempt their own traffic. Apple may be applying the same rationale.

But the inability to override the settings violates a core tenet that people ought to be able to selectively restrict traffic flowing from their own computers. In the event that a Mac does become infected, the change also gives hackers a way to bypass what for many is an effective mitigation against such attacks.

“The issue I see is that it opens the door for doing exactly what Patrick demoed… malware authors can use this to sneak data around a firewall,” Thomas Reed, director of Mac and mobile offerings at security firm Malwarebytes, said. “Plus, there’s always the potential that someone may have a legitimate need to block some Apple traffic for some reason, but this takes away that ability without using some kind of hardware network filter outside the Mac.”

People who want to know what apps and processes are exempt can open the macOS terminal and enter:

sudo defaults read /System/Library/Frameworks/NetworkExtension.framework/Resources/Info.plist ContentFilterExclusionList
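
The same list can also be read programmatically. Here is a short Python sketch using the standard library’s plistlib; the path and key name come from the command above, the rest is illustration:

import plistlib

# Read Big Sur's undocumented firewall exclusion list straight from
# the NetworkExtension framework bundle.
PLIST = ("/System/Library/Frameworks/NetworkExtension.framework"
         "/Resources/Info.plist")
with open(PLIST, "rb") as f:
    info = plistlib.load(f)
for entry in info.get("ContentFilterExclusionList", []):
    print(entry)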

NKEs

The change came as Apple deprecated macOS kernel extensions, which software developers used to make apps interact directly with the OS. The deprecation included NKEs—short for network kernel extensions—that third-party firewall products used to monitor incoming and outgoing traffic.

In place of NKEs, Apple introduced a new user-mode framework called the Network Extension Framework. To run on Big Sur, all third-party firewalls that used NKEs had to be rewritten to use the new framework.

Apple representatives didn’t respond to emailed questions about this change. This post will be updated if they respond later. In the meantime, people who want to override the new exemption will have to find alternatives. As Reed noted above, one option is to rely on a network filter that runs outside their Mac. Another is PF, the Packet Filter firewall built into macOS.

Undocumented backdoor that covertly takes snapshots found in kids’ smartwatch

A popular smartwatch designed exclusively for children contains an undocumented backdoor that makes it possible for someone to remotely capture camera snapshots, wiretap voice calls, and track locations in real time, a researcher said.

The X4 smartwatch is marketed by Xplora, a Norway-based seller of children’s watches. The device, which sells for about $200, runs on Android and offers a range of capabilities, including the ability to make and receive voice calls to parent-approved numbers and to send an SOS broadcast that alerts emergency contacts to the location of the watch. A separate app that runs on the smartphones of parents allows them to control how the watches are used and receive warnings when a child has strayed beyond a preset geographic boundary.

But that’s not all

It turns out that the X4 contains something else: a backdoor that went undiscovered until some impressive digital sleuthing. The backdoor is activated by sending an encrypted text message. Harrison Sand, a researcher at Norwegian security company Mnemonic, said that commands exist for surreptitiously reporting the watch’s real-time location, taking a snapshot and sending it to an Xplora server, and making a phone call that transmits all sounds within earshot.

Sand also found that 19 of the apps that come pre-installed on the watch are developed by Qihoo 360, a security company and app maker located in China. A Qihoo 360 subsidiary, 360 Kids Guard, also jointly designed the X4 with Xplora and manufactures the watch hardware.

“I wouldn’t want that kind of functionality in a device produced by a company like that,” Sand said, referring to the backdoor and Qihoo 360.

In June, Qihoo 360 was placed on a US Commerce Department sanctions list. The rationale: ties to the Chinese government made the company likely to engage in “activities contrary to the national security or foreign policy interests of the United States.” Qihoo 360 declined to comment for this post.

Patch on the way

The existence of an undocumented backdoor in a watch from a country with a known record of espionage hacks is concerning. At the same time, this particular backdoor has limited applicability. To make use of the functions, someone would need to know both the phone number assigned to the watch (it has a slot for a SIM card from a mobile phone carrier) and the unique encryption key hardwired into each device.

In a statement, Xplora said obtaining both the key and phone number for a given watch would be difficult. The company also said that even if the backdoor was activated, obtaining any collected data would be hard, too. The statement read:

We want to thank you for bringing a potential risk to our attention. Mnemonic is not providing any information beyond that they sent you the report. We take any potential security flaw extremely seriously.

It is important to note that the scenario the researchers created requires physical access to the X4 watch and specialized tools to secure the watch’s encryption key. It also requires the watch’s private phone number. The phone number for every Xplora watch is determined when it is activated by the parents with a carrier, so no one involved in the manufacturing process would have access to it to duplicate the scenario the researchers created.

As the researchers made clear, even if someone with physical access to the watch and the skill to send an encrypted SMS activates this potential flaw, the snapshot photo is only uploaded to Xplora’s server in Germany and is not accessible to third parties. The server is located in a highly-secure Amazon Web Services environment.

Only two Xplora employees have access to the secure database where customer information is stored and all access to that database is tracked and logged.

This issue the testers identified was based on a remote snapshot feature included in initial internal prototype watches for a potential feature that could be activated by parents after a child pushes an SOS emergency button. We removed the functionality for all commercial models due to privacy concerns. The researcher found some of the code was not completely eliminated from the firmware.

Since being alerted, we have developed a patch for the Xplora 4, which is not available for sale in the US, to address the issue and will push it out prior to 8:00 a.m. CET on October 9. We conducted an extensive audit since we were notified and have found no evidence of the security flaw being used outside of the Mnemonic testing.

An Xplora spokesman said the company has sold about 100,000 X4 smartwatches to date. The company is in the process of rolling out the X5. It’s not yet clear if it contains similar backdoor functionality.

Heroic measures

Sand discovered the backdoor through some impressive reverse engineering. He started with a modified USB cable that he soldered onto pins exposed on the back of the watch. Using an interface for updating the device firmware, he was able to download the existing firmware off the watch. This allowed him to inspect the insides of the watch, including the apps and various other code packages that were installed.

A modified USB cable attached to the back of an X4 watch. (Image: Mnemonic)

One package that stood out was titled “Persistent Connection Service.” It starts as soon as the device is turned on and iterates through all the installed applications. As it queries each application, it builds a list of intents—Android’s inter-app messaging mechanism—that it can call to communicate with each app.

Sand’s suspicions were further aroused when he found intents with the following names:

  • WIRETAP_INCOMING
  • WIRETAP_BY_CALL_BACK
  • COMMAND_LOG_UPLOAD
  • REMOTE_SNAPSHOT
  • SEND_SMS_LOCATION

After more poking around, Sand figured out the intents were activated using SMS text messages that were encrypted with the hardwired key. System logs showed him that the key was stored on a flash chip, so he dumped the contents and obtained it—“#hml;Fy/sQ9z5MDI=$” (quotation marks not included). Reverse engineering also allowed the researcher to figure out the syntax required to activate the remote snapshot function.
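
Spotting a hardwired key in a raw flash dump is often as simple as scanning it for runs of printable characters. The Python sketch below shows that generic technique (essentially the classic strings tool); it is illustrative and not Mnemonic’s actual tooling:

import re
import sys

# Print every run of 8+ printable ASCII characters in a raw dump,
# with its offset; hardcoded secrets tend to stand out this way.
with open(sys.argv[1], "rb") as f:
    blob = f.read()
for match in re.finditer(rb"[ -~]{8,}", blob):
    print(hex(match.start()), match.group().decode("ascii"))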

“Sending the SMS triggered a picture to be taken on the watch, and it was immediately uploaded to Xplora’s server,” Sand wrote. “There was zero indication on the watch that a photo was taken. The screen remained off the entire time.”

Sand said he didn’t activate the functions for wiretapping or reporting locations, but with additional time, he said, he’s confident he could have.

As both Sand and Xplora note, exploiting this backdoor would be difficult, since it requires knowledge of both the unique factory-set encryption key and the phone number assigned to the watch. Owners of a vulnerable device therefore have little reason to panic.

Still, it’s not beyond the realm of possibility that the key could be obtained by someone with ties to the manufacturer. And while phone numbers aren’t usually published, they’re not exactly private, either.

The backdoor underscores the kinds of risks posed by the increasing number of everyday devices that run on firmware that can’t be independently inspected without the kinds of heroic measures employed by Sand. While the chances of this particular backdoor being used are low, people who own an X4 would do well to ensure their device installs the patch as soon as practical.

Apple’s T2 security chip has an unfixable flaw

The 2014 Mac mini is pictured here alongside the 2012 Mac mini. They looked the same, but the insides were different in some key—and disappointing—ways.

A recently released tool is letting anyone exploit an unusual Mac vulnerability to bypass Apple’s trusted T2 security chip and gain deep system access. The flaw is one researchers have also been using for more than a year to jailbreak older models of iPhones. But the fact that the T2 chip is vulnerable in the same way creates a host of new potential threats. Worst of all, while Apple may be able to slow down potential hackers, the flaw is ultimately unfixable in every Mac that has a T2 inside.

In general, the jailbreak community hasn’t paid as much attention to macOS and OS X as it has iOS, because they don’t have the same restrictions and walled gardens that are built into Apple’s mobile ecosystem. But the T2 chip, launched in 2017, created some limitations and mysteries. Apple added the chip as a trusted mechanism for securing high-value features like encrypted data storage, Touch ID, and Activation Lock, which works with Apple’s “Find My” services. But the T2 also contains a vulnerability, known as Checkm8, that jailbreakers have already been exploiting in Apple’s A5 through A11 (2011 to 2017) mobile chipsets. Now the Checkra1n team, which developed the jailbreak tool for iOS, has released support for T2 bypass.

On Macs, the jailbreak allows researchers to probe the T2 chip and explore its security features. It can even be used to run Linux on the T2 or play Doom on a MacBook Pro’s Touch Bar. The jailbreak could also be weaponized by malicious hackers, though, to disable macOS security features like System Integrity Protection and Secure Boot and install malware. Combined with another T2 vulnerability that was publicly disclosed in July by the Chinese security research and jailbreaking group Pangu Team, the jailbreak could also potentially be used to obtain FileVault encryption keys and to decrypt user data. The vulnerability is unpatchable, because the flaw is in low-level, unchangeable code for hardware.

“The T2 is meant to be this little secure black box in Macs—a computer inside your computer, handling things like Lost Mode enforcement, integrity checking, and other privileged duties,” says Will Strafach, a longtime iOS researcher and creator of the Guardian Firewall app for iOS. “So the significance is that this chip was supposed to be harder to compromise—but now it’s been done.”

Apple did not respond to WIRED’s requests for comment.

There are a few important limitations of the jailbreak, though, that keep this from being a full-blown security crisis. The first is that an attacker would need physical access to target devices in order to exploit them. The tool can only run off of another device over USB. This means hackers can’t remotely mass-infect every Mac that has a T2 chip. An attacker could jailbreak a target device and then disappear, but the compromise isn’t “persistent”; it ends when the T2 chip is rebooted. The Checkra1n researchers do caution, though, that the T2 chip itself doesn’t reboot every time the device does. To be certain that a Mac hasn’t been compromised by the jailbreak, the T2 chip must be fully restored to Apple’s defaults. Finally, the jailbreak doesn’t give an attacker instant access to a target’s encrypted data. It could allow hackers to install keyloggers or other malware that could later grab the decryption keys, or it could make it easier to brute-force them, but Checkra1n isn’t a silver bullet.

“There are plenty of other vulnerabilities, including remote ones that undoubtedly have more impact on security,” a Checkra1n team member tweeted on Tuesday.

In a discussion with WIRED, the Checkra1n researchers added that they see the jailbreak as a necessary tool for transparency about T2. “It’s a unique chip, and it has differences from iPhones, so having open access is useful to understand it at a deeper level,” a group member said. “It was a complete black box before, and we are now able to look into it and figure out how it works for security research.”

The exploit also comes as little surprise; it’s been apparent since the original Checkm8 discovery last year that the T2 chip was also vulnerable in the same way. And researchers point out that while the T2 chip debuted in 2017 in the top-tier iMac Pro, it only recently rolled out across the entire Mac line. Older Macs with a T1 chip are unaffected. Still, the finding is significant because it undermines a crucial security feature of newer Macs.

Jailbreaking has long been a gray area because of this tension. It gives users freedom to install and modify whatever they want on their devices, but it is achieved by exploiting vulnerabilities in Apple’s code. Hobbyists and researchers use jailbreaks in constructive ways, including to conduct more security testing and potentially help Apple fix more bugs, but there’s always the chance that attackers could weaponize jailbreaks for harm.

“I had already assumed that since T2 was vulnerable to Checkm8, it was toast,” says Patrick Wardle, an Apple security researcher at the enterprise management firm Jamf and a former NSA researcher. “There really isn’t much that Apple can do to fix it. It’s not the end of the world, but this chip, which was supposed to provide all this extra security, is now pretty much moot.”

Wardle points out that for companies that manage their devices using Apple’s Activation Lock and Find My features, the jailbreak could be particularly problematic both in terms of possible device theft and other insider threats. And he notes that the jailbreak tool could be a valuable jumping off point for attackers looking to take a shortcut to developing potentially powerful attacks. “You likely could weaponize this and create a lovely in-memory implant that, by design, disappears on reboot,” he says. This means that the malware would run without leaving a trace on the hard drive and would be difficult for victims to track down.

The situation raises much deeper issues, though, with the basic approach of using a special, trusted chip to secure other processes. Beyond Apple’s T2, numerous other tech vendors have tried this approach and had their secure enclaves defeated, including Intel, Cisco, and Samsung.

“Building in hardware ‘security’ mechanisms is just always a double-edged sword,” says Ang Cui, founder of the embedded device security firm Red Balloon. “If an attacker is able to own the secure hardware mechanism, the defender usually loses more than they would have if they had built no hardware. It’s a smart design in theory, but in the real world it usually backfires.”

In this case, you’d likely have to be a very high-value target to register any real alarm. But hardware-based security measures do create a single point of failure that the most important data and systems rely on. Even if the Checkra1n jailbreak doesn’t provide unlimited access for attackers, it gives them more than anyone would want.

This story originally appeared on wired.com.

Chinese-made drone app in Google Play spooks security researchers

A DJI Phantom 4 quadcopter drone.

The Android version of DJI Go 4—an app that lets users control drones—has until recently been covertly collecting sensitive user data and can download and execute code of the developers’ choice, researchers said in two reports that question the security and trustworthiness of a program with more than 1 million Google Play downloads.

The app is used to control and collect near real-time video and flight data from drones made by China-based DJI, the world’s biggest maker of commercial drones. The Play Store shows that it has more than 1 million downloads, but because of the way Google discloses numbers, the true number could be as high as 5 million. The app has a rating of three-and-a-half stars out of a possible total of five from more than 52,000 users.

Wide array of sensitive user data

Two weeks ago, security firm Synacktiv reverse-engineered the app. On Thursday, fellow security firm Grimm published the results of its own independent analysis. At a minimum, both found that the app skirted Google’s terms and that, until recently, it covertly collected a wide array of sensitive user data and sent it to servers located in mainland China. A worst-case scenario is that developers are abusing hard-to-identify features to spy on users.

According to the reports, the suspicious behaviors include:

  • The ability to download and install any application of the developers’ choice through either a self-update feature or a dedicated installer in a software development kit provided by China-based social media platform Weibo. Both features could download code outside of Play, in violation of Google’s terms.
  • A recently removed component that collected a wealth of phone data including IMEI, IMSI, carrier name, SIM serial number, SD card information, OS language, kernel version, screen size and brightness, wireless network name, address, and MAC, and Bluetooth addresses. These details and more were sent to MobTech, maker of a software developer kit used until the most recent release of the app.
  • Automatic restarts whenever a user swiped the app to close it. The restarts cause the app to run in the background and continue to make network requests.
  • Advanced obfuscation techniques that make third-party analysis of the app time-consuming.

This month’s reports come three years after the US Army banned the use of DJI drones for reasons that remain classified. In January, the Interior Department grounded drones from DJI and other Chinese manufacturers out of concerns data could be sent back to the mainland.

DJI officials said the researchers found “hypothetical vulnerabilities” and that neither report provided any evidence that they were ever exploited.

“The app update function described in these reports serves the very important safety goal of mitigating the use of hacked apps that seek to override our geofencing or altitude limitation features,” they wrote in a statement. A geofence is a virtual barrier that the Federal Aviation Administration or other authorities bar drones from crossing. Drones use GPS, Bluetooth, and other technologies to enforce the restrictions.

A Google spokesman said the company is looking into the reports. The researchers said the iOS version of the app contained no obfuscation or update mechanisms.

Obfuscated, acquisitive, and always on

In several respects, the researchers said, DJI Go 4 for Android mimicked the behavior of botnets and malware. Both the self-update and auto-install components, for instance, call a developer-designated server and await commands to download and install code or apps. The obfuscation techniques closely resembled those used by malware to prevent researchers from discovering its true purpose. Other similarities were an always-on status and the collection of sensitive data that wasn’t relevant or necessary for the stated purpose of flying drones.
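
The general shape of that call-home pattern is simple. The Python sketch below is a hypothetical rendering of it—the endpoint, response fields, and installer hook are all invented for illustration and are not DJI’s actual protocol:

import json
import time
import urllib.request

UPDATE_SERVER = "https://updates.example.com/check"  # hypothetical endpoint

def install(package: bytes) -> None:
    # Stand-in for a platform installer hook (hypothetical).
    print(f"installing {len(package)} bytes fetched outside the Play Store")

while True:
    # Ask the developer-designated server whether there is anything to fetch.
    with urllib.request.urlopen(UPDATE_SERVER) as resp:
        command = json.load(resp)
    if command.get("update_available"):
        # Download and hand off code that app-store review never saw.
        with urllib.request.urlopen(command["url"]) as pkg:
            install(pkg.read())
    time.sleep(3600)  # check back every hour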

Making the behavior more concerning is the breadth of permissions required to use the app, which include access to contacts, microphone, camera, location, storage, and the ability to change network connectivity. Such sprawling permissions meant that the servers of DJI or Weibo, both located in a country known for its government-sponsored espionage hacking, had almost full control over users’ devices, the researchers said.

Both research teams said they saw no evidence the app installer was ever actually used, but they did see the automatic update mechanism trigger and download a new version from the DJI server and install it. The download URLs for both features are dynamically generated, meaning they are provided by a remote server and can be changed at any time.

The researchers from both firms conducted experiments that showed how both mechanisms could be used to install arbitrary apps. While the programs were delivered automatically, the researchers still had to click their approval before the programs could be installed.

Both research reports stopped short of saying the app actually targeted individuals, and both noted that the collection of IMSIs and other data had ended with the release of current version 4.3.36. The teams, however, didn’t rule out the possibility of nefarious uses. Grimm researchers wrote:

In the best case scenario, these features are only used to install legitimate versions of applications that may be of interest to the user, such as suggesting additional DJI or Weibo applications. In this case, the much more common technique is to display the additional application in the Google Play Store app by linking to it from within your application. Then, if the user chooses to, they can install the application directly from the Google Play Store. Similarly, the self-updating components may only be used to provide users with the most up-to-date version of the application. However, this can be more easily accomplished through the Google Play Store.

In the worst case, these features can be used to target specific users with malicious updates or applications that could be used to exploit the user’s phone. Given the amount of user’s information retrieved from their device, DJI or Weibo would easily be able to identify specific targets of interest. The next step in exploiting these targets would be to suggest a new application (via the Weibo SDK) or update the DJI application with a customized version built specifically to exploit their device. Once their device has been exploited, it could be used to gather additional information from the phone, track the user via the phone’s various sensors, or be used as a springboard to attack other devices on the phone’s WiFi network. This targeting system would allow an attacker to be much stealthier with their exploitation, rather than much noisier techniques, such as exploiting all devices visiting a website.

DJI responds

DJI officials have published an exhaustive and vigorous response saying that all the features and components detailed in the reports either served legitimate purposes or were unilaterally removed and weren’t used maliciously.

“We design our systems so DJI customers have full control over how or whether to share their photos, videos and flight logs, and we support the creation of industry standards for drone data security that will provide protection and confidence for all drone users,” the statement said. It provided the following point-by-point discussion:

  • When our systems detect that a DJI app is not the official version – for example, if it has been modified to remove critical flight safety features like geofencing or altitude restrictions – we notify the user and require them to download the most recent official version of the app from our website. In future versions, users will also be able to download the official version from Google Play if it is available in their country. If users do not consent to doing so, their unauthorized (hacked) version of the app will be disabled for safety reasons.
  • Unauthorized modifications to DJI control apps have raised concerns in the past, and this technique is designed to help ensure that our comprehensive airspace safety measures are applied consistently.
  • Because our recreational customers often want to share their photos and videos with friends and family on social media, DJI integrates our consumer apps with the leading social media sites via their native SDKs. We must direct questions about the security of these SDKs to their respective social media services. However, please note that the SDK is only used when our users proactively turn it on.
  • DJI GO 4 is not able to restart itself without input from the user, and we are investigating why these researchers claim it did so. We have not been able to replicate this behavior in our tests so far.
  • The hypothetical vulnerabilities outlined in these reports are best characterized as potential bugs, which we have proactively tried to identify through our Bug Bounty Program, where security researchers responsibly disclose security issues they discover in exchange for payments of up to $30,000. Since all DJI flight control apps are designed to work in any country, we have been able to improve our software thanks to contributions from researchers all over the world, as seen on this list.
  • The MobTech and Bugly components identified in these reports were previously removed from DJI flight control apps after earlier researchers identified potential security flaws in them. Again, there is no evidence they were ever exploited, and they were not used in DJI’s flight control systems for government and professional customers.
  • The DJI GO4 app is primarily used to control our recreational drone products. DJI’s drone products designed for government agencies do not transmit data to DJI and are compatible only with a non-commercially available version of the DJI Pilot app. The software for these drones is only updated via an offline process, meaning this report is irrelevant to drones intended for sensitive government use. A recent security report from Booz Allen Hamilton audited these systems and found no evidence that the data or information collected by these drones is being transmitted to DJI, China, or any other unexpected party.
  • This is only the latest independent validation of the security of DJI products following reviews by the U.S. National Oceanic and Atmospheric Administration, U.S. cybersecurity firm Kivu Consulting, the U.S. Department of Interior and the U.S. Department of Homeland Security.
  • DJI has long called for the creation of industry standards for drone data security, a process which we hope will continue to provide appropriate protections for drone users with security concerns. If this type of feature, intended to assure safety, is a concern, it should be addressed in objective standards that can be specified by customers. DJI is committed to protecting drone user data, which is why we design our systems so drone users have control of whether they share any data with us. We also are committed to safety, trying to contribute technology solutions to keep the airspace safe.

Don’t forget the Android app mess

The research and DJI’s response underscore the disarray of Google’s current app procurement system. Ineffective vetting, the lack of permission granularity in older versions of Android, and the openness of the operating system make it easy to publish malicious apps in the Play Store. Those same things also make it easy to mistake legitimate functions for malicious ones.

People who have DJI Go 4 for Android installed may want to remove it at least until Google announces the results of its investigation (the reported automatic restart behavior means it’s not sufficient to simply curtail use of the app for the time being). Ultimately, users of the app find themselves in a position similar to that of TikTok users—TikTok has also aroused suspicions, both because of behavior some consider sketchy and because of its ownership by China-based ByteDance.

There’s little doubt that plenty of Android apps with no ties to China commit similar or worse infractions than those attributed to DJI Go 4 and TikTok. People who want to err on the side of security should steer clear of a large majority of them.

Microsoft is adding Linux, Android, and firmware protections to Windows


Microsoft is moving forward with its promise to extend enterprise security protections to non-Windows platforms with the general release of a Linux version and a preview of one for Android. The software maker is also beefing up Windows security protections to scan for malicious firmware.

The Linux and Android moves—detailed in three posts Microsoft published on Tuesday—follow a move last year to ship antivirus protections to macOS. Microsoft disclosed the firmware feature last week.

Premium pricing

All the new protections are available to users of Microsoft Defender Advanced Threat Protection and require Windows 10 Enterprise Edition. Public pricing from Microsoft is either non-existent or difficult to find, but third-party listings put costs at $30 to $72 per machine per year for enterprise customers.

In February, when the Linux preview became available, Microsoft said it included antivirus alerts and “preventive capabilities.” Using a command line, admins can manage user machines, initiate and configure antivirus scans, monitor network events, and manage various threats.

“We are just at the beginning of our Linux journey and we are not stopping here!” Tuesday’s post announcing the Linux general availability said. “We are committed to continuous expansion of our capabilities for Linux and will be bringing you enhancements in the coming months.”

The Android preview, meanwhile, provides several protections, including:

  • The blocking of phishing sites and other high-risk domains and URLs accessed through SMS/text, WhatsApp, email, browsers, and other apps. The features use the same Microsoft Defender SmartScreen services that are already available for Windows so that decisions to block suspicious sites will apply across all devices on a network.
  • Proactive scanning for malicious or potentially unwanted applications and files that may be downloaded to a mobile device.
  • Measures to block access to network resources when devices show signs of being compromised with malicious apps or malware.
  • Integration to the same Microsoft Defender Security Center that’s already available for Windows, macOS, and Linux.

Last week, Microsoft said it had added firmware protection to the premium Microsoft Defender. The new offering scans the Unified Extensible Firmware Interface (UEFI), the successor to the traditional BIOS that computers use during the boot process to locate and enumerate installed hardware.

The firmware scanner uses a new component added to virus protection already built into Defender. Hacks that infect firmware are particularly pernicious because they survive reinstallations of the operating system and other security measures. And because firmware runs before Windows starts, it has the ability to burrow deep into an infected system. Until now, there have been only limited ways to detect such attacks on large fleets of machines.

It makes sense that the extensions to non-Windows platforms are available only to enterprises and cost extra. I was surprised, however, that Microsoft is charging a premium for the firmware protection and only offering it to enterprises. Plenty of journalists, attorneys, and activists are equally if not more threatened by so-called evil maid attacks, in which a housekeeper or other stranger has the ability to tamper with firmware during brief physical access to a computer.

Microsoft has a strong financial incentive to make Windows secure for all users. Company representatives didn’t respond to an email asking if the firmware scanner will become more widely available.

Intel will soon bake anti-malware defenses directly into its CPUs

A mobile PC processor code-named Tiger Lake. It will be the first CPU to offer a security capability known as Control-Flow Enforcement Technology. (Image: Intel)

The history of hacking has largely been a back-and-forth game, with attackers devising a technique to breach a system, defenders constructing a countermeasure that prevents the technique, and hackers devising a new way to bypass system security. On Monday, Intel is announcing its plans to bake a new parry directly into its CPUs that’s designed to thwart software exploits that execute malicious code on vulnerable computers.

Control-Flow Enforcement Technology, or CET, represents a fundamental change in the way processors execute instructions from applications such as Web browsers, email clients, or PDF readers. Jointly developed by Intel and Microsoft, CET is designed to thwart a technique known as return-oriented programming, which hackers use to bypass anti-exploit measures software developers introduced about a decade ago. While Intel first published its implementation of CET in 2016, the company on Monday is saying that its Tiger Lake CPU microarchitecture will be the first to include it.

ROP, as return-oriented programming is usually called, was software exploiters’ response to protections such as Executable Space Protection and address space layout randomization, which made their way into Windows, macOS, and Linux a little less than two decades ago. These defenses were designed to significantly lessen the damage software exploits could inflict by introducing changes to system memory that prevented the execution of malicious code. Even when successfully targeting a buffer overflow or other vulnerability, the exploit resulted only in a system or application crash, rather than a fatal system compromise.

ROP allowed attackers to regain the high ground. Rather than using malicious code written by the attacker, ROP attacks repurpose functions that benign applications or OS routines have already placed into a region of memory known as the stack. The “return” in ROP refers to use of the RET instruction that’s central to reordering the code flow.

Very effective

Alex Ionescu, a veteran Windows security expert and VP of engineering at security firm CrowdStrike, likes to say that if a benign program is like a building made of Lego bricks that were built in a specific sequence, ROP uses the same Lego pieces but in a different order. In so doing, ROP converts the building into a spaceship. The technique is able to bypass the anti-malware defenses because it uses memory-resident code that’s already permitted to be executed.

CET introduces changes in the CPU that create a new stack called the control stack. This stack can’t be modified by attackers and doesn’t store any data; it stores only the return addresses of the Lego bricks already on the data stack. Because of this, even if an attacker corrupts a return address on the data stack, the control stack retains the correct one, so the processor can detect the mismatch and halt execution.
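
To make the mechanism concrete, here is a toy Python model of that check—two stacks, only one of which an attacker can corrupt. It is a sketch of the idea, not Intel’s silicon:

class ShadowStackCPU:
    """Toy model of CET's control-stack check; not Intel's implementation."""

    def __init__(self):
        self.data_stack = []     # ordinary stack, corruptible by an exploit
        self.control_stack = []  # protected copy of return addresses only

    def call(self, return_addr):
        # A CALL pushes the return address onto both stacks.
        self.data_stack.append(return_addr)
        self.control_stack.append(return_addr)

    def ret(self):
        # A RET compares the two copies and faults on a mismatch.
        addr = self.data_stack.pop()
        if addr != self.control_stack.pop():
            raise RuntimeError("control-protection fault: likely ROP")
        return addr

cpu = ShadowStackCPU()
cpu.call(0x401000)
cpu.data_stack[-1] = 0xDEADBEEF  # simulate a stack-smashing overwrite
cpu.ret()                        # raises: the control stack disagrees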

“Because there is no effective software mitigation against ROP, CET will be very effective at detecting and stopping this class of vulnerability,” Ionescu told me. “Previously, operating systems and security solutions had to guess or infer that ROP had happened, or perform forensic analysis, or detect the second stage payloads/effect of the exploit.”

Not that CET is limited to defenses against ROP. It provides a host of additional protections, some of which thwart exploitation techniques known as jump-oriented programming and call-oriented programming, to name just two. The defense against ROP, however, is among the most interesting aspects of CET.

Those who do not remember the past

Intel has built other security functions into its CPUs with less-than-stellar results. One is Intel’s SGX, short for Software Guard eXtensions, which is supposed to carve out impenetrable chunks of protected memory for security-sensitive functions such as the creation of cryptographic keys. Another security add-on from Intel is known as the Converged Security and Management Engine, or simply the Management Engine. It’s a subsystem inside Intel CPUs and chipsets that implements a host of sensitive functions, among them the firmware-based Trusted Platform Module used for silicon-based encryption, authentication of UEFI BIOS firmware, and Microsoft’s System Guard and BitLocker.

A steady stream of security flaws discovered in both CPU-resident features, however, has made them vulnerable to a variety of attacks over the years. The most recent SGX vulnerabilities were disclosed just last week.

It’s tempting to think that CET will be similarly easy to defeat, or worse, will expose users to hacks that wouldn’t be possible if the protection hadn’t been added. But Joseph Fitzpatrick, a hardware hacker and a researcher at SecuringHardware.com, says he’s optimistic CET will perform better. He explained:

One distinct difference that makes me less skeptical of this type of feature versus something like SGX or ME is that both of those are “adding on” security features, as opposed to hardening existing features. ME basically added a management layer outside the operating system. SGX adds operating modes that theoretically shouldn’t be able to be manipulated by a malicious or compromised operating system. CET merely adds mechanisms to prevent normal operation—returning to addresses off the stack and jumping in and out of the wrong places in code—from completing successfully. Failure of CET to do its job only allows normal operation. It doesn’t grant the attacker access to more capabilities.

Once CET-capable CPUs are available, the protection will work only when the processor is running an operating system with the necessary support. Windows 10 Version 2004 released last month provides that support. Intel still isn’t saying when Tiger Lake CPUs will be released. While the protection could give defenders an important new tool, Ionescu and fellow researcher Yarden Shafir have already devised bypasses for it. Expect them to end up in real-world attacks within the decade.

Leaked images of new Samsung foldable surface: It’s a flip phone

We know Samsung has more foldable form factors planned after the Galaxy Fold, and it looks like one of them has popped up on Chinese social media. The pictures show a Samsung phone about the same size and shape as a Galaxy S10, but it folds in half like a flip phone. This would be Samsung’s answer to the 2020 Moto Razr.

We don’t actually see a Samsung logo in the pictures, but the device is using Samsung’s skin of Android with Samsung icons, Samsung Pay, Samsung’s navigation buttons, and a hole-punch camera design that’s centered in the top of the display, just like Samsung’s latest devices.

The inside bezel seems like it’s getting the same treatment as the Galaxy Fold. While normal slab smartphones are entirely covered by glass and the bezel is just the edges of the display, on the Fold (and apparently on this device), the bezel is a physical piece of plastic that sits on top of the display. One picture shows that the bezel sticks up above the display just a little bit, just like the Fold, allowing you to snap the phone closed without having the display touch itself. The physical plastic bezel also serves to cover the perimeter of the display, blocking dust, fingers, and anything else from getting behind the delicate, flexible display. On the Galaxy Fold, with its giant 7.3-inch display, the bezels don’t look too out of place. This normal-sized smartphone gets the same size bezels, though, and on this smaller device, they are a lot bigger than the usual smartphone bezels.

While most 2019 smartphones came with an in-screen fingerprint reader, the Galaxy Fold didn’t. Instead, Samsung opted for a side-mounted fingerprint reader. We don’t get a clear look at the right side of the device, but in the picture that shows the front display, you can make out what looks like a volume rocker and power button. The power button doesn’t stick out as much as the volume rocker, though. This is consistent with what another side-mounted fingerprint reader would look like—they are wide, flat, touch-sensitive areas that don’t stick out as much as a normal button. Of course, this is pre-release software, but we don’t see an in-screen fingerprint reader icon on the lock screen, which is another piece of evidence backing up the side-mounted fingerprint reader option. There are also probably a number of conflicts between in-screen fingerprint reader technology and the flexible, bending, moving display.

The phone folds up into a cute little square, and we can see a display of some kind on the cover, along with two cameras and what would be the main, “rear” cameras when the phone is open. We can’t be totally sure how big the cover display is, but it looks small. In the picture, it shows only the time, date, and battery level, and there’s enough light bleed that you can sort of make out the shape of the display. The glow around the letters looks to be only as big as the camera array.

The 2020 Moto Razr wowed us with its fancy hinge design that allowed the phone to fold but didn’t crease the display. We don’t get a clear picture of the display mechanism here, but there are hints that it is similar to the Galaxy Fold. The closed picture shows the spine of the hinge peeking out of the top of the device, which is exactly what the Galaxy Fold hinge looks like. We can’t see a crease in any of the pictures, but the pictures are pretty blurry, and it’s possible that the phone just hasn’t been closed and creased at the time of these pictures.

Lastly we have the bottom, where we can see a bottom-firing speaker, a USB-C port, and no headphone jack. The Galaxy Fold was announced alongside the Galaxy S10, so maybe this foldable will get a similar announcement next to the Galaxy S11, which should be sometime in February.

How to set up your own Nebula mesh VPN, step by step

Nebula, sadly, does not come with its own gallery of awesome high-res astronomy photos.

Last week, we covered the launch of Slack Engineering’s open source mesh VPN system, Nebula. Today, we’re going to dive a little deeper into how you can set up your own Nebula private mesh network—along with a little more detail about why you might (or might not) want to.

VPN mesh versus traditional VPNs

The biggest selling point of Nebula is that it’s not “just” a VPN—it’s a distributed VPN mesh. A conventional VPN is much simpler than a mesh and uses a star topology: all clients connect to a server, and any additional routing is done manually on top of that. All VPN traffic has to flow through that central server, whether it makes sense in the grander scheme of things or not.

In sharp contrast, a mesh network understands the layout of all its member nodes and routes packets between them intelligently. If node A is right next to node Z, the mesh won’t arbitrarily route all of its traffic through node M in the middle—it’ll just send packets from A to Z directly, without middlemen or unnecessary overhead. We can examine the differences with a network flow diagram demonstrating patterns in a small virtual private network.

With Nebula, connections can go directly from home/office to hotel and vice versa—and two PCs on the same LAN don’t need to leave the LAN at all. (Image: Jim Salter)

All VPNs work in part by exploiting the bi-directional nature of network tunnels. Once a tunnel has been established—even through Network Address Translation (NAT)—it’s bidirectional, regardless of which side initially reached out. This is true for both mesh and conventional VPNs—if two machines on different networks punch tunnels outbound to a cloud server, the cloud server can then tie those two tunnels together, providing a link with two hops. As long as you’ve got that one public IP answering to VPN connection requests, you can get files from one network to another—even if both endpoints are behind NAT with no port forwarding configured.

Where Nebula becomes more efficient is when two Nebula-connected machines are closer to each other than they are to the central cloud server. When a Nebula node wants to connect to another Nebula node, it’ll query a central server—what Nebula calls a lighthouse—to ask where that node can be found. Once the location has been obtained from the lighthouse, the two nodes can work out between themselves what the best route to one another might be. Typically, they’ll be able to communicate with one another directly rather than relaying through the lighthouse—even if they’re behind NAT on two different networks, neither of which has port forwarding enabled.

By contrast, connections between any two PCs on a traditional VPN must pass through its central server—adding bandwidth to that server’s monthly allotment and potentially degrading both throughput and latency from peer to peer.

Direct connection through UDP skullduggery

Nebula can—in most cases—establish a tunnel directly between two different NATted networks, without the need to configure port forwarding on either side. This is a little brain-breaking—normally, you wouldn’t expect two machines behind NAT to be able to contact each other without an intermediary. But Nebula is a UDP-only protocol, and it’s willing to cheat to achieve its goals.

If both machines reach the lighthouse, the lighthouse knows the source UDP port for each side’s outbound connection. The lighthouse can then inform one node of the other’s source UDP port, and vice versa. By itself, this isn’t enough to make it back through the NAT pinhole—but if each side targets the other’s NAT pinhole and spoofs the lighthouse’s public IP address as being the source, their packets will make it through.
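
Classic UDP hole punching—without Nebula’s source-address-spoofing twist—looks something like the Python sketch below. The peer’s public address would come from the lighthouse; the values here are invented for illustration:

import socket
import time

LOCAL_PORT = 4242              # the source port our NAT has already mapped
PEER = ("203.0.113.7", 61234)  # peer's public mapping, per the lighthouse

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.bind(("0.0.0.0", LOCAL_PORT))

# Outbound packets open (and keep open) our own NAT pinhole; the peer
# does the same thing in the other direction at roughly the same time.
for _ in range(5):
    s.sendto(b"punch", PEER)
    time.sleep(0.2)

s.settimeout(5)
data, addr = s.recvfrom(1500)  # the peer's packets now pass the pinhole
print("direct path established with", addr)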

UDP is a stateless protocol, and very few networks bother to check for and enforce boundary validation on UDP packets—so this source-address spoofing works, more often than not. However, some more advanced firewalls may check the headers on outbound packets and drop them if they have impossible source addresses.

If only one side has a boundary-validating firewall that drops spoofed outbound packets, you’re fine. But if both ends have boundary validation available, configured, and enabled, Nebula will either fail or be forced to fall back to routing through the lighthouse.

We specifically tested this and can confirm that a direct tunnel from one LAN to another across the Internet worked, with no port forwarding and no traffic routed through the lighthouse. We tested with one node behind an Ubuntu homebrew router, another behind a Netgear Nighthawk on the other side of town, and a lighthouse running on a Linode instance. Running iftop on the lighthouse showed no perceptible traffic, even though a 20Mbps iperf3 stream was cheerfully running between the two networks. So right now, in most cases, direct point-to-point connections using forged source IP addresses should work.

Setting Nebula up

To set up a Nebula mesh, you’ll need at least two nodes, one of which should be a lighthouse. Lighthouse nodes must have a public IP address—preferably, a static one. If you use a lighthouse behind a dynamic IP address, you’ll likely end up with some unavoidable frustration if and when that dynamic address updates.

The best lighthouse option is a cheap VM at the cloud provider of your choice. The $5/mo offerings at Linode or Digital Ocean are more than enough to handle the traffic and CPU levels you should expect, and it’s quick and easy to open an account and get one set up. We recommend the latest Ubuntu LTS release for your new lighthouse’s operating system; at press time that’s 18.04.

Installation

Nebula doesn’t actually have an installer; it’s just two bare command line tools in a tarball, regardless of your operating system. For that reason, we’re not going to give operating system specific instructions here: the commands and arguments are the same on Linux, MacOS, or Windows. Just download the appropriate tarball from the Nebula release page, open it up (Windows users will need 7zip for this), and dump the commands inside wherever you’d like them to be.

On Linux or MacOS systems, we recommend creating an /opt/nebula folder for your Nebula commands, keys, and configs—if you don’t have an /opt yet, that’s okay, just create it, too. On Windows, C:\Program Files\Nebula is probably a more sensible location.

Certificate Authority configuration and key generation

The first thing you’ll need to do is create a Certificate Authority using the nebula-cert program. Nebula, thankfully, makes this a mind-bogglingly simple process:

root@lighthouse:/opt/nebula# ./nebula-cert ca -name "My Shiny Nebula Mesh Network"

What you’ve actually done is create a certificate and key for the entire network. Using that key, you can sign keys for each node itself. Unlike the CA certificate, node certificates need to have the Nebula IP address for each node baked into them when they’re created. So stop for a minute and think about what subnet you’d like to use for your Nebula mesh. It should be a private subnet—so it doesn’t conflict with any Internet resources you might need to use—and it should be an oddball one so that it won’t conflict with any LANs you happen to be on.

Nice, round numbers like 192.168.0.x, 192.168.1.x, 192.168.254.x, and 10.0.0.x should be right out, as the odds are extremely good you’ll stay at a hotel, a friend’s house, etc., that uses one of those subnets. We went with 192.168.98.x—but feel free to get more random than that. Your lighthouse will occupy .1 on whatever subnet you choose, and you will allocate new addresses for nodes as you create their keys. Let’s go ahead and set up keys for our lighthouse and nodes now:

root@lighthouse:/opt/nebula# ./nebula-cert sign -name "lighthouse" -ip "192.168.98.1/24"
root@lighthouse:/opt/nebula# ./nebula-cert sign -name "banshee" -ip "192.168.98.2/24"
root@lighthouse:/opt/nebula# ./nebula-cert sign -name "locutus" -ip "192.168.98.3/24"

Now that you’ve generated all your keys, consider getting them the heck out of your lighthouse, for security. You need the ca.key file only when actually signing new keys, not to run Nebula itself. Ideally, you should move ca.key out of your working directory entirely to a safe place—maybe even a safe place that isn’t connected to Nebula at all—and only restore it temporarily if and as you need it. Also note that the lighthouse itself doesn’t need to be the machine that runs nebula-cert—if you’re feeling paranoid, it’s even better practice to do CA stuff from a completely separate box and just copy the keys and certs out as you create them.

Each Nebula node does need a copy of ca.crt, the CA certificate. It also needs its own .key and .crt, matching the name you gave it above. You don’t need any other node’s key or certificate, though—the nodes can exchange them dynamically as needed—and for security best practice, you really shouldn’t keep all the .key and .crt files in one place. (If you lose one, you can always just generate another that uses the same name and Nebula IP address from your CA later.)

Configuring Nebula with config.yml

Nebula’s Github repo offers a sample config.yml with pretty much every option under the sun and lots of comments wrapped around them, and we absolutely recommend anyone interested poke through it to see all the things that can be done. However, if you just want to get things moving, it may be easier to start with a drastically simplified config that has nothing but what you need.

Lines that begin with a hashtag are commented out and not interpreted.

#
# This is Ars Technica's sample Nebula config file.
#
pki:
  # every node needs a copy of the CA certificate,
  # and its own certificate and key, ONLY.
  #
  ca: /opt/nebula/ca.crt
  cert: /opt/nebula/lighthouse.crt
  key: /opt/nebula/lighthouse.key
static_host_map:
  # how to find one or more lighthouse nodes
  # you do NOT need every node to be listed here!
  #
  # format "Nebula IP": ["public IP or hostname:port"]
  #
  "192.168.98.1": ["nebula.arstechnica.com:4242"]
lighthouse:
  interval: 60
  # if you're a lighthouse, say you're a lighthouse
  #
  am_lighthouse: true
  hosts:
    # If you're a lighthouse, this section should be EMPTY
    # or commented out. If you're NOT a lighthouse, list
    # lighthouse nodes here, one per line, in the following
    # format:
    #
    # - "192.168.98.1"
listen:
  # 0.0.0.0 means "all interfaces," which is probably what you want
  #
  host: 0.0.0.0
  port: 4242
# "punchy" basically means "send frequent keepalive packets"
# so that your router won't expire and close your NAT tunnels.
#
punchy: true
# "punch_back" allows the other node to try punching out to you,
# if you're having trouble punching out to it. Useful for stubborn
# networks with symmetric NAT, etc.
#
punch_back: true
tun:
  # sensible defaults. don't monkey with these unless
  # you're CERTAIN you know what you're doing.
  #
  dev: nebula1
  drop_local_broadcast: false
  drop_multicast: false
  tx_queue: 500
  mtu: 1300
  routes:
logging:
  level: info
  format: text
# you NEED this firewall section.
#
# Nebula has its own firewall in addition to anything
# your system has in place, and it's all default deny.
#
# So if you don't specify some rules here, you'll drop
# all traffic, and curse and wonder why you can't ping
# one node from another.
#
firewall:
  conntrack:
    tcp_timeout: 120h
    udp_timeout: 3m
    default_timeout: 10m
    max_connections: 100000
# since everything is default deny, all rules you
# actually SPECIFY here are allow rules.
#
  outbound:
    - port: any
      proto: any
      host: any
  inbound:
    - port: any
      proto: any
      host: any

Warning: our CMS is mangling some of the whitespace in this code, so don’t try to copy and paste it directly. Instead, get working, guaranteed-whitespace-proper copies from Github: config.lighthouse.yaml and config.node.yaml.

There isn’t much different between lighthouse and normal node configs. If the node is not to be a lighthouse, just set am_lighthouse to false, and uncomment (remove the leading hashtag from) the line # - "192.168.98.1", which points the node to the lighthouse it should report to.

Note that the lighthouse:hosts list uses the Nebula IP of the lighthouse node, not its real-world public IP! The only place real-world IP addresses should show up is in the static_host_map section.

Starting nebula on each node

I hope you Windows and Mac types weren’t expecting some sort of GUI—or an applet in the dock or system tray, or a preconfigured service or daemon—because you’re not getting one. Grab a terminal—a command prompt run as Administrator, for you Windows folks—and run nebula against its config file. Minimize the terminal/command prompt window after you run it.

root@lighthouse:/opt/nebula# ./nebula -config ./config.yml

That’s all you get. If you left the logging set at info the way we have it in our sample config files, you’ll see a bit of informational stuff scroll up as your nodes come online and begin figuring out how to contact one another.

If you’re a Linux or Mac user, you might also consider using the screen utility to hide nebula away from your normal console or terminal (and keep it from closing when that session terminates).

Figuring out how to get Nebula to start automatically is, unfortunately, an exercise we’ll need to leave for the user—it’s different from distro to distro on Linux (mostly depending on whether you’re using systemd or init). Advanced Windows users should look into running Nebula as a custom service, and Mac folks should call Senior Technology Editor Lee Hutchinson on his home phone and ask him for help directly.

Conclusion

Nebula is a pretty cool project. We love that it’s open source, that it uses the Noise platform for crypto, that it’s available on all three major desktop platforms, and that it’s easy…ish to set up and use.

With that said, Nebula in its current form is really not for people afraid to get their hands dirty on the command line—not just once, but always. We have a feeling that some real UI and service scaffolding will show up eventually—but until it does, as compelling as it is, it’s not ready for “normal users.”

Right now, Nebula’s probably best used by sysadmins and hobbyists who are determined to take advantage of its dynamic routing and don’t mind the extremely visible nuts and bolts and lack of anything even faintly like a friendly interface. We definitely don’t recommend it in its current form to “normal users”—whether that means yourself or somebody you need to support.

Unless you really, really need that dynamic point-to-point routing, a more conventional VPN like WireGuard is almost certainly a better bet for the moment.

The Good

  • Free and open source software, released under the MIT license
  • Cross platform—looks and operates exactly the same on Windows, Mac, and Linux
  • Reasonably fast—our Ryzen 7 3700X managed 1.7Gbps from itself to one of its own VMs across Nebula
  • Point-to-point tunneling means near-zero bandwidth needed at lighthouses
  • Dynamic routing opens interesting possibilities for portable systems
  • Simple, accessible logging makes Nebula troubleshooting a bit easier than WireGuard troubleshooting

The Bad

  • No Android or iOS support yet
  • No service/daemon wrapper included
  • No UI, launcher, applet, etc

The Ugly

  • Did we mention the complete lack of scaffolding? Please don’t ask non-technical people to use this yet
  • The Windows port requires the OpenVPN project’s tap-windows6 driver—which is, unfortunately, notoriously buggy and cantankerous
  • “Reasonably fast” is relative—most PCs should saturate gigabit links easily enough, but WireGuard is at least twice as fast as Nebula on Linux