Firewalls aren’t just for corporate networks. Large numbers of security- or privacy-conscious people also use them to filter or redirect traffic flowing in and out of their computers. Apple recently made a major change to macOS that frustrates these efforts.
Beginning with macOS Catalina, released last year, Apple added a list of 50 Apple-specific apps and processes that were to be exempted from firewalls like Little Snitch and LuLu. The undocumented exemption, which didn’t take effect until firewalls were rewritten to implement changes in Big Sur, first came to light in October. Patrick Wardle, a security researcher at Mac and iOS enterprise developer Jamf, further documented the new behavior over the weekend.
In Big Sur Apple decided to exempt many of its apps from being routed thru the frameworks they now require 3rd-party firewalls to use (LuLu, Little Snitch, etc.) 🧐
Q: Could this be (ab)used by malware to also bypass such firewalls? 🤔
A: Apparently yes, and trivially so 😬😱😭 pic.twitter.com/CCNcnGPFIB
— patrick wardle (@patrickwardle) November 14, 2020
To demonstrate the risks that come with this move, Wardle—a former hacker for the NSA—showed how malware developers could exploit the change to make an end run around a tried-and-true security measure. He set LuLu and Little Snitch to block all outgoing traffic on a Mac running Big Sur and then ran a small Python script that interacted with one of the apps Apple exempted. The script had no trouble reaching a command-and-control server he set up to simulate one commonly used by malware to exfiltrate sensitive data.
“It kindly asked (coerced?) one of the trusted Apple items to generate network traffic to an attacker-controlled server and could (ab)use this to exfiltrate files,” Wardle, referring to the script, told me. “Basically, ‘Hey, Mr. Apple Item, can you please send this file to Patrick’s remote server?’ And it would kindly agree. And since the traffic was coming from the trusted item, it would never be routed through the firewall… meaning the firewall is 100% blind.”
Wardle tweeted a portion of a bug report he submitted to Apple during the Big Sur beta phase. It specifically warns that “essential security tools such as firewalls are ineffective” under the change.
Apple has yet to explain the reason behind the change. Firewall misconfigurations are often the source of software not working properly. One possibility is that Apple implemented the move to reduce the number of support requests it receives and make the Mac experience better for people not schooled in setting up effective firewall rules. It’s not unusual for firewalls to exempt their own traffic. Apple may be applying the same rationale.
But the inability to override the settings violates a core tenet that people ought to be able to selectively restrict traffic flowing from their own computers. In the event that a Mac does become infected, the change also gives hackers a way to bypass what for many is an effective mitigation against such attacks.
“The issue I see is that it opens the door for doing exactly what Patrick demoed… malware authors can use this to sneak data around a firewall,” Thomas Reed, director of Mac and mobile offerings at security firm Malwarebytes, said. “Plus, there’s always the potential that someone may have a legitimate need to block some Apple traffic for some reason, but this takes away that ability without using some kind of hardware network filter outside the Mac.”
People who want to know which apps and processes are exempt can open the macOS terminal and enter:

```
sudo defaults read /System/Library/Frameworks/NetworkExtension.framework/Resources/Info.plist ContentFilterExclusionList
```
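The command prints the exclusion list as a property-list array. For anyone scripting an audit of the list, the same data can be parsed with Python's standard-library plistlib. The plist below is a made-up stand-in for the real Info.plist, and its two entries are illustrative, not drawn from Apple's actual list:

```python
import plistlib

# Made-up stand-in for NetworkExtension.framework's Info.plist;
# the real file lives under /System/Library/Frameworks on Big Sur,
# and these two entries are illustrative examples only.
sample_plist = b"""<?xml version="1.0" encoding="UTF-8"?>
<plist version="1.0">
<dict>
    <key>ContentFilterExclusionList</key>
    <array>
        <string>com.apple.appstored</string>
        <string>/usr/libexec/trustd</string>
    </array>
</dict>
</plist>
"""

# Parse the plist and pull out the exclusion array.
info = plistlib.loads(sample_plist)
exclusions = info["ContentFilterExclusionList"]
for item in exclusions:
    print(item)
```

On a real Big Sur system, the same `plistlib.load` call pointed at the framework's Info.plist would yield the full 50-item list.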
The change came as Apple deprecated macOS kernel extensions, which software developers used to make apps interact directly with the OS. The deprecation included NKEs—short for network kernel extensions—that third-party firewall products used to monitor incoming and outgoing traffic.
In place of NKEs, Apple introduced a new user-mode framework called the Network Extension Framework. To run on Big Sur, all third-party firewalls that used NKEs had to be rewritten to use the new framework.
Apple representatives didn’t respond to emailed questions about this change. This post will be updated if they respond later. In the meantime, people who want to override the new exemption will have to find alternatives. As Reed noted above, one option is a network filter that runs outside the Mac. Another is PF, the Packet Filter firewall built into macOS.
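For readers weighing the PF option, here is a rough sketch of what a default-deny outbound ruleset in /etc/pf.conf might look like. The host address is a placeholder, and because PF filters at the packet level in the kernel, rules like these apply regardless of which process generated the traffic:

```
# Sketch of /etc/pf.conf rules (addresses are placeholders)
block drop out all                          # default-deny all outbound traffic
pass out proto udp to any port 53           # allow DNS lookups
pass out proto tcp to 192.0.2.10 port 443   # allow HTTPS to one approved host
```

The rules are loaded with `sudo pfctl -f /etc/pf.conf` and enforcement is enabled with `sudo pfctl -e`. PF's syntax is considerably richer than this; the point is only that packet-level filtering sidesteps the NetworkExtension exclusion list entirely.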
A popular smartwatch designed exclusively for children contains an undocumented backdoor that makes it possible for someone to remotely capture camera snapshots, wiretap voice calls, and track locations in real time, a researcher said.
The X4 smartwatch is marketed by Xplora, a Norway-based seller of children’s watches. The device, which sells for about $200, runs on Android and offers a range of capabilities, including the ability to make and receive voice calls to parent-approved numbers and to send an SOS broadcast that alerts emergency contacts to the location of the watch. A separate app that runs on the smartphones of parents allows them to control how the watches are used and receive warnings when a child has strayed beyond a preset geographic boundary.
But that’s not all
It turns out that the X4 contains something else: a backdoor that went undiscovered until some impressive digital sleuthing. The backdoor is activated by sending an encrypted text message. Harrison Sand, a researcher at Norwegian security company Mnemonic, said that commands exist for surreptitiously reporting the watch’s real-time location, taking a snapshot and sending it to an Xplora server, and making a phone call that transmits all sounds within earshot.
Sand also found that 19 of the apps that come pre-installed on the watch are developed by Qihoo 360, a security company and app maker located in China. A Qihoo 360 subsidiary, 360 Kids Guard, also jointly designed the X4 with Xplora and manufactures the watch hardware.
“I wouldn’t want that kind of functionality in a device produced by a company like that,” Sand said, referring to the backdoor and Qihoo 360.
In June, Qihoo 360 was placed on a US Commerce Department sanctions list. The rationale: ties to the Chinese government made the company likely to engage in “activities contrary to the national security or foreign policy interests of the United States.” Qihoo 360 declined to comment for this post.
Patch on the way
The existence of an undocumented backdoor in a watch manufactured in a country with a known record of espionage hacks is concerning. At the same time, this particular backdoor has limited applicability. To make use of the functions, someone would need to know both the phone number assigned to the watch (it has a slot for a SIM card from a mobile phone carrier) and the unique encryption key hardwired into each device.
In a statement, Xplora said obtaining both the key and phone number for a given watch would be difficult. The company also said that even if the backdoor was activated, obtaining any collected data would be hard, too. The statement read:
We want to thank you for bringing a potential risk to our attention. Mnemonic is not providing any information beyond that they sent you the report. We take any potential security flaw extremely seriously.
It is important to note that the scenario the researchers created requires physical access to the X4 watch and specialized tools to secure the watch’s encryption key. It also requires the watch’s private phone number. The phone number for every Xplora watch is determined when it is activated by the parents with a carrier, so no one involved in the manufacturing process would have access to it to duplicate the scenario the researchers created.
As the researchers made clear, even if someone with physical access to the watch and the skill to send an encrypted SMS activates this potential flaw, the snapshot photo is only uploaded to Xplora’s server in Germany and is not accessible to third parties. The server is located in a highly-secure Amazon Web Services environment.
Only two Xplora employees have access to the secure database where customer information is stored and all access to that database is tracked and logged.
This issue the testers identified was based on a remote snapshot feature included in initial internal prototype watches for a potential feature that could be activated by parents after a child pushes an SOS emergency button. We removed the functionality for all commercial models due to privacy concerns. The researcher found some of the code was not completely eliminated from the firmware.
Since being alerted, we have developed a patch for the Xplora 4, which is not available for sale in the US, to address the issue and will push it out prior to 8:00 a.m. CET on October 9. We conducted an extensive audit since we were notified and have found no evidence of the security flaw being used outside of the Mnemonic testing.
An Xplora spokesman said the company has sold about 100,000 X4 smartwatches to date. The company is in the process of rolling out the X5. It’s not yet clear whether the new model contains similar backdoor functionality.
Sand discovered the backdoor through some impressive reverse engineering. He started with a modified USB cable that he soldered onto pins exposed on the back of the watch. Using an interface for updating the device firmware, he was able to download the existing firmware off the watch. This allowed him to inspect the insides of the watch, including the apps and various other code packages that were installed.
One package that stood out was titled “Persistent Connection Service.” It starts as soon as the device is turned on and iterates through all the installed applications. As it queries each application, it builds a list of intents—or messaging frameworks—it can call to communicate with each app.
Sand’s suspicions were further aroused when he found several suspiciously named intents.
After more poking around, Sand figured out the intents were activated using SMS text messages that were encrypted with the hardwired key. System logs showed him that the key was stored on a flash chip, so he dumped the contents and obtained it—“#hml;Fy/sQ9z5MDI=$” (quotation marks not included). Reverse engineering also allowed the researcher to figure out the syntax required to activate the remote snapshot function.
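Recovering a hardwired key from a flash dump typically comes down to hunting for printable strings in binary data, much as the Unix `strings` utility does. The sketch below illustrates that generic technique in Python; the dump bytes are fabricated around the published key, and this is not Sand's actual tooling:

```python
import re

def printable_strings(blob: bytes, min_len: int = 8):
    """Extract runs of printable ASCII from a binary blob,
    the same basic approach as the Unix `strings` utility."""
    pattern = re.compile(rb"[ -~]{%d,}" % min_len)
    return [m.group().decode("ascii") for m in pattern.finditer(blob)]

# Fabricated stand-in for a flash dump; only the key itself
# comes from the published research.
dump = b"\x00\x01junk\xff" + b"#hml;Fy/sQ9z5MDI=$" + b"\x00more\x02"
for s in printable_strings(dump):
    print(s)
```

Short fragments like "junk" fall below the minimum length and are filtered out, which is why a length threshold cuts the noise dramatically on a real multi-megabyte dump.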
“Sending the SMS triggered a picture to be taken on the watch, and it was immediately uploaded to Xplora’s server,” Sand wrote. “There was zero indication on the watch that a photo was taken. The screen remained off the entire time.”
Sand said he didn’t activate the functions for wiretapping or reporting locations, but with additional time, he said, he’s confident he could have.
As both Sand and Xplora note, exploiting this backdoor would be difficult, since it requires knowledge of both the unique factory-set encryption key and the phone number assigned to the watch. For that reason, there’s no reason for people who own a vulnerable device to panic.
Still, it’s not beyond the realm of possibility that the key could be obtained by someone with ties to the manufacturer. And while phone numbers aren’t usually published, they’re not exactly private, either.
The backdoor underscores the kinds of risks posed by the increasing number of everyday devices that run on firmware that can’t be independently inspected without the kinds of heroic measures employed by Sand. While the chances of this particular backdoor being used are low, people who own an X4 would do well to ensure their device installs the patch as soon as practical.
A recently released tool is letting anyone exploit an unusual Mac vulnerability to bypass Apple’s trusted T2 security chip and gain deep system access. The flaw is one researchers have also been using for more than a year to jailbreak older models of iPhones. But the fact that the T2 chip is vulnerable in the same way creates a new host of potential threats. Worst of all, while Apple may be able to slow down potential hackers, the flaw is ultimately unfixable in every Mac that has a T2 inside.
In general, the jailbreak community hasn’t paid as much attention to macOS and OS X as it has iOS, because they don’t have the same restrictions and walled gardens that are built into Apple’s mobile ecosystem. But the T2 chip, launched in 2017, created some limitations and mysteries. Apple added the chip as a trusted mechanism for securing high-value features like encrypted data storage, Touch ID, and Activation Lock, which works with Apple’s “Find My” services. But the T2 also contains a vulnerability, known as Checkm8, that jailbreakers have already been exploiting in Apple’s A5 through A11 (2011 to 2017) mobile chipsets. Now Checkra1n, the same group that developed the tool for iOS, has released support for T2 bypass.
On Macs, the jailbreak allows researchers to probe the T2 chip and explore its security features. It can even be used to run Linux on the T2 or play Doom on a MacBook Pro’s Touch Bar. The jailbreak could also be weaponized by malicious hackers, though, to disable macOS security features like System Integrity Protection and Secure Boot and install malware. Combined with another T2 vulnerability that was publicly disclosed in July by the Chinese security research and jailbreaking group Pangu Team, the jailbreak could also potentially be used to obtain FileVault encryption keys and to decrypt user data. The vulnerability is unpatchable, because the flaw is in low-level, unchangeable code for hardware.
“The T2 is meant to be this little secure black box in Macs—a computer inside your computer, handling things like Lost Mode enforcement, integrity checking, and other privileged duties,” says Will Strafach, a longtime iOS researcher and creator of the Guardian Firewall app for iOS. “So the significance is that this chip was supposed to be harder to compromise—but now it’s been done.”
Apple did not respond to WIRED’s requests for comment.
There are a few important limitations of the jailbreak, though, that keep this from being a full-blown security crisis. The first is that an attacker would need physical access to target devices in order to exploit them. The tool can only run off of another device over USB. This means hackers can’t remotely mass-infect every Mac that has a T2 chip. An attacker could jailbreak a target device and then disappear, but the compromise isn’t “persistent”; it ends when the T2 chip is rebooted. The Checkra1n researchers do caution, though, that the T2 chip itself doesn’t reboot every time the device does. To be certain that a Mac hasn’t been compromised by the jailbreak, the T2 chip must be fully restored to Apple’s defaults. Finally, the jailbreak doesn’t give an attacker instant access to a target’s encrypted data. It could allow hackers to install keyloggers or other malware that could later grab the decryption keys, or it could make it easier to brute-force them, but Checkra1n isn’t a silver bullet.
“There are plenty of other vulnerabilities, including remote ones that undoubtedly have more impact on security,” a Checkra1n team member tweeted on Tuesday.
In a discussion with WIRED, the Checkra1n researchers added that they see the jailbreak as a necessary tool for transparency about T2. “It’s a unique chip, and it has differences from iPhones, so having open access is useful to understand it at a deeper level,” a group member said. “It was a complete black box before, and we are now able to look into it and figure out how it works for security research.”
The exploit also comes as little surprise; it’s been apparent since the original Checkm8 discovery last year that the T2 chip was also vulnerable in the same way. And researchers point out that while the T2 chip debuted in 2017 in top-tier iMacs, it only recently rolled out across the entire Mac line. Older Macs with a T1 chip are unaffected. Still, the finding is significant because it undermines a crucial security feature of newer Macs.
Jailbreaking has long been a gray area because of this tension. It gives users freedom to install and modify whatever they want on their devices, but it is achieved by exploiting vulnerabilities in Apple’s code. Hobbyists and researchers use jailbreaks in constructive ways, including to conduct more security testing and potentially help Apple fix more bugs, but there’s always the chance that attackers could weaponize jailbreaks for harm.
“I had already assumed that since T2 was vulnerable to Checkm8, it was toast,” says Patrick Wardle, an Apple security researcher at the enterprise management firm Jamf and a former NSA researcher. “There really isn’t much that Apple can do to fix it. It’s not the end of the world, but this chip, which was supposed to provide all this extra security, is now pretty much moot.”
Wardle points out that for companies that manage their devices using Apple’s Activation Lock and Find My features, the jailbreak could be particularly problematic both in terms of possible device theft and other insider threats. And he notes that the jailbreak tool could be a valuable jumping off point for attackers looking to take a shortcut to developing potentially powerful attacks. “You likely could weaponize this and create a lovely in-memory implant that, by design, disappears on reboot,” he says. This means that the malware would run without leaving a trace on the hard drive and would be difficult for victims to track down.
The situation raises much deeper issues, though, with the basic approach of using a special, trusted chip to secure other processes. Beyond Apple’s T2, numerous other tech vendors have tried this approach and had their secure enclaves defeated, including Intel, Cisco, and Samsung.
“Building in hardware ‘security’ mechanisms is just always a double-edged sword,” says Ang Cui, founder of the embedded device security firm Red Balloon. “If an attacker is able to own the secure hardware mechanism, the defender usually loses more than they would have if they had built no hardware. It’s a smart design in theory, but in the real world it usually backfires.”
In this case, you’d likely have to be a very high-value target to register any real alarm. But hardware-based security measures do create a single point of failure that the most important data and systems rely on. Even if the Checkra1n jailbreak doesn’t provide unlimited access for attackers, it gives them more than anyone would want.
This story originally appeared on wired.com.
The Android version of DJI Go 4—an app that lets users control drones—has until recently been covertly collecting sensitive user data and can download and execute code of the developers’ choice, researchers said in two reports that question the security and trustworthiness of a program with more than 1 million Google Play downloads.
The app is used to control and collect near real-time video and flight data from drones made by China-based DJI, the world’s biggest maker of commercial drones. The Play Store shows that it has more than 1 million downloads, but because of the way Google discloses numbers, the true number could be as high as 5 million. The app has a rating of three-and-a-half stars out of a possible total of five from more than 52,000 users.
Wide array of sensitive user data
Two weeks ago, security firm Synacktiv reverse-engineered the app. On Thursday, fellow security firm Grimm published the results of its own independent analysis. At a minimum, both found that the app skirted Google’s terms and that, until recently, the app covertly collected a wide array of sensitive user data and sent it to servers located in mainland China. A worst-case scenario is that developers are abusing hard-to-identify features to spy on users.
According to the reports, the suspicious behaviors include:
- The ability to download and install any application of the developers’ choice through either a self-update feature or a dedicated installer in a software development kit provided by China-based social media platform Weibo. Both features could download code outside of Play, in violation of Google’s terms.
- A recently removed component that collected a wealth of phone data, including IMEI, IMSI, carrier name, SIM serial number, SD card information, OS language, kernel version, screen size and brightness, wireless network name, address and MAC, and Bluetooth addresses. These details and more were sent to MobTech, maker of a software developer kit used until the most recent release of the app.
- Automatic restarts whenever a user swiped the app to close it. The restarts cause the app to run in the background and continue to make network requests.
- Advanced obfuscation techniques that make third-party analysis of the app time-consuming.
This month’s reports come three years after the US Army banned the use of DJI drones for reasons that remain classified. In January, the Interior Department grounded drones from DJI and other Chinese manufacturers out of concerns data could be sent back to the mainland.
DJI officials said the researchers found “hypothetical vulnerabilities” and that neither report provided any evidence that they were ever exploited.
“The app update function described in these reports serves the very important safety goal of mitigating the use of hacked apps that seek to override our geofencing or altitude limitation features,” they wrote in a statement. Geofencing uses virtual barriers to keep drones out of areas where the Federal Aviation Administration or other authorities bar them from flying. Drones use GPS, Bluetooth, and other technologies to enforce the restrictions.
A Google spokesman said the company is looking into the reports. The researchers said the iOS version of the app contained no obfuscation or update mechanisms.
Obfuscated, acquisitive, and always on
In several respects, the researchers said, DJI Go 4 for Android mimicked the behavior of botnets and malware. Both the self-update and auto-install components, for instance, call a developer-designated server and await commands to download and install code or apps. The obfuscation techniques closely resembled those used by malware to prevent researchers from discovering its true purpose. Other similarities were an always-on status and the collection of sensitive data that wasn’t relevant or necessary for the stated purpose of flying drones.
Making the behavior more concerning is the breadth of permissions required to use the app, which include access to contacts, microphone, camera, location, storage, and the ability to change network connectivity. Such sprawling permissions meant that the servers of DJI or Weibo, both located in a country known for its government-sponsored espionage hacking, had almost full control over users’ devices, the researchers said.
Both research teams said they saw no evidence the app installer was ever actually used, but they did see the automatic update mechanism trigger and download a new version from the DJI server and install it. The download URLs for both features are dynamically generated, meaning they are provided by a remote server and can be changed at any time.
The researchers from both firms conducted experiments that showed how both mechanisms could be used to install arbitrary apps. While the programs were delivered automatically, the researchers still had to click their approval before the programs could be installed.
Both research reports stopped short of saying the app actually targeted individuals, and both noted that the collection of IMSIs and other data had ended with the release of current version 4.3.36. The teams, however, didn’t rule out the possibility of nefarious uses. Grimm researchers wrote:
In the best case scenario, these features are only used to install legitimate versions of applications that may be of interest to the user, such as suggesting additional DJI or Weibo applications. In this case, the much more common technique is to display the additional application in the Google Play Store app by linking to it from within your application. Then, if the user chooses to, they can install the application directly from the Google Play Store. Similarly, the self-updating components may only be used to provide users with the most up-to-date version of the application. However, this can be more easily accomplished through the Google Play Store.
In the worst case, these features can be used to target specific users with malicious updates or applications that could be used to exploit the user’s phone. Given the amount of user’s information retrieved from their device, DJI or Weibo would easily be able to identify specific targets of interest. The next step in exploiting these targets would be to suggest a new application (via the Weibo SDK) or update the DJI application with a customized version built specifically to exploit their device. Once their device has been exploited, it could be used to gather additional information from the phone, track the user via the phone’s various sensors, or be used as a springboard to attack other devices on the phone’s WiFi network. This targeting system would allow an attacker to be much stealthier with their exploitation, rather than much noisier techniques, such as exploiting all devices visiting a website.
DJI officials have published an exhaustive and vigorous response saying that all the features and components detailed in the reports either served legitimate purposes or were unilaterally removed and weren’t used maliciously.
“We design our systems so DJI customers have full control over how or whether to share their photos, videos and flight logs, and we support the creation of industry standards for drone data security that will provide protection and confidence for all drone users,” the statement said. It provided the following point-by-point discussion:
- When our systems detect that a DJI app is not the official version – for example, if it has been modified to remove critical flight safety features like geofencing or altitude restrictions – we notify the user and require them to download the most recent official version of the app from our website. In future versions, users will also be able to download the official version from Google Play if it is available in their country. If users do not consent to doing so, their unauthorized (hacked) version of the app will be disabled for safety reasons.
- Unauthorized modifications to DJI control apps have raised concerns in the past, and this technique is designed to help ensure that our comprehensive airspace safety measures are applied consistently.
- Because our recreational customers often want to share their photos and videos with friends and family on social media, DJI integrates our consumer apps with the leading social media sites via their native SDKs. We must direct questions about the security of these SDKs to their respective social media services. However, please note that the SDK is only used when our users proactively turn it on.
- DJI GO 4 is not able to restart itself without input from the user, and we are investigating why these researchers claim it did so. We have not been able to replicate this behavior in our tests so far.
- The hypothetical vulnerabilities outlined in these reports are best characterized as potential bugs, which we have proactively tried to identify through our Bug Bounty Program, where security researchers responsibly disclose security issues they discover in exchange for payments of up to $30,000. Since all DJI flight control apps are designed to work in any country, we have been able to improve our software thanks to contributions from researchers all over the world, as seen on this list.
- The MobTech and Bugly components identified in these reports were previously removed from DJI flight control apps after earlier researchers identified potential security flaws in them. Again, there is no evidence they were ever exploited, and they were not used in DJI’s flight control systems for government and professional customers.
- The DJI GO4 app is primarily used to control our recreational drone products. DJI’s drone products designed for government agencies do not transmit data to DJI and are compatible only with a non-commercially available version of the DJI Pilot app. The software for these drones is only updated via an offline process, meaning this report is irrelevant to drones intended for sensitive government use. A recent security report from Booz Allen Hamilton audited these systems and found no evidence that the data or information collected by these drones is being transmitted to DJI, China, or any other unexpected party.
- This is only the latest independent validation of the security of DJI products following reviews by the U.S. National Oceanic and Atmospheric Administration, U.S. cybersecurity firm Kivu Consulting, the U.S. Department of Interior and the U.S. Department of Homeland Security.
- DJI has long called for the creation of industry standards for drone data security, a process which we hope will continue to provide appropriate protections for drone users with security concerns. If this type of feature, intended to assure safety, is a concern, it should be addressed in objective standards that can be specified by customers. DJI is committed to protecting drone user data, which is why we design our systems so drone users have control of whether they share any data with us. We also are committed to safety, trying to contribute technology solutions to keep the airspace safe.
Don’t forget the Android app mess
The research and DJI’s response underscore the disarray of Google’s current app procurement system. Ineffective vetting, the lack of permission granularity in older versions of Android, and the openness of the operating system make it easy to publish malicious apps in the Play Store. Those same things also make it easy to mistake legitimate functions for malicious ones.
People who have DJI Go 4 for Android installed may want to remove it at least until Google announces the results of its investigation (the reported automatic-restart behavior means simply curtailing use of the app isn’t sufficient for the time being). Ultimately, users of the app find themselves in a position similar to that of TikTok users; that app, too, has aroused suspicion, both because of behavior some consider sketchy and because of its ownership by China-based ByteDance.
There’s little doubt that plenty of Android apps with no ties to China commit similar or worse infractions than those attributed to DJI Go 4 and TikTok. People who want to err on the side of security should steer clear of a large majority of them.
Microsoft is moving forward with its promise to extend enterprise security protections to non-Windows platforms with the general release of a Linux version and a preview of one for Android. The software maker is also beefing up Windows security protections to scan for malicious firmware.
The Linux and Android moves—detailed in posts Microsoft published on Tuesday—follow a move last year to ship antivirus protections to macOS. Microsoft disclosed the firmware feature last week.
All the new protections are available to users of Microsoft Defender Advanced Threat Protection and require Windows 10 Enterprise Edition. Microsoft doesn’t publish pricing, but third-party listings put costs at roughly $30 to $72 per machine per year for enterprise customers.
In February, when the Linux preview became available, Microsoft said it included antivirus alerts and “preventive capabilities.” Using a command line, admins can manage user machines, initiate and configure antivirus scans, monitor network events, and manage various threats.
“We are just at the beginning of our Linux journey and we are not stopping here!” Tuesday’s post announcing the Linux general availability said. “We are committed to continuous expansion of our capabilities for Linux and will be bringing you enhancements in the coming months.”
The Android preview, meanwhile, provides several protections, including:
- The blocking of phishing sites and other high-risk domains and URLs accessed through SMS/text, WhatsApp, email, browsers, and other apps. The features use the same Microsoft Defender SmartScreen services that are already available for Windows so that decisions to block suspicious sites will apply across all devices on a network.
- Proactive scanning for malicious or potentially unwanted applications and files that may be downloaded to a mobile device.
- Measures to block access to network resources when devices show signs of being compromised with malicious apps or malware.
- Integration with the same Microsoft Defender Security Center that’s already available for Windows, macOS, and Linux.
Last week, Microsoft said it had added firmware protection to the premium Microsoft Defender. The new offering scans the Unified Extensible Firmware Interface (UEFI), the successor to the traditional BIOS, which computers use during the boot process to locate and enumerate installed hardware.
The firmware scanner uses a new component added to virus protection already built into Defender. Hacks that infect firmware are particularly pernicious because they survive reinstallations of the operating system and other security measures. And because firmware runs before Windows starts, it has the ability to burrow deep into an infected system. Until now, there have been only limited ways to detect such attacks on large fleets of machines.
It makes sense that the extensions to non-Windows platforms are available only to enterprises and cost extra. I was surprised, however, that Microsoft is charging a premium for the firmware protection and only offering it to enterprises. Plenty of journalists, attorneys, and activists are equally if not more threatened by so-called evil maid attacks, in which a housekeeper or other stranger has the ability to tamper with firmware during brief physical access to a computer.
Microsoft has a strong financial incentive to make Windows secure for all users. Company representatives didn’t respond to an email asking if the firmware scanner will become more widely available.
The history of hacking has largely been a back-and-forth game, with attackers devising a technique to breach a system, defenders constructing a countermeasure that prevents the technique, and hackers devising a new way to bypass system security. On Monday, Intel is announcing its plans to bake a new parry directly into its CPUs that’s designed to thwart software exploits that execute malicious code on vulnerable computers.
Control-Flow Enforcement Technology, or CET, represents a fundamental change in the way processors execute instructions from applications such as Web browsers, email clients, or PDF readers. Jointly developed by Intel and Microsoft, CET is designed to thwart a technique known as return-oriented programming, which hackers use to bypass anti-exploit measures software developers introduced about a decade ago. While Intel first published its implementation of CET in 2016, the company on Monday is saying that its Tiger Lake CPU microarchitecture will be the first to include it.
ROP, as return-oriented programming is usually called, was software exploiters’ response to protections such as Executable Space Protection and address space layout randomization, which made their way into Windows, macOS, and Linux a little less than two decades ago. These defenses were designed to significantly lessen the damage software exploits could inflict by introducing changes to system memory that prevented the execution of malicious code. Even when an exploit successfully targeted a buffer overflow or other vulnerability, the result was only a system or application crash, rather than a fatal system compromise.
ROP allowed attackers to regain the high ground. Rather than using malicious code written by the attacker, ROP attacks repurpose functions that benign applications or OS routines have already placed into a region of memory known as the stack. The “return” in ROP refers to the use of the RET instruction that’s central to reordering the code flow.
Alex Ionescu, a veteran Windows security expert and VP of engineering at security firm CrowdStrike, likes to say that if a benign program is like a building made of Lego bricks that were built in a specific sequence, ROP uses the same Lego pieces but in a different order. In so doing, ROP converts the building into a spaceship. The technique is able to bypass the anti-malware defenses because it uses memory-resident code that’s already permitted to be executed.
CET introduces changes in the CPU that create a new stack called the control stack. This stack can’t be modified by attackers and doesn’t store any data. It stores the return addresses of the Lego bricks that are already in the stack. Because of this, even if an attacker has corrupted a return address in the data stack, the control stack retains the correct return address. The processor can detect the mismatch and halt execution.
“Because there is no effective software mitigation against ROP, CET will be very effective at detecting and stopping this class of vulnerability,” Ionescu told me. “Previously, operating systems and security solutions had to guess or infer that ROP had happened, or perform forensic analysis, or detect the second stage payloads/effect of the exploit.”
Not that CET is limited to defenses against ROP. CET provides a host of additional protections, some of which thwart exploitation techniques known as jump-oriented programming and call-oriented programming, to name just two. ROP, however, is among the most interesting aspects of CET.
Those who do not remember the past
Intel has built other security functions into its CPUs with less-than-stellar results. One is Intel’s SGX, short for Software Guard Extensions, which is supposed to carve out impenetrable chunks of protected memory for security-sensitive functions such as the creation of cryptographic keys. Another security add-on from Intel is known as the Converged Security and Management Engine, or simply the Management Engine. It’s a subsystem inside Intel CPUs and chipsets that implements a host of sensitive functions, among them the firmware-based Trusted Platform Module used for silicon-based encryption, authentication of UEFI BIOS firmware, and Microsoft’s System Guard and BitLocker.
A steady stream of security flaws discovered in both CPU-resident features, however, has made them vulnerable to a variety of attacks over the years. The most recent SGX vulnerabilities were disclosed just last week.
It’s tempting to think that CET will be similarly easy to defeat, or worse, will expose users to hacks that wouldn’t be possible if the protection hadn’t been added. But Joseph Fitzpatrick, a hardware hacker and a researcher at SecuringHardware.com, says he’s optimistic CET will perform better. He explained:
One distinct difference that makes me less skeptical of this type of feature versus something like SGX or ME is that both of those are “adding on” security features, as opposed to hardening existing features. ME basically added a management layer outside the operating system. SGX adds operating modes that theoretically shouldn’t be able to be manipulated by a malicious or compromised operating system. CET merely adds mechanisms to prevent normal operation—returning to addresses off the stack and jumping in and out of the wrong places in code—from completing successfully. Failure of CET to do its job only allows normal operation. It doesn’t grant the attacker access to more capabilities.
Once CET-capable CPUs are available, the protection will work only when the processor is running an operating system with the necessary support. Windows 10 Version 2004 released last month provides that support. Intel still isn’t saying when Tiger Lake CPUs will be released. While the protection could give defenders an important new tool, Ionescu and fellow researcher Yarden Shafir have already devised bypasses for it. Expect them to end up in real-world attacks within the decade.
We know Samsung has more foldable form factors planned after the Galaxy Fold, and it looks like one of them has popped up on Chinese social media. The pictures show a Samsung phone about the same size and shape as a Galaxy S10, but it folds in half like a flip phone. This would be Samsung’s answer to the 2020 Moto Razr.
We don’t actually see a Samsung logo in the pictures, but the device is using Samsung’s skin of Android with Samsung icons, Samsung Pay, Samsung’s navigation buttons, and a hole-punch camera design that’s centered at the top of the display, just like Samsung’s latest devices.
The inside bezel seems like it’s getting the same treatment as the Galaxy Fold. While normal slab smartphones are entirely covered by glass and the bezel is just the edges of the display, on the Fold (and apparently on this device), the bezel is a physical piece of plastic that sits on top of the display. One picture shows that the bezel sticks up above the display just a little bit, just like the Fold, allowing you to snap the phone closed without having the display touch itself. The physical plastic bezel also serves to cover the perimeter of the display, blocking dust, fingers, and anything else from getting behind the delicate, flexible display. On the Galaxy Fold, with its giant 7.3-inch display, the bezels don’t look too out of place. This normal-sized smartphone gets the same size bezels, though, and on this smaller device, they are a lot bigger than the usual smartphone bezels.
While most 2019 smartphones came with an in-screen fingerprint reader, the Galaxy Fold didn’t. Instead, Samsung opted for a side-mounted fingerprint reader. We don’t get a clear look at the right side of the device, but in the picture that shows the front display, you can make out what looks like a volume rocker and power button. The power button doesn’t stick out as much as the volume rocker, though. This is consistent with what another side-mounted fingerprint reader would look like—they are wide, flat, touch-sensitive areas that don’t stick out as much as a normal button. Of course, this is pre-release software, but we don’t see an in-screen fingerprint reader icon on the lock screen, which is another piece of evidence backing up the side-mounted fingerprint reader option. There are also probably a number of conflicts between in-screen fingerprint reader technology and the flexible, bending, moving display.
The phone folds up into a cute little square, and we can see a display of some kind on the cover, along with two cameras and what would be the main, “rear” cameras when the phone is open. We can’t be totally sure how big the cover display is, but it looks small. In the picture, it shows only the time, date, and battery level, and there’s enough light bleed that you can sort of make out the shape of the display. The glow around the letters looks to be only as big as the camera array.
The 2020 Moto Razr wowed us with its fancy hinge design that allowed the phone to fold but didn’t crease the display. We don’t get a clear picture of the display mechanism here, but there are hints that it is similar to the Galaxy Fold. The closed picture shows the spine of the hinge peeking out of the top of the device, which is exactly what the Galaxy Fold hinge looks like. We can’t see a crease in any of the pictures, but the pictures are pretty blurry, and it’s possible that the phone just hasn’t been closed and creased at the time of these pictures.
Lastly, we have the bottom, where we can see a bottom-firing speaker, a USB-C port, and no headphone jack. The Galaxy Fold was announced alongside the Galaxy S10, so maybe this foldable will get a similar announcement next to the Galaxy S11, which should be sometime in February.
Last week, we covered the launch of Slack Engineering’s open source mesh VPN system, Nebula. Today, we’re going to dive a little deeper into how you can set up your own Nebula private mesh network—along with a little more detail about why you might (or might not) want to.
VPN mesh versus traditional VPNs
The biggest selling point of Nebula is that it’s not “just” a VPN, it’s a distributed VPN mesh. A conventional VPN is much simpler than a mesh and uses a simple star topology: all clients connect to a server, and any additional routing is done manually on top of that. All VPN traffic has to flow through that central server, whether it makes sense in the grander scheme of things or not.
In sharp contrast, a mesh network understands the layout of all its member nodes and routes packets between them intelligently. If node A is right next to node Z, the mesh won’t arbitrarily route all of its traffic through node M in the middle—it’ll just send packets from A to Z directly, without middlemen or unnecessary overhead. We can examine the differences with a network flow diagram demonstrating patterns in a small virtual private network.
All VPNs work in part by exploiting the bi-directional nature of network tunnels. Once a tunnel has been established—even through Network Address Translation (NAT)—it’s bidirectional, regardless of which side initially reached out. This is true for both mesh and conventional VPNs—if two machines on different networks punch tunnels outbound to a cloud server, the cloud server can then tie those two tunnels together, providing a link with two hops. As long as you’ve got that one public IP answering to VPN connection requests, you can get files from one network to another—even if both endpoints are behind NAT with no port forwarding configured.
Where Nebula becomes more efficient is when two Nebula-connected machines are closer to each other than they are to the central cloud server. When a Nebula node wants to connect to another Nebula node, it’ll query a central server—what Nebula calls a lighthouse—to ask where that node can be found. Once each node has gotten the other’s location from the lighthouse, the two nodes can work out between themselves what the best route to one another might be. Typically, they’ll be able to communicate with one another directly rather than going through the lighthouse—even if they’re behind NAT on two different networks, neither of which has port forwarding enabled.
By contrast, connections between any two PCs on a traditional VPN must pass through its central server—eating into that server’s monthly bandwidth allotment and potentially degrading both throughput and latency from peer to peer.
Direct connection through UDP skullduggery
Nebula can—in most cases—establish a tunnel directly between two different NATted networks, without the need to configure port forwarding on either side. This is a little brain-breaking—normally, you wouldn’t expect two machines behind NAT to be able to contact each other without an intermediary. But Nebula is a UDP-only protocol, and it’s willing to cheat to achieve its goals.
If both machines reach the lighthouse, the lighthouse knows the source UDP port for each side’s outbound connection. The lighthouse can then inform one node of the other’s source UDP port, and vice versa. By itself, this isn’t enough to make it back through the NAT pinhole—but if each side targets the other’s NAT pinhole and spoofs the lighthouse’s public IP address as being the source, their packets will make it through.
UDP is a stateless connection, and very few networks bother to check for and enforce boundary validation on UDP packets—so this source-address spoofing works, more often than not. However, some more advanced firewalls may check the headers on outbound packets and drop them if they have impossible source addresses.
If only one side has a boundary-validating firewall that drops spoofed outbound packets, you’re fine. But if both ends have boundary validation available, configured, and enabled, Nebula will either fail or be forced to fall back to routing through the lighthouse.
We specifically tested this and can confirm that a direct tunnel from one LAN to another across the Internet worked, with no port forwarding and no traffic routed through the lighthouse. We tested with one node behind an Ubuntu homebrew router, another behind a Netgear Nighthawk on the other side of town, and a lighthouse running on a Linode instance. Running iftop on the lighthouse showed no perceptible traffic, even though a 20Mbps iperf3 stream was cheerfully running between the two networks. So right now, in most cases, direct point-to-point connections using forged source IP addresses should work.
Setting Nebula up
To set up a Nebula mesh, you’ll need at least two nodes, one of which should be a lighthouse. Lighthouse nodes must have a public IP address—preferably, a static one. If you use a lighthouse behind a dynamic IP address, you’ll likely end up with some unavoidable frustration if and when that dynamic address updates.
The best lighthouse option is a cheap VM at the cloud provider of your choice. The $5/mo offerings at Linode or Digital Ocean are more than enough to handle the traffic and CPU levels you should expect, and it’s quick and easy to open an account and get one set up. We recommend the latest Ubuntu LTS release for your new lighthouse’s operating system; at press time that’s 18.04.
Nebula doesn’t actually have an installer; it’s just two bare command line tools in a tarball, regardless of your operating system. For that reason, we’re not going to give operating-system-specific instructions here: the commands and arguments are the same on Linux, macOS, or Windows. Just download the appropriate tarball from the Nebula release page, open it up (Windows users will need 7zip for this), and dump the commands inside wherever you’d like them to be.
On Linux or macOS systems, we recommend creating an /opt/nebula folder for your Nebula commands, keys, and configs—if you don’t have an /opt yet, that’s okay, just create it, too. On Windows, C:\Program Files\Nebula is probably a more sensible location.
Certificate Authority configuration and key generation
The first thing you’ll need to do is create a Certificate Authority using the nebula-cert program. Nebula, thankfully, makes this a mind-bogglingly simple process:
root@lighthouse:/opt/nebula# ./nebula-cert ca -name "My Shiny Nebula Mesh Network"
What you’ve actually done is create a certificate and key for the entire network. Using that key, you can sign keys for each node itself. Unlike the CA certificate, node certificates need to have the Nebula IP address for each node baked into them when they’re created. So stop for a minute and think about what subnet you’d like to use for your Nebula mesh. It should be a private subnet—so it doesn’t conflict with any Internet resources you might need to use—and it should be an oddball one so that it won’t conflict with any LANs you happen to be on.
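If you’d rather not pick that oddball third octet by hand, a throwaway one-liner can do it for you. This is purely illustrative—any private subnet you’re unlikely to encounter on someone else’s LAN works just as well:

```shell
# Pick a pseudo-random /24 under 192.168.0.0/16 for the Nebula mesh.
# We skip 0 and 1 (common router defaults) and the top of the range.
octet=$(( (RANDOM % 252) + 2 ))    # yields a value from 2 to 253
echo "Suggested Nebula subnet: 192.168.${octet}.0/24"
```

Whatever you choose, write it down—every node certificate you sign will bake in an address from this subnet.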
Nice, round numbers like 192.168.0.x, 192.168.1.x, 192.168.254.x, and 10.0.0.x should be right out, as the odds are extremely good you’ll stay at a hotel, friend’s house, etc., that uses one of those subnets. We went with 192.168.98.x—but feel free to get more random than that. Your lighthouse will occupy .1 on whatever subnet you choose, and you will allocate new addresses for nodes as you create their keys. Let’s go ahead and set up keys for our lighthouse and nodes now:
root@lighthouse:/opt/nebula# ./nebula-cert sign -name "lighthouse" -ip "192.168.98.1/24"
root@lighthouse:/opt/nebula# ./nebula-cert sign -name "banshee" -ip "192.168.98.2/24"
root@lighthouse:/opt/nebula# ./nebula-cert sign -name "locutus" -ip "192.168.98.3/24"
Now that you’ve generated all your keys, consider getting them the heck out of your lighthouse, for security. You need the ca.key file only when actually signing new keys, not to run Nebula itself. Ideally, you should move ca.key out of your working directory entirely to a safe place—maybe even a safe place that isn’t connected to Nebula at all—and only restore it temporarily if and as you need it. Also note that the lighthouse itself doesn’t need to be the machine that runs nebula-cert—if you’re feeling paranoid, it’s even better practice to do CA stuff from a completely separate box and just copy the keys and certs out as you create them.
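As a concrete sketch of getting ca.key out of harm’s way: the paths below are our own convention, not anything Nebula requires, and we create a placeholder file first so the commands can be tried safely anywhere (on a real lighthouse, you’d cd to /opt/nebula and operate on the actual key instead):

```shell
# Scratch directory with a stand-in for the real ca.key, so this
# sketch is safe to run on any machine.
mkdir -p /tmp/nebula-demo && cd /tmp/nebula-demo
echo "placeholder key material" > ca.key

# Move the CA key out of the Nebula working directory entirely...
mkdir -p "$HOME/nebula-ca-offline"
mv ca.key "$HOME/nebula-ca-offline/"

# ...and make sure only its owner can read it.
chmod 600 "$HOME/nebula-ca-offline/ca.key"
```

Better still, copy that directory to offline media and delete it from the lighthouse altogether, restoring it only when you need to sign a new node.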
Each Nebula node does need a copy of ca.crt, the CA certificate. It also needs its own .key and .crt, matching the name you gave it above. You don’t need any other node’s key or certificate, though—the nodes can exchange them dynamically as needed—and for security best practice, you really shouldn’t keep all the .key and .crt files in one place. (If you lose one, you can always just generate another that uses the same name and Nebula IP address from your CA later.)
Configuring Nebula with config.yml
Nebula’s Github repo offers a sample config.yml with pretty much every option under the sun and lots of comments wrapped around them, and we absolutely recommend anyone interested poke through it to see all the things that can be done. However, if you just want to get things moving, it may be easier to start with a drastically simplified config that has nothing but what you need.
Lines that begin with a hashtag are commented out and not interpreted.
#
# This is Ars Technica's sample Nebula config file.
#
pki:
  # every node needs a copy of the CA certificate,
  # and its own certificate and key, ONLY.
  #
  ca: /opt/nebula/ca.crt
  cert: /opt/nebula/lighthouse.crt
  key: /opt/nebula/lighthouse.key

static_host_map:
  # how to find one or more lighthouse nodes
  # you do NOT need every node to be listed here!
  #
  # format "Nebula IP": ["public IP or hostname:port"]
  #
  "192.168.98.1": ["nebula.arstechnica.com:4242"]

lighthouse:
  interval: 60

  # if you're a lighthouse, say you're a lighthouse
  #
  am_lighthouse: true

  hosts:
    # If you're a lighthouse, this section should be EMPTY
    # or commented out. If you're NOT a lighthouse, list
    # lighthouse nodes here, one per line, in the following
    # format:
    #
    # - "192.168.98.1"

listen:
  # 0.0.0.0 means "all interfaces," which is probably what you want
  #
  host: 0.0.0.0
  port: 4242

# "punchy" basically means "send frequent keepalive packets"
# so that your router won't expire and close your NAT tunnels.
#
punchy: true

# "punch_back" allows the other node to try punching out to you,
# if you're having trouble punching out to it. Useful for stubborn
# networks with symmetric NAT, etc.
#
punch_back: true

tun:
  # sensible defaults. don't monkey with these unless
  # you're CERTAIN you know what you're doing.
  #
  dev: nebula1
  drop_local_broadcast: false
  drop_multicast: false
  tx_queue: 500
  mtu: 1300
  routes:

logging:
  level: info
  format: text

# you NEED this firewall section.
#
# Nebula has its own firewall in addition to anything
# your system has in place, and it's all default deny.
#
# So if you don't specify some rules here, you'll drop
# all traffic, and curse and wonder why you can't ping
# one node from another.
#
firewall:
  conntrack:
    tcp_timeout: 120h
    udp_timeout: 3m
    default_timeout: 10m
    max_connections: 100000

  # since everything is default deny, all rules you
  # actually SPECIFY here are allow rules.
  #
  outbound:
    - port: any
      proto: any
      host: any

  inbound:
    - port: any
      proto: any
      host: any
Warning: our CMS is mangling some of the whitespace in this code, so don’t try to copy and paste it directly. Instead, get working, guaranteed-whitespace-proper copies from Github: config.lighthouse.yaml and config.node.yaml.
There isn’t much difference between the lighthouse and normal node configs. If the node is not to be a lighthouse, just set am_lighthouse: false, and uncomment (remove the leading hashtag from) the line # - "192.168.98.1", which points the node to the lighthouse it should report to.
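Put together, the node-side changes look like this (using our example addresses; your lighthouse’s Nebula IP will differ, and each node’s pki: section points at its own cert and key):

```yaml
# Node (non-lighthouse) overrides -- everything else matches the
# lighthouse config shown above.
lighthouse:
  interval: 60
  am_lighthouse: false
  hosts:
    - "192.168.98.1"
```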
Note that the lighthouse: hosts list uses the Nebula IP of the lighthouse node, not its real-world public IP! The only place real-world IP addresses should show up is in the static_host_map section.
Starting nebula on each node
I hope you Windows and Mac types weren’t expecting some sort of GUI—or an applet in the dock or system tray, or a preconfigured service or daemon—because you’re not getting one. Grab a terminal—a command prompt run as Administrator, for you Windows folks—and run nebula against its config file. Minimize the terminal/command prompt window after you run it.
root@lighthouse:/opt/nebula# ./nebula -config ./config.yml
That’s all you get. If you left the logging set at info the way we have it in our sample config files, you’ll see a bit of informational stuff scroll up as your nodes come online and begin figuring out how to contact one another.
If you’re a Linux or Mac user, you might also consider using the screen utility to hide nebula away from your normal console or terminal (and keep it from closing when that session terminates).
Figuring out how to get Nebula to start automatically is, unfortunately, an exercise we’ll need to leave for the user—it’s different from distro to distro on Linux (mostly depending on whether you’re using systemd or init). Advanced Windows users should look into running Nebula as a custom service, and Mac folks should call Senior Technology Editor Lee Hutchinson on his home phone and ask him for help directly.
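That said, on a systemd-based Linux distro, a minimal unit file is one workable approach. The following is our own sketch, not an official Nebula unit, and it assumes the /opt/nebula layout used throughout this article—save it as /etc/systemd/system/nebula.service, then run systemctl enable --now nebula:

```ini
[Unit]
Description=Nebula mesh VPN node
After=network-online.target
Wants=network-online.target

[Service]
# Paths assume the /opt/nebula convention used in this article.
ExecStart=/opt/nebula/nebula -config /opt/nebula/config.yml
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
```

Restart=always also papers over transient failures, like the network not being up yet when the box boots.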
Nebula is a pretty cool project. We love that it’s open source, that it uses the Noise protocol framework for crypto, that it’s available on all three major desktop platforms, and that it’s easy…ish to set up and use.
With that said, Nebula in its current form is really not for people afraid to get their hands dirty on the command line—not just once, but always. We have a feeling that some real UI and service scaffolding will show up eventually—but until it does, as compelling as it is, it’s not ready for “normal users.”
Right now, Nebula’s probably best used by sysadmins and hobbyists who are determined to take advantage of its dynamic routing and don’t mind the extremely visible nuts and bolts and lack of anything even faintly like a friendly interface. We definitely don’t recommend it in its current form to “normal users”—whether that means yourself or somebody you need to support.
Unless you really, really need that dynamic point-to-point routing, a more conventional VPN like WireGuard is almost certainly a better bet for the moment.
- Free and open source software, released under the MIT license
- Cross platform—looks and operates exactly the same on Windows, Mac, and Linux
- Reasonably fast—our Ryzen 7 3700X managed 1.7Gbps from itself to one of its own VMs across Nebula
- Point-to-point tunneling means near-zero bandwidth needed at lighthouses
- Dynamic routing opens interesting possibilities for portable systems
- Simple, accessible logging makes Nebula troubleshooting a bit easier than WireGuard troubleshooting
- No Android or iOS support yet
- No service/daemon wrapper included
- No UI, launcher, applet, etc
- Did we mention the complete lack of scaffolding? Please don’t ask non-technical people to use this yet
- The Windows port requires the OpenVPN project’s tap-windows6 driver—which is, unfortunately, notoriously buggy and cantankerous
- “Reasonably fast” is relative—most PCs should saturate gigabit links easily enough, but WireGuard is at least twice as fast as Nebula on Linux
This week, game developer and publisher Blizzard Entertainment emailed customers who had pre-ordered its upcoming 4K remaster of strategy classic Warcraft III to announce that the game will become playable at 3am PST on January 28, 2020.
The game was previously slated for launch before the end of 2019, but players had begun to suspect some kind of delay when Blizzard didn’t provide a firm release date for the game at its otherwise packed BlizzCon conference in early November. The company explained the short delay in a blog post, writing:
Though we’ve been working hard to get Reforged in your hands before the end of the year, as we started approaching the finish line, we felt we’d need a little extra development time for finishing touches. As always, our goal is to honor the high standards you hold us to.
As recent controversies over crunch in game development have made clear, making triple-A video games is complex and fraught with unexpected roadblocks and ever-shifting scope. Delays like this are common. Some excited players may be frustrated that they have to wait a little longer, but others will be happy to see a focus on quality or saner working conditions for developers, whichever (if either) may be the case here.
Warcraft III was arguably one of the most influential games of the past 20 years because it birthed both World of Warcraft (which defined more than a decade of massively multiplayer online games) and, via community-made modifications, the MOBA genre (League of Legends, Dota 2, Blizzard’s own Heroes of the Storm), which in tandem with battle royale games, drove the esports and livestreaming revolution of the gaming industry.
It’s also a very well-polished real-time strategy game that includes many of the lore underpinnings for the popular Warcraft franchise and universe. The remaster overhauls the graphics, adds a new online matchmaking infrastructure, makes game balance changes and other modernizations, and more.
Warcraft III: Reforged costs $29.99 for its standard edition, or $39.99 for the “Spoils of War” edition that includes additional hero skins for the game, as well as other Warcraft III-related cosmetics in other Blizzard games like Overwatch and World of Warcraft. It is available exclusively as a digital download via Blizzard’s online software storefront.
Apple, Google, Amazon, and the Zigbee Alliance have all teamed up to make a new smart home standard. The new working group went live today under the name of “Project Connected Home over IP” with announcement blog posts from Google, Apple, Zigbee, and a new website, connectedhomeip.com. The name doesn’t sound too catchy until you realize “Connected Home over IP” abbreviates to “CHIP” which, across all these blog posts, is quietly used only a single time in the official FAQ.
According to the new website, “The goal of the Connected Home over IP project is to simplify development for manufacturers and increase compatibility for consumers.” But thanks to XKCD, we all know that one new standard to “unite them all” often just results in making one additional standard available, but assuming the companies involved actually support their own standard, this could—maybe—make things easier for consumers.
Currently, Apple’s smart home ecosystem is HomeKit, and it works over IP (usually Wi-Fi) and Bluetooth LE. Amazon has the “Works with Alexa” program, and while Echos can handle IP connections, the smart-home-focused models also have Zigbee (an IEEE 802.15.4-based low-power, low-data-rate mesh network) built into them. Google being Google means it has several overlapping and competing smart home ecosystems at various stages of adoption. The company is working on shutting down the “Works with Nest” ecosystem in favor of the “Works with Google Assistant” ecosystem, which is IP-based. Google’s Nest division has also cooked up the “Thread” network protocol, which, just like Zigbee, is an IEEE 802.15.4-based low-power, low-data-rate mesh network. While Thread can be radio compatible with Zigbee’s ~900MHz or 2.45GHz signal, Thread adds the ability to be wrapped in an IP packet and travel over Wi-Fi or the Internet. Nest also has the “Weave” communication standard, which defines how to send a message like “turn on the light” over the Thread or Wi-Fi network.
In addition to Apple, Google, Amazon, and Zigbee, the site says that “Zigbee Alliance board-member companies IKEA, Legrand, NXP Semiconductors, Resideo, Samsung SmartThings, Schneider Electric, Signify (formerly Philips Lighting), Silicon Labs, Somfy, and Wulian are also on board to join the Working Group and contribute to the project.”
It’s impossible to know how truly committed each company is to Project CHIP at this stage, but the promised goal of building devices that are “compatible with smart home and voice services such as Amazon’s Alexa, Apple’s Siri, Google’s Assistant, and others” sounds great. Compatibility with the major voice-command systems is a primary concern for any new smart home product, and being able to tackle the big three with a single standard sounds a lot easier than implementing three separate APIs.
Project CHIP is open source. The site says “The reference implementation of the new standard, and its supporting tooling, will be developed and maintained on the GitHub open source platform for all aspects of the specification. Please stay tuned for more information.” The website also says the new standard will be “royalty-free,” which is not currently the case for Zigbee or Apple’s HomeKit.
iFixit, a group that sells electronics repair tools and rates devices for repairability, published a detailed teardown of Apple’s new Mac Pro. Despite a couple of minor complaints, the folks at iFixit gave the device high marks. In an unusual tune for Apple products, they called the Mac Pro “beautiful, amazingly well put together, and a masterclass in repairability.”
Whereas a modern Mac usually takes specialized tools and a lot of careful effort to open up, iFixit was able to get inside the Mac Pro by simply using the twist handle at the top—no proprietary screws or adhesive were in place. Additionally, removing the case hard-cuts power to the machine for safe operation.
Both the CPU and RAM, as well as PCIe cards, can be accessed and replaced as easily as on most other desktop PCs. The SSD, however, is a different story: it’s modular, but it’s “bound to the T2 chip, meaning user-replacements are a no-go.” You can add more storage in other places, but you can’t really replace the built-in drive.
No special tools are required for replacing the RAM. Access to the CPU is similar to other desktop tower PCs; you’ll find it in a standard socket after you unscrew and remove its heatsink, and you can remove it and replace it if needed.
As a side note, iFixit tried grating cheese on the surface in reference to many jokes about the Mac Pro’s appearance, but unsurprisingly found that it could not efficiently be used for that purpose.
iFixit complimented the build quality overall and noted a handful of conveniences in the design that make it clear the machine was designed to be opened up and serviced. Whereas they often give Macs scores like 1 out of 10 for repairability, they gave the Mac Pro 9 out of 10, knocking it only for the lack of a replaceable SSD and the difficulty and expense of finding new parts.
Apple’s Mac Pro is built specifically for use in professional environments like video-editing bays, 3D-modeling studios, and the like. It’s priced to be competitive with high-end workstations from specialized companies like Boxx or from dedicated arms of bigger PC OEMs. It’s also largely manufactured in the United States instead of in China or India, further raising the price.
The narrow targeting of the Mac Pro makes sense for Apple’s current strategy with the Mac, but there is still a dedicated niche of Mac users who want this level of serviceability in a more consumer-oriented (and consumer-priced) desktop. Unfortunately, that machine still doesn’t exist, but any user-serviceable Mac tower in 2019 is nevertheless interesting.
Listing image by iFixit
Update: Google has released an official statement:
The M79 update to Chrome and WebView on Android devices was suspended after detecting an issue in WebView where some users’ app data was not visible within those apps. This app data was not lost and will be made visible in apps when we deliver an update this week. We apologize for any inconvenience.
There’s also a blog post, which says the fix is now rolling out to Android on the stable channel. The updated version is Chrome 79.0.3945.93.
Original Story: Google’s latest Chrome update is causing a headache for users and developers of some Android apps. Chrome 79, which is rolling out across desktop and mobile OSes, has been causing data loss for some seemingly unrelated Android apps. Thanks to this bug, on Android specifically, updating your browser can wipe out the data in, say, your finance app.
The connection between Chrome and Android app data might not be obvious, but Chrome on Android isn’t always just the browser that starts up when you press the Chrome icon. On some versions of Android, the Chrome app also provides the built-in HTML renderer for the entire OS. Apps can call on the system renderer to display in-app Web content (the API is called “WebView”), and, in this case, an instance of Chrome seamlessly starts up and draws HTML content inside your app. Whether you want to call them “HTML apps” or “Web wrappers,” it’s not uncommon for apps to basically be only a WebView. These apps just turn on and load a webpage, and the Web wrapper provides native Android features like an app drawer icon, a full-screen interface, and Play Store distribution. These apps look and work mostly like native apps from a user’s perspective, and it’s hard for even technical users to tell the difference. Cross-platform development is a lot easier with HTML, though, since the same code works everywhere.
The data loss happened because Google changed where Chrome 79 stores profile data without entirely migrating the old data. WebView-based apps can store data through APIs like localStorage, Web SQL, and IndexedDB, and apps that use these APIs will suddenly stop finding their data after the user upgrades to Chrome 79. The data isn’t deleted; it’s just misplaced. But to a user, there’s no difference. Their favorite HTML-based app will reset itself to a freshly installed state, and their data won’t be visible. By default, app updates on Android happen automatically and without user interaction, so for most users, their app will just wipe itself out and they won’t know why.
If you want to get technical (and, of course, we do), only Android versions 7, 8, and 9 use the “Chrome” app for the system HTML renderer. Google has long supported the reasonable stance of updating the Android system HTML renderer through the Play Store, but it has flip-flopped on whether using the user-facing Chrome app for system HTML duties is a good idea. Android 5 and 6 used a separate package called “Android System WebView” for Web content, which is very close to Chrome, as it’s based on the Chromium codebase. After three versions of using Chrome, Google went back to the purpose-built Android System WebView app for the latest version, Android 10, saying that the Android System WebView should have “fewer weird special cases and bugs.” In this case, it wouldn’t have helped, since both Chrome 79 and Android System WebView 79 have this data-loss bug. The point is that the offending system Web renderer app will change depending on your Android version.
For a couple of years now, Apple has been exploring subscriptions as a way to bolster revenue in the face of slowing iPhone growth. This year saw a turning point in that strategy, with Apple TV+, Apple News+, and Apple Arcade joining the company’s suite of subscription services that already included Apple Music, AppleCare, and iCloud.
But as this is a relatively new frontier for the company (at least in terms of emphasis), Apple is still testing the waters of different approaches. The latest of these is the introduction of a discounted annual subscription to Apple Arcade priced at $49.99, about a $10 savings compared to the $4.99 monthly subscription that was introduced in September.
Apple Arcade offers subscribers Netflix-style access to around a hundred games on iPhone, iPad, Mac, and Apple TV. While many games have flown under the radar or not made much public impact, a few such as Sayonara Wild Hearts, Grindstone, What the Golf?, and Where Cards Fall have received rave reviews from consumers and critics or found significant financial success through the service.
Arcade is the culmination of an effort Apple has made over the past couple of years to address the discoverability problem for games in the App Store for the company’s devices. The iPhone App Store has many gems, but they have historically been difficult to surface amidst a sea of poorly made titles and gambling-like games with exploitative mechanics and monetization schemes.
Apple first began emphasizing human curation in the App Store with iOS 12, but Apple Arcade arrived with iOS 13 in September to make it more attractive still for consumers to find and play premium-quality mobile games. It also followed an effort by Apple to evangelize developers on offering their own individual app subscriptions, of which Apple would get a cut.
Apple has also experimented with subscription bundling by giving students who subscribe to Apple Music access to Apple TV+ (reports indicate the company hopes to introduce an Amazon Prime-like bundle in the future, too) and by offering an indefinitely renewing monthly AppleCare+ subscription as an alternative to its previously (and still) offered two- and three-year AppleCare packages.
After shipping the big Android 10 update for international versions of the Galaxy S10 earlier this month, Samsung and carriers are now starting to roll out Android 10 to US and Canadian users. SamMobile reports that US carriers T-Mobile and Sprint have started shipping the update, and there are reports of unlocked Galaxy S10s getting the update, too.
Last year, US variants of the Galaxy S9 had to wait 40 days longer than the international versions for their major Android update, and unlocked versions had to wait 55 days. This year, Samsung has cut the delay down to 18 days.
One of the ways Samsung makes Android updates difficult on itself is that there are usually two versions of its major flagships. Galaxy S10s sold in most European and Asian countries are made with Samsung’s own Exynos SoC, while devices in the Americas and China get Qualcomm’s Snapdragon SoC. Snapdragon and Exynos Galaxy S10s might look the same on the outside, but on the inside, they are built around totally different chips, so Samsung has close to double the development and testing work needed to roll out Android 10 for a single device. In the US, Samsung also gives carriers a say in when an update rolls out, which is why we haven’t seen AT&T and Verizon units get updated yet.
No matter what timeframe you use, Samsung has made a big improvement this year compared to last year. In the US, it only took the company three-and-a-half months to ship Android 10 to the Galaxy S10, while last year, Samsung took six months to ship Android 9 to the Galaxy S9. Google has been easing the work needed to update Android with Project Treble, which makes the OS more modular, and we’ve seen across-the-board update improvements as a result.
Of course, we are grading Samsung on a curve here. Samsung devices still have the slowest updates of any smartphone company with a pulse. Apple and Google users get OS updates on day one, and Google says it will start shipping quarterly feature updates for its Pixel line. OnePlus took only 18 days to ship Android 10 to its flagship device, and its devices cost hundreds less than the equivalents from Samsung.
If you have Samsung’s other big flagship, the Galaxy Note 10, know that Android 10 is on the way for that device, too. SamMobile reports that German Galaxy Note 10 users have recently started to see the update, and just like with the Galaxy S10, we can expect the update to slowly trickle out across the world’s countries and carriers over the next month.
Update: According to numerous reports in the comments and elsewhere, Verizon has gotten the update too. Thanks, everyone.
There are more ultra-mobile professionals now than ever before, which is why OEMs are developing increasingly thin-and-light laptops to appeal to those users. No one wants to add heft to their bag, whether they’re heading off on a 10-hour flight or a 10-minute commute to work. But the most mobile among us will only go as thin and light as our performance needs allow—if a laptop isn’t powerful or efficient enough to help you get work done, its svelte characteristics won’t make up for that.
Enter the HP Elite Dragonfly two-in-one laptop, which is HP’s answer to this problem. It’s an ultra-slim laptop with a MIL-spec-tested design that weighs just 2.18 pounds, and it has the power and security features of one of HP’s Elite series laptops. HP is betting that professionals will choose the thinnest and lightest laptop that doesn’t compromise the performance or battery life they need to get things done regardless of location—and that they’ll pay top dollar for it. We spent a few days with the Elite Dragonfly convertible to see how well-designed it actually is and whether taking thin and light to the extreme sacrifices anything essential.
Look and feel
|Specs at a glance: HP Elite Dragonfly two-in-one laptop|Config 1|Config 2|Config 3|
|---|---|---|---|
|Screen|13.3-inch FHD (1920×1080) touchscreen|13.3-inch FHD (1920×1080) touchscreen|13.3-inch 4K (3840×2160) touchscreen|
|OS|Windows 10 Home|Windows 10 Home|Windows 10 Pro 64|
|CPU|Intel Core i7-8665U|Intel Core i5-8265U|Intel Core i7-8665U w/ vPro|
|Storage|512GB PCIe SSD + 32GB Optane Memory|256GB PCIe SSD|512GB PCIe SSD + 32GB Optane Memory|
|GPU|Intel UHD Graphics 620|Intel UHD Graphics 620|Intel UHD Graphics 620|
|Networking|Intel AX200 Wi-Fi 6 (2×2), Bluetooth 5.0|Intel AX200 Wi-Fi 6 (2×2), Bluetooth 5.0|Intel AX200 Wi-Fi 6 (2×2), Bluetooth 5.0|
|Ports|2x Thunderbolt 3, 1x USB-A, 1x HDMI, 1x nano SIM, 1x lock slot, 1x 3.5mm headphone jack|||
|Size|11.98×7.78×0.63 inches (304×198×16mm)|||
|Weight|2.5 pounds (40 ounces)|2.18 pounds (34.9 ounces)|2.5 pounds (40 ounces)|
|Battery|56.2Wh|38Wh|56.2Wh|
|Extras|Fingerprint reader, IR camera, optional vPro, optional LTE, TPM 2.0, absolute persistence module, power-on authentication, HP DriveLock and Automatic DriveLock, HP Sure Click, HP Secure Erase, HP Sure Start, HP Sure Run, HP Sure Recovery, HP Sure Sense, HP BIOSphere|||
|Price|$2,169|$1,549 (available at this price point soon)|$2,369|
HP Elite Dragonfly laptop
Design and durability
Being part of the Elite family, the Elite Dragonfly laptop had to adhere to certain durability and performance standards that users are accustomed to from that line. We’ll get to the performance chops in a bit, but from a design perspective, the Elite Dragonfly surprised me.
Typically, laptops that pride themselves on being thin and light above all else tend to feel only a bit more durable than a precious heirloom that your kids aren’t allowed to touch. The Elite Dragonfly feels sturdier than most laptops that I’ve used that share the same size and weight class. Measuring 16mm thick and weighing between 2.18 and 2.5 pounds, it’s an easy backpack companion, and I was happy to see that neither its chassis nor its lid flexed at all when put under pressure.
That may be due to the extra layers of magnesium used in the laptop’s keyboard area, cover, and bottom portions that reinforce its design, along with other internal structures that keep it steady and durable. The Dragonfly passed 19 MIL-STD-810G tests, and HP placed particular emphasis on testing the machine for drops, shocks, and vibrations.
Many flagship laptops aren’t MIL-spec tested, or they only pass a limited number of tests, but this type of testing is standard for HP’s Elite line. While these are not “rugged” laptops by any means, this level of MIL-spec durability means you can accidentally drop the Dragonfly or leave it in a precarious spot, and it will most likely come away unharmed.
The Elite Dragonfly gets its name from the “dragonfly blue” finish that coats its chassis. For now, the machine is only available in this color, which will inevitably make some users scowl. It’s most similar to navy blue but with a pleasant brightness that subtly comes through whenever light hits it. Whether the blue finish speaks to you or not, it’s a welcome change from the sea of silver, barrage of black, and plethora of pink consumer electronics that dominate the market now. The entire chassis is made from magnesium, but HP also integrated post-consumer recycled plastic (including some ocean-bound plastic material) into the speaker box. That’s not something that users will be able to see with their eyes, but it’s an effort on HP’s part to be a bit more eco-friendly.
A bug in iOS 13.3 allows children to easily circumvent the restrictions their parents or guardians set with the Communications Limit feature in Screen Time. Apple has said it plans to fix the problem in a future software update.
The iOS 13.3 update released earlier this week added the ability for parents to whitelist the contacts their kids can communicate with. Kids need a parent to enter a passcode before they can talk to anyone not on the list, with an exception made for emergency services like 911. It was the flagship feature of the update.
Yesterday, CNBC published a report detailing a bug that allowed kids to easily circumvent the restrictions. It turns out that when contacts are not set to sync with iCloud by default, texts or calls from unknown numbers present children with the option to add the number as a new contact. Once that step has been taken, they can communicate freely with the contact.
Further, kids with access to an Apple Watch can ask Siri on the Watch to text or call a number on the paired iPhone, regardless of whether the number is whitelisted or not. CNBC notes that this does not work when Downtime, another parental control feature, is enabled, however.
Apple offered the following statement to CNBC as news of the issue spread:
This issue only occurs on devices set up with a non-standard configuration, and a workaround is available. We’re working on a complete fix and will release it in an upcoming software update.
Apple has faced a generally rocky launch with iOS 13 and its numerous subsequent smaller updates, such that the company has made plans to change how it tests and builds new software internally to avoid future problems. Our earlier report on that noted that sources close to Apple said the company had been happier with its software since the release of iOS 13.2. However, iOS 13.2 had a widespread memory issue affecting background apps, and iOS 13.3 now has this new Screen Time issue, meaning the two recent feature updates each came with a major bug.
Apple has released more bug fix updates since the iOS 13 launch than is usual after an annual update.
The Pixelbook Go is Google’s latest swing at a high-end, flagship laptop for Chrome OS. The device was announced and shipped in October, but only the mid- and low-end models. As first spotted by Chrome Unboxed, the laptop’s highest-end configuration is now available for $1,399.
The new $1,399 model upgrades the Pixelbook Go to a 13.3-inch, 3840×2160 (331ppi) touchscreen LCD, an Intel Core i7-8500Y, 256GB of eMMC storage, and a 56Wh battery. Like the $999 version, you get 16GB of RAM.
Those specs are super-overkill for running a Web browser, but remember, you can run Android and Linux apps on Chromebooks now, so maybe you’ll find a way to make use of them. If you’re really concerned about non-Web-browser tasks, though, for $1,399 you might just want to buy a regular Windows, Mac, or Linux laptop and not deal with the restrictions of Chrome OS. The Pixelbook Go has a nice keyboard and a grippy bottom design, but it mostly earned a firm rating of “average” in our review. There is little that makes it stand out from the competition.
For now, the 4K version of the Pixelbook Go is only available in Black from Google, Amazon, and Best Buy. Google lists the Pink version as “coming soon.”
In an earlier deep learning article, we talked about how inference workloads—the use of already-trained neural networks to analyze data—can run on fairly cheap hardware, but running the training workload that the neural network “learns” on is orders of magnitude more expensive.
In particular, the more potential inputs you have to an algorithm, the more out of control your scaling problem gets when analyzing its problem space. This is where MACH, a research project authored by Rice University’s Tharun Medini and Anshumali Shrivastava, comes in. MACH is an acronym for Merged Average Classifiers via Hashing, and according to lead researcher Shrivastava, “[its] training times are about 7-10 times faster, and… memory footprints are 2-4 times smaller” than those of previous large-scale deep learning techniques.
In describing the scale of extreme classification problems, Medini refers to online shopping search queries, noting that “there are easily more than 100 million products online.” This is, if anything, conservative—one data company claimed Amazon US alone sold 606 million separate products, with the entire company offering more than three billion products worldwide. Another company reckons the US product count at 353 million. Medini continues, “a neural network that takes search input and predicts from 100 million outputs, or products, will typically end up with about 2,000 parameters per product. So you multiply those, and the final layer of the neural network is 200 billion parameters … [and] I’m talking about a very, very dead simple neural network model.”
At this scale, a supercomputer would likely need terabytes of working memory just to store the model. The memory problem gets even worse when you bring GPUs into the picture. GPUs can process neural network workloads orders of magnitude faster than general-purpose CPUs can, but each GPU has a relatively small amount of RAM—even the most expensive Nvidia Tesla GPUs only have 32GB of RAM. Medini says, “training such a model is prohibitive due to massive inter-GPU communication.”
Instead of training on the entire 100 million outcomes—product purchases, in this example—MACH divides them into three “buckets,” each containing 33.3 million randomly selected outcomes. Now, MACH creates another “world,” and in that world, the 100 million outcomes are again randomly sorted into three buckets. Crucially, the random sorting is separate in World One and World Two—they each have the same 100 million outcomes, but their random distribution into buckets is different for each world.
With each world instantiated, a search is fed to both a “world one” classifier and a “world two” classifier, with only three possible outcomes apiece. “What is this person thinking about?” asks Shrivastava. “The most probable class is something that is common between these two buckets.”
At this point, there are nine possible outcomes—three buckets in World One times three buckets in World Two. But MACH only needed to create six classes—World One’s three buckets plus World Two’s three buckets—to model that nine-outcome search space. This advantage improves as more “worlds” are created; a three-world approach produces 27 outcomes from only nine created classes, a four-world setup gives 81 outcomes from 12 classes, and so forth. “I am paying a cost linearly, and I am getting an exponential improvement,” Shrivastava says.
Better yet, MACH lends itself better to distributed computing on smaller individual instances. The worlds “don’t even have to talk to one another,” Medini says. “In principle, you could train each [world] on a single GPU, which is something you could never do with a non-independent approach.” In the real world, the researchers applied MACH to a 49 million product Amazon training database, randomly sorting it into 10,000 buckets in each of 32 separate worlds. That reduced the required parameters in the model more than an order of magnitude—and according to Medini, training the model required both less time and less memory than some of the best reported training times on models with comparable parameters.
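The bucket-and-world bookkeeping described above can be sketched in a few lines of Python. This is a toy illustration of the merged-average idea with a generic hash function and made-up numbers, not the researchers' code:

```python
# Toy sketch of the MACH idea: hash a huge label space into a few buckets in
# several independent "worlds," train one small classifier per world, then
# score a label by averaging the probability of its bucket across worlds.
import hashlib

NUM_BUCKETS = 10
NUM_WORLDS = 2

def bucket_of(world: int, label: int) -> int:
    """Each world hashes labels into buckets independently of the others."""
    digest = hashlib.blake2b(f"{world}:{label}".encode(), digest_size=8).digest()
    return int.from_bytes(digest, "big") % NUM_BUCKETS

def merged_average_score(label, per_world_probs):
    """Average, across worlds, of the probability each small classifier
    assigned to the bucket this label hashes into."""
    return sum(
        probs[bucket_of(w, label)] for w, probs in enumerate(per_world_probs)
    ) / len(per_world_probs)

# Pretend each world's classifier puts 80% of its mass on the bucket that
# label 42 hashes into, as it would after training on queries for label 42.
true_label = 42
per_world_probs = []
for w in range(NUM_WORLDS):
    probs = [0.2 / (NUM_BUCKETS - 1)] * NUM_BUCKETS
    probs[bucket_of(w, true_label)] = 0.8
    per_world_probs.append(probs)

scores = {label: merged_average_score(label, per_world_probs) for label in range(100)}
print(scores[true_label])  # 0.8, the highest possible merged score here
```

The point of the sketch is the scaling: each world only ever trains a NUM_BUCKETS-way classifier, yet the merged scores discriminate over the full label space, and the worlds never need to communicate.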
Of course, this wouldn’t be an Ars article on deep learning if we didn’t close it out with a cynical reminder about unintended consequences. The unspoken reality is that the neural network isn’t actually learning to show shoppers what they asked for. Instead, it’s learning how to turn queries into purchases. The neural network doesn’t know or care what the human was actually searching for; it just has an idea what that human is most likely to buy—and without sufficient oversight, systems trained to increase outcome probabilities this way can end up suggesting baby products to women who’ve suffered miscarriages, or worse.
The last installment in our five-part holiday gift guide series this year is tailored for power users—those who know their way around technology and feel uneasy settling for gear that doesn’t provide high performance.
The nine gadgets we’ve rounded up below may be overkill for most of the people in your life, but they should satisfy those who consider themselves enthusiasts in some way. Per usual, we’ve curated these recommendations based on hands-on testing we’ve done over the course of 2019. If none of these items fit your shopping list’s needs, though, take a look at our previous gift guides for the home, the office, the road, and affordable gadgets for additional inspiration. For now, though, let’s indulge a little in the latest and greatest tech.
Note: Ars Technica may earn compensation for sales from links on this post through affiliate programs.
The Razer Viper is marketed as a competitive gaming mouse, and it works well for that purpose. But it’s excellent for everyday use as well. The main draw here is lightness: at just 69 grams, the Viper is a breeze to slide around. It has a flatter shape and might appeal more to “claw” grippers than its peers, but it’s contoured gently on the top and sides, with a slightly flared-out bottom that gives room for your palm to rest. Everyone has their preferences when it comes to mouse design, but something this light and uncomplicated should cause little fatigue over the course of the day.
Beyond that, the Viper’s RGB lighting is limited to subtle changes on the Razer logo, so it doesn’t come off as gaudy the way other gaming mice do. The main right and left buttons feel quick and crisp, due in part to an optical switch design that makes accidental double-clicks rare. The switches should also keep the Viper more durable over time. (For what it’s worth, we’ve tested the mouse for four months and have encountered no reliability issues thus far.) The scroll wheel is a bit on the slower side but still feels smooth. The optical sensor has up to a 16,000 DPI resolution, which is overkill, but it tracks smoothly and consistently across surfaces all the same. Razer’s companion software is far from required to get the Viper working, but it’s unobtrusive enough, and it can be used to fine-tune DPI presets and adjust more granular settings like lift sensitivity. The cable is exceedingly light and flexible. And the whole design is ambidextrous, so lefties aren’t left out.
There are things to nitpick about the Viper. The side buttons are consistent but sit fairly flush against the side of the mouse. The hard rubberized texture on those sides isn’t quite as grippy as it could be. And while we like the DPI adjustment button being on the bottom of the device, since it makes accidental presses less likely, others may prefer it being more readily accessible on top. It’s not the cheapest mouse, either. But as a gift, the Viper is highly comfortable and performant for power users of all kinds.
SanDisk Extreme Portable SSD
While there are plenty of storage solutions for your home or office data, SanDisk’s Extreme Portable SSD is a good option for data you need with you wherever you go. The surprisingly small, portable SSD is IP55-rated, so it will withstand water and dust, as well as shock and vibrations. It can even be dropped from up to two meters without suffering any damage.
That’s impressive for an SSD that can fit comfortably into the palm of your hand. It has a single USB-C port, but it comes with both USB-C and USB-A cables, so it can be connected to almost any PC. All of this combined makes it one of the easiest SSDs to travel with and one of the most convenient to use for most people.
Available in 250GB, 500GB, 1TB, and 2TB capacities, the SanDisk Extreme SSD is also one of the fastest portable storage solutions we’ve tested, matching Samsung’s T5 SSD in read and write speeds while adding a truly portable (and durable) design. Samsung’s T5 is more affordable, but SanDisk’s Extreme is the better choice for power users, thanks to that extra layer of protection along with the same ease of use as Samsung’s drive.
SanDisk Extreme Portable SSD (1TB)
Listing image by Jeff Dunn
Last month, the engineering department at Slack—an instant messaging platform commonly used for community and small business organization—released a new distributed VPN mesh tool called Nebula. Nebula is free and open source software, available under the MIT license.
It’s difficult to explain Nebula in a nutshell. Slack’s engineering team says it asked itself, “what is the easiest way to securely connect tens of thousands of computers, hosted at multiple cloud service providers in dozens of locations around the globe?” Nebula was the best answer it came up with. It’s a portable, scalable overlay networking tool that runs on most major platforms, including Linux, macOS, and Windows, with some mobile device support planned for the near future.
Nebula-transmitted data is fully encrypted using the Noise protocol framework, which is also used in modern, highly security-focused projects such as Signal and WireGuard. Unlike more traditional VPN technologies—including WireGuard—Nebula automatically and dynamically discovers available routes between nodes and sends traffic down the most efficient path between any two nodes rather than forcing everything through a central distribution point.
Getting started with Nebula isn’t too difficult, although the documentation is a little sparse. The GitHub repository for Nebula offers binaries for Windows, Linux, and macOS, along with a sample configuration file. The config file consists of pki, static_host_map, lighthouse, listen, tun, and firewall sections.
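A minimal configuration covering those sections looks something like the following sketch. The paths, hostnames, and addresses here are placeholders, and the sample file in the repository documents many more options:

```yaml
pki:
  ca: /etc/nebula/ca.crt        # certificate authority for the mesh
  cert: /etc/nebula/host.crt    # this node's signed certificate
  key: /etc/nebula/host.key

# Map Nebula-internal IPs to real-world addresses for publicly reachable nodes.
static_host_map:
  "192.168.100.1": ["lighthouse.example.com:4242"]

lighthouse:
  am_lighthouse: false          # set to true on the lighthouse node itself
  hosts:
    - "192.168.100.1"

listen:
  host: 0.0.0.0
  port: 4242                    # Nebula is UDP-only

tun:
  dev: nebula1                  # name of the virtual network interface

firewall:
  outbound:
    - port: any
      proto: any
      host: any
  inbound:
    - port: any
      proto: icmp
      host: any
```

Certificates come from Nebula's own bundled certificate authority tooling; every node needs the shared CA certificate plus its own signed cert and key before it can join the mesh.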
Nebula’s term for a publicly available node in the network is a “lighthouse.” Lighthouse nodes must be reachable over the underlying network without Nebula running, and they represent a point of entry for new nodes joining the network. But once a node has joined the network, it won’t need to route all its traffic through a lighthouse—nodes will automatically discover the most efficient path between themselves, and if that path doesn’t include a lighthouse node in the middle, that’s fine. Even if traffic needs to punch across NAT “the wrong way,” there’s no problem—because each node behind NAT will make and keep open tunnels to all the nodes it knows about.
We tested this by connecting four nodes in a small Nebula network: a lighthouse node hosted at Digital Ocean—which we creatively named lighthouse—and three member nodes in a small office. Two of our member nodes (nat0 and nat1) are on the main LAN in the office, and the third member node, doublenat, is on a separate subnet, connected behind node nat0.
It didn’t take much testing to confirm that Nebula’s promise to automatically discover the best route works as advertised. When running an iperf3 network speed test from nat1 to doublenat, we got 674Mbps throughput, making it painfully clear that packets aren’t being routed through lighthouse, which is in Digital Ocean’s New York datacenter several hundred miles away. Instead, doublenat has punched a tunnel outward through the NAT (Network Address Translation) layer directly to nat1, and the two hosts can use that established tunnel to communicate directly.
We can already hear some of you clamoring, “can I use this to escape obnoxious networks with overbearing firewalls?” and the answer—sorry!—is “probably not.” Just like WireGuard, Nebula operates over UDP only—so overzealous firewalls that won’t allow WireGuard connections won’t allow Nebula either. This also sharply limits its value as an exfiltration tool, since a big wash of outbound traffic on an arbitrary UDP port will stick out like a sore thumb to any network analysis tools, even if the firewall allows it.
We think the greatest potential value of using Nebula over a more conventional VPN tool like WireGuard is its ability to discover the most efficient routes wherever it happens to be. If you have Nebula running on your laptop, your home PC, and a DigitalOcean droplet, the laptop will communicate with the PC at LAN speeds when it’s at home and Internet speeds when it’s on the road.
Google’s password checking feature has slowly been spreading across the Google ecosystem this past year. It started as the “Password Checkup” extension for desktop versions of Chrome, which would audit individual passwords when you entered them, and several months later it was integrated into every Google account as an on-demand audit you can run on all your saved passwords. Now, instead of a Chrome extension, Password Checkup is being integrated into the desktop and mobile versions of Chrome 79.
All of these Password Checkup features work for people who have their username and password combos saved in Chrome and have them synced to Google’s servers. Google figures that since it has a big (encrypted) database of all your passwords, it might as well compare them against a 4-billion-strong public list of compromised usernames and passwords that have been exposed in innumerable security breaches over the years. Any time Google hits a match, it notifies you that a specific set of credentials is public and unsafe and that you should probably change the password.
The whole point of this is security, so Google is doing all of this by comparing your encrypted credentials with an encrypted list of compromised credentials. Chrome first sends an encrypted, 3-byte hash of your username to Google, where it is compared to Google’s list of compromised usernames. If there’s a match, your local computer is sent a database of every potentially matching username and password in the bad credentials list, encrypted with a key from Google. You then get a copy of your passwords encrypted with two keys—one is your usual private key, and the other is the same key used for Google’s bad credentials list. On your local computer, Password Checkup removes the only key it is able to decrypt, your private key, leaving your Google-key-encrypted username and password, which can be compared to the Google-key-encrypted database of bad credentials. Google says this technique, called “private set intersection,” means you don’t get to see Google’s list of bad credentials, and Google doesn’t get to learn your credentials, but the two can be compared for matches.
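To make the flow above more concrete, here is a toy sketch of private set intersection built on commutative encryption. It loosely follows the article’s description—two parties can compare encrypted values without either revealing its plaintext—but it is emphatically not Google’s actual protocol: the prime modulus, the exponentiation-based “encryption,” and the sample credentials are all illustrative choices.

```python
import hashlib
import math
import secrets

# Toy private set intersection via commutative encryption.
# Illustrative only -- NOT Google's real Password Checkup protocol.
P = 2**255 - 19  # a large prime modulus (an arbitrary illustrative choice)

def to_group(credential: str) -> int:
    """Hash a username:password string into the multiplicative group mod P."""
    x = int.from_bytes(hashlib.sha256(credential.encode()).digest(), "big") % P
    return x or 1  # avoid zero, which falls outside the group

def keygen() -> int:
    """Pick an exponent invertible mod P-1 so it can later be 'removed'."""
    while True:
        k = secrets.randbelow(P - 2) + 1
        if math.gcd(k, P - 1) == 1:
            return k

def encrypt(x: int, key: int) -> int:
    """Commutative 'encryption': encrypting under key a then b equals b then a."""
    return pow(x, key, P)

# Server side: a list of breached credentials, encrypted under the server's key.
server_key = keygen()
breached = {encrypt(to_group(c), server_key) for c in ["alice:hunter2", "bob:12345"]}

# Client side: blind the credential under a private key; the server (which
# never sees the plaintext) then applies its own key on top.
client_key = keygen()
blinded = encrypt(to_group("alice:hunter2"), client_key)
double_encrypted = encrypt(blinded, server_key)  # computed by the server

# The client removes the only key it can -- its own -- leaving the credential
# encrypted solely under the server's key, comparable against the server's list.
inverse = pow(client_key, -1, P - 1)
server_only = encrypt(double_encrypted, inverse)
print(server_only in breached)  # → True: this credential appears in the breach list
```

Neither side learns the other’s raw data: the server only ever sees blinded values, and the client only ever sees server-encrypted ones, yet matches still line up—which is the essence of the private-set-intersection idea the article describes.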
Building Password Checkup into Chrome should make password auditing more mainstream. Only the most security-conscious people would seek out and install the Chrome extension or perform the full password audit at passwords.google.com, and these people probably have better password hygiene to begin with. Building the feature into Chrome will put it in front of more mainstream users who don’t usually consider password security, who are exactly the kind of people who need this sort of thing. This is also the first time Password Checkup has been available on mobile, since mobile Chrome still doesn’t support extensions (Google plz).
Google says, “For now, we’re gradually rolling this out for everyone signed in to Chrome as a part of our Safe Browsing protections.” Users can control the feature in the “Sync and Google Services” section of Chrome Settings; if you’re not signed into Chrome and not syncing your data with Google’s servers, the feature won’t work.
With Password Checkup integrated into Chrome, the extension is not really useful anymore. The Web version is still great as a full audit of all the passwords you have stored with Google, and now the version built into Chrome will continually check your passwords as you enter them.