Undocumented backdoor that covertly takes snapshots found in kids’ smartwatch

A popular smartwatch designed exclusively for children contains an undocumented backdoor that makes it possible for someone to remotely capture camera snapshots, wiretap voice calls, and track locations in real time, a researcher said.

The X4 smartwatch is marketed by Xplora, a Norway-based seller of children’s watches. The device, which sells for about $200, runs on Android and offers a range of capabilities, including the ability to make and receive voice calls to and from parent-approved numbers and to send an SOS broadcast that alerts emergency contacts to the location of the watch. A separate app that runs on the smartphones of parents allows them to control how the watches are used and receive warnings when a child has strayed beyond a preset geographic boundary.

But that’s not all

It turns out that the X4 contains something else: a backdoor that went undiscovered until some impressive digital sleuthing. The backdoor is activated by sending an encrypted text message. Harrison Sand, a researcher at Norwegian security company Mnemonic, said that commands exist for surreptitiously reporting the watch’s real-time location, taking a snapshot and sending it to an Xplora server, and making a phone call that transmits all sounds within earshot.

Sand also found that 19 of the apps that come pre-installed on the watch are developed by Qihoo 360, a security company and app maker located in China. A Qihoo 360 subsidiary, 360 Kids Guard, also jointly designed the X4 with Xplora and manufactures the watch hardware.

“I wouldn’t want that kind of functionality in a device produced by a company like that,” Sand said, referring to the backdoor and Qihoo 360.

In June, Qihoo 360 was placed on a US Commerce Department sanctions list. The rationale: ties to the Chinese government made the company likely to engage in “activities contrary to the national security or foreign policy interests of the United States.” Qihoo 360 declined to comment for this post.

Patch on the way

The existence of an undocumented backdoor in a watch from a country with a known record of espionage hacks is concerning. At the same time, this particular backdoor has limited applicability. To make use of the functions, someone would need to know both the phone number assigned to the watch (it has a slot for a SIM card from a mobile phone carrier) and the unique encryption key hardwired into each device.

In a statement, Xplora said obtaining both the key and phone number for a given watch would be difficult. The company also said that even if the backdoor was activated, obtaining any collected data would be hard, too. The statement read:

We want to thank you for bringing a potential risk to our attention. Mnemonic is not providing any information beyond that they sent you the report. We take any potential security flaw extremely seriously.

It is important to note that the scenario the researchers created requires physical access to the X4 watch and specialized tools to secure the watch’s encryption key. It also requires the watch’s private phone number. The phone number for every Xplora watch is determined when it is activated by the parents with a carrier, so no one involved in the manufacturing process would have access to it to duplicate the scenario the researchers created.

As the researchers made clear, even if someone with physical access to the watch and the skill to send an encrypted SMS activates this potential flaw, the snapshot photo is only uploaded to Xplora’s server in Germany and is not accessible to third parties. The server is located in a highly-secure Amazon Web Services environment.

Only two Xplora employees have access to the secure database where customer information is stored and all access to that database is tracked and logged.

This issue the testers identified was based on a remote snapshot feature included in initial internal prototype watches for a potential feature that could be activated by parents after a child pushes an SOS emergency button. We removed the functionality for all commercial models due to privacy concerns. The researcher found some of the code was not completely eliminated from the firmware.

Since being alerted, we have developed a patch for the Xplora 4, which is not available for sale in the US, to address the issue and will push it out prior to 8:00 a.m. CET on October 9. We conducted an extensive audit since we were notified and have found no evidence of the security flaw being used outside of the Mnemonic testing.

An Xplora spokesman said the company has sold about 100,000 X4 smartwatches to date. The company is in the process of rolling out the X5. It’s not yet clear whether it contains similar backdoor functionality.

Heroic measures

Sand discovered the backdoor through some impressive reverse engineering. He started with a modified USB cable that he soldered onto pins exposed on the back of the watch. Using an interface for updating the device firmware, he was able to download the existing firmware off the watch. This allowed him to inspect the insides of the watch, including the apps and other various code packages that were installed.

A modified USB cable attached to the back of an X4 watch. Credit: Mnemonic

One package that stood out was titled “Persistent Connection Service.” It starts as soon as the device is turned on and iterates through all the installed applications. As it queries each application, it builds a list of intents—or messaging frameworks—it can call to communicate with each app.
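
Mnemonic has not published the decompiled service, but the behavior it describes (walking the installed packages and recording which broadcast actions, or intents, each one will answer to) can be sketched in a few lines of Kotlin. Everything below is illustrative: the function name and the idea of passing in a caller-supplied action list are assumptions, not code recovered from the watch.

```kotlin
import android.content.Context
import android.content.Intent

// Hypothetical sketch of how a service could map installed apps to the
// broadcast actions (intents) they respond to. The action strings are
// supplied by the caller; nothing here is taken from the X4 firmware.
fun buildIntentDirectory(
    context: Context,
    actionsOfInterest: List<String>
): Map<String, List<String>> {
    val pm = context.packageManager
    val directory = mutableMapOf<String, MutableList<String>>()
    for (action in actionsOfInterest) {
        // Ask the package manager which installed receivers accept this action.
        for (resolveInfo in pm.queryBroadcastReceivers(Intent(action), 0)) {
            val pkg = resolveInfo.activityInfo.packageName
            directory.getOrPut(pkg) { mutableListOf() }.add(action)
        }
    }
    return directory
}
```

A service built this way ends up with a directory mapping each companion app to the commands it can be sent, which matches the role the “Persistent Connection Service” appears to play.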

Sand’s suspicions were further aroused when he found intents with the following names:

  • WIRETAP_INCOMING
  • WIRETAP_BY_CALL_BACK
  • COMMAND_LOG_UPLOAD
  • REMOTE_SNAPSHOT
  • SEND_SMS_LOCATION

After more poking around, Sand figured out the intents were activated using SMS text messages that were encrypted with the hardwired key. System logs showed him that the key was stored on a flash chip, so he dumped the contents and obtained it—“#hml;Fy/sQ9z5MDI=$” (quotation marks not included). Reverse engineering also allowed the researcher to figure out the syntax required to activate the remote snapshot function.
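
Mnemonic’s write-up does not spell out the cipher mode or the full message syntax, so the following Kotlin sketch is only a minimal model under stated assumptions: an AES-encrypted, Base64-encoded payload in the SMS body and a command string that begins with the intent name. The only details taken from the research are the per-device key requirement and the intent names.

```kotlin
import java.util.Base64
import javax.crypto.Cipher
import javax.crypto.spec.SecretKeySpec

// Illustrative only: the watch's real cipher mode, padding, and command
// syntax are not public. This assumes AES with a Base64-encoded SMS body.
fun decryptCommand(smsBody: String, deviceKey: ByteArray): String {
    val cipher = Cipher.getInstance("AES/ECB/PKCS5Padding")
    cipher.init(Cipher.DECRYPT_MODE, SecretKeySpec(deviceKey, "AES"))
    val plaintext = cipher.doFinal(Base64.getDecoder().decode(smsBody))
    return String(plaintext, Charsets.UTF_8)
}

fun handleIncomingSms(smsBody: String, deviceKey: ByteArray) {
    val command = try {
        decryptCommand(smsBody, deviceKey)
    } catch (e: Exception) {
        return // not a well-formed command SMS; an ordinary text is ignored
    }
    // "REMOTE_SNAPSHOT" is one of the intent names Sand found; the dispatch
    // logic itself is an assumption for illustration.
    if (command.startsWith("REMOTE_SNAPSHOT")) {
        println("would broadcast the REMOTE_SNAPSHOT intent to the camera app")
    }
}
```

A message that fails to decrypt cleanly under the device’s own key is simply ignored, which is why an attacker needs both the watch’s phone number and its hardwired key.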

“Sending the SMS triggered a picture to be taken on the watch, and it was immediately uploaded to Xplora’s server,” Sand wrote. “There was zero indication on the watch that a photo was taken. The screen remained off the entire time.”

Sand said he didn’t activate the functions for wiretapping or reporting locations, but with additional time, he said, he’s confident he could have.

As both Sand and Xplora note, exploiting this backdoor would be difficult, since it requires knowledge of both the unique factory-set encryption key and the phone number assigned to the watch. For that reason, there’s no reason for people who own a vulnerable device to panic.

Still, it’s not beyond the realm of possibility that the key could be obtained by someone with ties to the manufacturer. And while phone numbers aren’t usually published, they’re not exactly private, either.

The backdoor underscores the kinds of risks posed by the increasing number of everyday devices that run on firmware that can’t be independently inspected without the kinds of heroic measures employed by Sand. While the chances of this particular backdoor being used are low, people who own an X4 would do well to ensure their device installs the patch as soon as practical.

Apple’s T2 security chip has an unfixable flaw

The 2014 Mac mini pictured alongside the 2012 Mac mini. They looked the same, but the insides were different in some key—and disappointing—ways.

A recently released tool is letting anyone exploit an unusual Mac vulnerability to bypass Apple’s trusted T2 security chip and gain deep system access. The flaw is one researchers have also been using for more than a year to jailbreak older models of iPhones. But the fact that the T2 chip is vulnerable in the same way creates a new host of potential threats. Worst of all, while Apple may be able to slow down potential hackers, the flaw is ultimately unfixable in every Mac that has a T2 inside.

In general, the jailbreak community hasn’t paid as much attention to macOS and OS X as it has iOS, because they don’t have the same restrictions and walled gardens that are built into Apple’s mobile ecosystem. But the T2 chip, launched in 2017, created some limitations and mysteries. Apple added the chip as a trusted mechanism for securing high-value features like encrypted data storage, Touch ID, and Activation Lock, which works with Apple’s “Find My” services. But the T2 also contains a vulnerability, known as Checkm8, that jailbreakers have already been exploiting in Apple’s A5 through A11 (2011 to 2017) mobile chipsets. Now Checkra1n, the same group that developed the tool for iOS, has released support for T2 bypass.

On Macs, the jailbreak allows researchers to probe the T2 chip and explore its security features. It can even be used to run Linux on the T2 or play Doom on a MacBook Pro’s Touch Bar. The jailbreak could also be weaponized by malicious hackers, though, to disable macOS security features like System Integrity Protection and Secure Boot and install malware. Combined with another T2 vulnerability that was publicly disclosed in July by the Chinese security research and jailbreaking group Pangu Team, the jailbreak could also potentially be used to obtain FileVault encryption keys and to decrypt user data. The vulnerability is unpatchable, because the flaw is in low-level, unchangeable code for hardware.

“The T2 is meant to be this little secure black box in Macs—a computer inside your computer, handling things like Lost Mode enforcement, integrity checking, and other privileged duties,” says Will Strafach, a longtime iOS researcher and creator of the Guardian Firewall app for iOS. “So the significance is that this chip was supposed to be harder to compromise—but now it’s been done.”

Apple did not respond to WIRED’s requests for comment.

There are a few important limitations of the jailbreak, though, that keep this from being a full-blown security crisis. The first is that an attacker would need physical access to target devices in order to exploit them. The tool can only run off of another device over USB. This means hackers can’t remotely mass-infect every Mac that has a T2 chip. An attacker could jailbreak a target device and then disappear, but the compromise isn’t “persistent”; it ends when the T2 chip is rebooted. The Checkra1n researchers do caution, though, that the T2 chip itself doesn’t reboot every time the device does. To be certain that a Mac hasn’t been compromised by the jailbreak, the T2 chip must be fully restored to Apple’s defaults. Finally, the jailbreak doesn’t give an attacker instant access to a target’s encrypted data. It could allow hackers to install keyloggers or other malware that could later grab the decryption keys, or it could make it easier to brute-force them, but Checkra1n isn’t a silver bullet.

“There are plenty of other vulnerabilities, including remote ones that undoubtedly have more impact on security,” a Checkra1n team member tweeted on Tuesday.

In a discussion with WIRED, the Checkra1n researchers added that they see the jailbreak as a necessary tool for transparency about T2. “It’s a unique chip, and it has differences from iPhones, so having open access is useful to understand it at a deeper level,” a group member said. “It was a complete black box before, and we are now able to look into it and figure out how it works for security research.”

The exploit also comes as little surprise; it’s been apparent since the original Checkm8 discovery last year that the T2 chip was also vulnerable in the same way. And researchers point out that while the T2 chip debuted in 2017 in top-tier iMacs, it only recently rolled out across the entire Mac line. Older Macs with a T1 chip are unaffected. Still, the finding is significant because it undermines a crucial security feature of newer Macs.

Jailbreaking has long been a gray area because of this tension. It gives users freedom to install and modify whatever they want on their devices, but it is achieved by exploiting vulnerabilities in Apple’s code. Hobbyists and researchers use jailbreaks in constructive ways, including to conduct more security testing and potentially help Apple fix more bugs, but there’s always the chance that attackers could weaponize jailbreaks for harm.

“I had already assumed that since T2 was vulnerable to Checkm8, it was toast,” says Patrick Wardle, an Apple security researcher at the enterprise management firm Jamf and a former NSA researcher. “There really isn’t much that Apple can do to fix it. It’s not the end of the world, but this chip, which was supposed to provide all this extra security, is now pretty much moot.”

Wardle points out that for companies that manage their devices using Apple’s Activation Lock and Find My features, the jailbreak could be particularly problematic both in terms of possible device theft and other insider threats. And he notes that the jailbreak tool could be a valuable jumping off point for attackers looking to take a shortcut to developing potentially powerful attacks. “You likely could weaponize this and create a lovely in-memory implant that, by design, disappears on reboot,” he says. This means that the malware would run without leaving a trace on the hard drive and would be difficult for victims to track down.

The situation raises much deeper issues, though, with the basic approach of using a special, trusted chip to secure other processes. Beyond Apple’s T2, numerous other tech vendors have tried this approach and had their secure enclaves defeated, including Intel, Cisco, and Samsung.

“Building in hardware ‘security’ mechanisms is just always a double-edged sword,” says Ang Cui, founder of the embedded device security firm Red Balloon. “If an attacker is able to own the secure hardware mechanism, the defender usually loses more than they would have if they had built no hardware. It’s a smart design in theory, but in the real world it usually backfires.”

In this case, you’d likely have to be a very high-value target to register any real alarm. But hardware-based security measures do create a single point of failure that the most important data and systems rely on. Even if the Checkra1n jailbreak doesn’t provide unlimited access for attackers, it gives them more than anyone would want.

This story originally appeared on wired.com.

Chinese-made drone app in Google Play spooks security researchers

A DJI Phantom 4 quadcopter drone.

The Android version of DJI Go 4—an app that lets users control drones—has until recently been covertly collecting sensitive user data and can download and execute code of the developers’ choice, researchers said in two reports that question the security and trustworthiness of a program with more than 1 million Google Play downloads.

The app is used to control and collect near real-time video and flight data from drones made by China-based DJI, the world’s biggest maker of commercial drones. The Play Store shows that it has more than 1 million downloads, but because of the way Google discloses numbers, the true number could be as high as 5 million. The app has a rating of three-and-a-half stars out of a possible total of five from more than 52,000 users.

Wide array of sensitive user data

Two weeks ago, security firm Synacktiv reverse-engineered the app. On Thursday, fellow security firm Grimm published the results of its own independent analysis. At a minimum, both found that the app skirted Google’s terms and that, until recently, the app covertly collected a wide array of sensitive user data and sent it to servers located in mainland China. A worst-case scenario is that developers are abusing hard-to-identify features to spy on users.

According to the reports, the suspicious behaviors include:

  • The ability to download and install any application of the developers’ choice through either a self-update feature or a dedicated installer in a software development kit provided by China-based social media platform Weibo. Both features could download code outside of Play, in violation of Google’s terms.
  • A recently removed component that collected a wealth of phone data including IMEI, IMSI, carrier name, SIM serial number, SD card information, OS language, kernel version, screen size and brightness, wireless network name, address and MAC, and Bluetooth addresses (see the sketch after this list). These details and more were sent to MobTech, maker of a software developer kit used until the most recent release of the app.
  • Automatic restarts whenever a user swiped the app to close it. The restarts cause the app to run in the background and continue to make network requests.
  • Advanced obfuscation techniques that make third-party analysis of the app time-consuming.
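
As a rough illustration of how little code the data collection described in the second item requires, here is a hedged Kotlin sketch of the device profile an Android app holding the READ_PHONE_STATE permission can assemble. The field list mirrors the reports, not MobTech’s actual SDK, and Android 10 and later block most of these identifiers for ordinary apps.

```kotlin
import android.annotation.SuppressLint
import android.content.Context
import android.os.Build
import android.telephony.TelephonyManager
import java.util.Locale

// Sketch of device profiling with READ_PHONE_STATE; on Android 10 and later
// most of these identifiers are no longer available to ordinary apps.
@SuppressLint("MissingPermission", "HardwareIds")
fun collectDeviceProfile(context: Context): Map<String, String?> {
    val tm = context.getSystemService(Context.TELEPHONY_SERVICE) as TelephonyManager
    return mapOf(
        "imei" to tm.imei,
        "imsi" to tm.subscriberId,
        "carrier" to tm.networkOperatorName,
        "simSerial" to tm.simSerialNumber,
        "osLanguage" to Locale.getDefault().toLanguageTag(),
        "kernelVersion" to System.getProperty("os.version"),
        "model" to Build.MODEL,
    )
}
```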

This month’s reports come three years after the US Army banned the use of DJI drones for reasons that remain classified. In January, the Interior Department grounded drones from DJI and other Chinese manufacturers out of concerns data could be sent back to the mainland.

DJI officials said the researchers found “hypothetical vulnerabilities” and that neither report provided any evidence that they were ever exploited.

“The app update function described in these reports serves the very important safety goal of mitigating the use of hacked apps that seek to override our geofencing or altitude limitation features,” they wrote in a statement. Geofencing erects virtual barriers around areas that the Federal Aviation Administration or other authorities bar drones from entering. Drones use GPS, Bluetooth, and other technologies to enforce the restrictions.

A Google spokesman said the company is looking into the reports. The researchers said the iOS version of the app contained no obfuscation or update mechanisms.

Obfuscated, acquisitive, and always on

In several respects, the researchers said, DJI Go 4 for Android mimicked the behavior of botnets and malware. Both the self-update and auto-install components, for instance, call a developer-designated server and await commands to download and install code or apps. The obfuscation techniques closely resembled those used by malware to prevent researchers from discovering its true purpose. Other similarities were an always-on status and the collection of sensitive data that wasn’t relevant or necessary for the stated purpose of flying drones.
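
Neither report reproduces the downloader itself, but the general Android pattern they describe (fetch a file from a developer-controlled server, then load and execute classes from it outside of Play review) can be sketched as follows. The class name and file are placeholders, not artifacts extracted from DJI Go 4.

```kotlin
import android.content.Context
import dalvik.system.DexClassLoader
import java.io.File

// Sketch of the "download code outside Play" pattern; the file and class
// name are placeholders, not values taken from the DJI Go 4 app.
fun loadRemoteModule(context: Context, downloadedApk: File): Class<*> {
    val loader = DexClassLoader(
        downloadedApk.absolutePath,        // APK/DEX fetched from a developer-controlled server
        context.codeCacheDir.absolutePath, // where optimized output is written (ignored on API 26+)
        null,                              // no extra native library path
        context.classLoader                // parent class loader
    )
    // Whatever class the server designates can now be instantiated and run.
    return loader.loadClass("com.example.update.Payload")
}
```

Because the server names both the file and the class to run, the code that ultimately executes can change at any time without a new app version passing through Play review.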

Making the behavior more concerning is the breadth of permissions required to use the app, which include access to contacts, microphone, camera, location, storage, and the ability to change network connectivity. Such sprawling permissions meant that the servers of DJI or Weibo, both located in a country known for its government-sponsored espionage hacking, had almost full control over users’ devices, the researchers said.

Both research teams said they saw no evidence the app installer was ever actually used, but they did see the automatic update mechanism trigger and download a new version from the DJI server and install it. The download URLs for both features are dynamically generated, meaning they are provided by a remote server and can be changed at any time.

The researchers from both firms conducted experiments that showed how both mechanisms could be used to install arbitrary apps. While the programs were delivered automatically, the researchers still had to click their approval before the programs could be installed.

Both research reports stopped short of saying the app actually targeted individuals, and both noted that the collection of IMSIs and other data had ended with the release of current version 4.3.36. The teams, however, didn’t rule out the possibility of nefarious uses. Grimm researchers wrote:

In the best case scenario, these features are only used to install legitimate versions of applications that may be of interest to the user, such as suggesting additional DJI or Weibo applications. In this case, the much more common technique is to display the additional application in the Google Play Store app by linking to it from within your application. Then, if the user chooses to, they can install the application directly from the Google Play Store. Similarly, the self-updating components may only be used to provide users with the most up-to-date version of the application. However, this can be more easily accomplished through the Google Play Store.

In the worst case, these features can be used to target specific users with malicious updates or applications that could be used to exploit the user’s phone. Given the amount of user’s information retrieved from their device, DJI or Weibo would easily be able to identify specific targets of interest. The next step in exploiting these targets would be to suggest a new application (via the Weibo SDK) or update the DJI application with a customized version built specifically to exploit their device. Once their device has been exploited, it could be used to gather additional information from the phone, track the user via the phone’s various sensors, or be used as a springboard to attack other devices on the phone’s WiFi network. This targeting system would allow an attacker to be much stealthier with their exploitation, rather than much noisier techniques, such as exploiting all devices visiting a website.

DJI responds

DJI officials have published an exhaustive and vigorous response saying that all the features and components detailed in the reports either served legitimate purposes or were unilaterally removed and weren’t used maliciously.

“We design our systems so DJI customers have full control over how or whether to share their photos, videos and flight logs, and we support the creation of industry standards for drone data security that will provide protection and confidence for all drone users,” the statement said. It provided the following point-by-point discussion:

  • When our systems detect that a DJI app is not the official version – for example, if it has been modified to remove critical flight safety features like geofencing or altitude restrictions – we notify the user and require them to download the most recent official version of the app from our website. In future versions, users will also be able to download the official version from Google Play if it is available in their country. If users do not consent to doing so, their unauthorized (hacked) version of the app will be disabled for safety reasons.
  • Unauthorized modifications to DJI control apps have raised concerns in the past, and this technique is designed to help ensure that our comprehensive airspace safety measures are applied consistently.
  • Because our recreational customers often want to share their photos and videos with friends and family on social media, DJI integrates our consumer apps with the leading social media sites via their native SDKs. We must direct questions about the security of these SDKs to their respective social media services. However, please note that the SDK is only used when our users proactively turn it on.
  • DJI GO 4 is not able to restart itself without input from the user, and we are investigating why these researchers claim it did so. We have not been able to replicate this behavior in our tests so far.
  • The hypothetical vulnerabilities outlined in these reports are best characterized as potential bugs, which we have proactively tried to identify through our Bug Bounty Program, where security researchers responsibly disclose security issues they discover in exchange for payments of up to $30,000. Since all DJI flight control apps are designed to work in any country, we have been able to improve our software thanks to contributions from researchers all over the world, as seen on this list.
  • The MobTech and Bugly components identified in these reports were previously removed from DJI flight control apps after earlier researchers identified potential security flaws in them. Again, there is no evidence they were ever exploited, and they were not used in DJI’s flight control systems for government and professional customers.
  • The DJI GO4 app is primarily used to control our recreational drone products. DJI’s drone products designed for government agencies do not transmit data to DJI and are compatible only with a non-commercially available version of the DJI Pilot app. The software for these drones is only updated via an offline process, meaning this report is irrelevant to drones intended for sensitive government use. A recent security report from Booz Allen Hamilton audited these systems and found no evidence that the data or information collected by these drones is being transmitted to DJI, China, or any other unexpected party.
  • This is only the latest independent validation of the security of DJI products following reviews by the U.S. National Oceanic and Atmospheric Administration, U.S. cybersecurity firm Kivu Consulting, the U.S. Department of Interior and the U.S. Department of Homeland Security.
  • DJI has long called for the creation of industry standards for drone data security, a process which we hope will continue to provide appropriate protections for drone users with security concerns. If this type of feature, intended to assure safety, is a concern, it should be addressed in objective standards that can be specified by customers. DJI is committed to protecting drone user data, which is why we design our systems so drone users have control of whether they share any data with us. We also are committed to safety, trying to contribute technology solutions to keep the airspace safe.

Don’t forget the Android app mess

The research and DJI’s response underscore the disarray of Google’s current app procurement system. Ineffective vetting, the lack of permission granularity in older versions of Android, and the openness of the operating system make it easy to publish malicious apps in the Play Store. Those same things also make it easy to mistake legitimate functions for malicious ones.

People who have DJI Go 4 for Android installed may want to remove it at least until Google announces the results of its investigation (the reported automatic restart behavior means it’s not sufficient to simply curtail use of the app for the time being). Ultimately, users of the app find themselves in a similar position as that of TikTok, which has also aroused suspicions, both because of some behavior considered sketchy by some and because of its ownership by China-based ByteDance.

There’s little doubt that plenty of Android apps with no ties to China commit similar or worse infractions than those attributed to DJI Go 4 and TikTok. People who want to err on the side of security should steer clear of a large majority of them.

Microsoft is adding Linux, Android, and firmware protections to Windows

Microsoft is moving forward with its promise to extend enterprise security protections to non-Windows platforms with the general release of a Linux version and a preview of one for Android. The software maker is also beefing up Windows security protections to scan for malicious firmware.

The Linux and Android moves—detailed in posts published on Tuesday here, here, and here—follow a move last year to ship antivirus protections to macOS. Microsoft disclosed the firmware feature last week.

Premium pricing

All the new protections are available to users of Microsoft Advanced Threat Protection and require Windows 10 Enterprise Edition. Public pricing from Microsoft is either non-existent or difficult to find, but according to this site, costs range from $30 to $72 per machine per year to enterprise customers.

In February, when the Linux preview became available, Microsoft said it included antivirus alerts and “preventive capabilities.” Using a command line, admins can manage user machines, initiate and configure antivirus scans, monitor network events, and manage various threats.

“We are just at the beginning of our Linux journey and we are not stopping here!” Tuesday’s post announcing the Linux general availability said. “We are committed to continuous expansion of our capabilities for Linux and will be bringing you enhancements in the coming months.”

The Android preview, meanwhile, provides several protections, including:

  • The blocking of phishing sites and other high-risk domains and URLs accessed through SMS/text, WhatsApp, email, browsers, and other apps. The features use the same Microsoft Defender SmartScreen services that are already available for Windows so that decisions to block suspicious sites will apply across all devices on a network.
  • Proactive scanning for malicious or potentially unwanted applications and files that may be downloaded to a mobile device.
  • Measures to block access to network resources when devices show signs of being compromised with malicious apps or malware.
  • Integration with the same Microsoft Defender Security Center that’s already available for Windows, macOS, and Linux.

Last week, Microsoft said it had added firmware protection to the premium Microsoft Defender. The new offering scans the Unified Extensible Firmware Interface (UEFI), the successor to the traditional BIOS that computers use during the boot process to locate and enumerate installed hardware.

The firmware scanner uses a new component added to virus protection already built into Defender. Hacks that infect firmware are particularly pernicious because they survive reinstallations of the operating system and other security measures. And because firmware runs before Windows starts, it has the ability to burrow deep into an infected system. Until now, there have been only limited ways to detect such attacks on large fleets of machines.

It makes sense that the extensions to non-Windows platforms are available only to enterprises and cost extra. I was surprised, however, that Microsoft is charging a premium for the firmware protection and only offering it to enterprises. Plenty of journalists, attorneys, and activists are equally if not more threatened by so-called evil maid attacks, in which a housekeeper or other stranger has the ability to tamper with firmware during brief physical access to a computer.

Microsoft has a strong financial incentive to make Windows secure for all users. Company representatives didn’t respond to an email asking if the firmware scanner will become more widely available.

Intel will soon bake anti-malware defenses directly into its CPUs

A mobile PC processor code-named Tiger Lake. It will be the first CPU to offer a security capability known as Control-Flow Enforcement Technology. Credit: Intel

The history of hacking has largely been a back-and-forth game, with attackers devising a technique to breach a system, defenders constructing a countermeasure that prevents the technique, and hackers devising a new way to bypass system security. On Monday, Intel is announcing its plans to bake a new parry directly into its CPUs that’s designed to thwart software exploits that execute malicious code on vulnerable computers.

Control-Flow Enforcement Technology, or CET, represents a fundamental change in the way processors execute instructions from applications such as Web browsers, email clients, or PDF readers. Jointly developed by Intel and Microsoft, CET is designed to thwart a technique known as return-oriented programming, which hackers use to bypass anti-exploit measures software developers introduced about a decade ago. While Intel first published its implementation of CET in 2016, the company on Monday is saying that its Tiger Lake CPU microarchitecture will be the first to include it.

ROP, as return-oriented programming is usually called, was software exploiters’ response to protections such as Executable Space Protection and address space layout randomization, which made their way into Windows, macOS, and Linux a little less than two decades ago. These defenses were designed to significantly lessen the damage software exploits could inflict by introducing changes to system memory that prevented the execution of malicious code. Even when successfully targeting a buffer overflow or other vulnerability, the exploit resulted only in a system or application crash, rather than a fatal system compromise.

ROP allowed attackers to regain the high ground. Rather than using malicious code written by the attacker, ROP attacks repurpose functions that benign applications or OS routines have already placed into a region of memory known as the stack. The “return” in ROP refers to use of the RET instruction that’s central to reordering the code flow.

Very effective

Alex Ionescu, a veteran Windows security expert and VP of engineering at security firm CrowdStrike, likes to say that if a benign program is like a building made of Lego bricks that were built in a specific sequence, ROP uses the same Lego pieces but in a different order. In so doing, ROP converts the building into a spaceship. The technique is able to bypass the anti-malware defenses because it uses memory-resident code that’s already permitted to be executed.

CET introduces changes in the CPU that create a new stack called the control stack. This stack can’t be modified by attackers and doesn’t store any data. It stores the return addresses of the Lego bricks that are already in the stack. Because of this, even if an attacker has corrupted a return address in the data stack, the control stack retains the correct return address. The processor can detect this and halt execution.
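
The check itself happens in silicon on every CALL and RET, but its logic can be modeled in a short Kotlin sketch: record the return address on both stacks at call time, compare the two copies at return time, and halt on a mismatch. This is a toy model of the concept, not Intel’s implementation.

```kotlin
import java.util.ArrayDeque

// Toy model of CET's control-stack check. Real CET does this in hardware on
// CALL/RET, and software cannot write to the control stack at all.
class ControlFlowMonitor {
    private val dataStack = ArrayDeque<Long>()     // the ordinary stack an exploit can corrupt
    private val controlStack = ArrayDeque<Long>()  // CET's protected copy of return addresses

    fun onCall(returnAddress: Long) {
        dataStack.push(returnAddress)
        controlStack.push(returnAddress)
    }

    // Simulates an out-of-bounds write that overwrites the saved return
    // address with the address of a ROP gadget.
    fun corruptTopOfDataStack(gadgetAddress: Long) {
        dataStack.pop()
        dataStack.push(gadgetAddress)
    }

    fun onReturn(): Long {
        val fromData = dataStack.pop()
        val fromControl = controlStack.pop()
        // A real processor raises a control-protection fault on mismatch.
        check(fromData == fromControl) { "ROP detected: $fromData != $fromControl" }
        return fromData
    }
}

fun main() {
    val cpu = ControlFlowMonitor()
    cpu.onCall(returnAddress = 0x401000)
    cpu.corruptTopOfDataStack(gadgetAddress = 0x7ffee000)
    cpu.onReturn() // throws: the control stack still holds 0x401000
}
```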

“Because there is no effective software mitigation against ROP, CET will be very effective at detecting and stopping this class of vulnerability,” Ionescu told me. “Previously, operating systems and security solutions had to guess or infer that ROP had happened, or perform forensic analysis, or detect the second stage payloads/effect of the exploit.”

Not that CET is limited to defenses against ROP. CET provides a host of additional protections, some of which thwart exploitation techniques known as jump-oriented programming and call-oriented programming, to name just two. The defense against ROP, however, is among the most interesting aspects of CET.

Those who do not remember the past

Intel has built other security functions into its CPUs with less-than-stellar results. One is Intel’s SGX, short for Software Guard eXtensions, which is supposed to carve out impenetrable chunks of protected memory for security-sensitive functions such as the creation of cryptographic keys. Another security add-on from Intel is known as the Converged Security and Management Engine, or simply the Management Engine. It’s a subsystem inside Intel CPUs and chipsets that implements a host of sensitive functions, among them the firmware-based Trusted Platform Module used for silicon-based encryption, authentication of UEFI BIOS firmware, and Microsoft’s System Guard and BitLocker.

A steady stream of security flaws discovered in both CPU-resident features, however, has made them vulnerable to a variety of attacks over the years. The most recent SGX vulnerabilities were disclosed just last week.

It’s tempting to think that CET will be similarly easy to defeat, or worse, will expose users to hacks that wouldn’t be possible if the protection hadn’t been added. But Joseph Fitzpatrick, a hardware hacker and a researcher at SecuringHardware.com, says he’s optimistic CET will perform better. He explained:

One distinct difference that makes me less skeptical of this type of feature versus something like SGX or ME is that both of those are “adding on” security features, as opposed to hardening existing features. ME basically added a management layer outside the operating system. SGX adds operating modes that theoretically shouldn’t be able to be manipulated by a malicious or compromised operating system. CET merely adds mechanisms to prevent normal operation—returning to addresses off the stack and jumping in and out of the wrong places in code—from completing successfully. Failure of CET to do its job only allows normal operation. It doesn’t grant the attacker access to more capabilities.

Once CET-capable CPUs are available, the protection will work only when the processor is running an operating system with the necessary support. Windows 10 Version 2004 released last month provides that support. Intel still isn’t saying when Tiger Lake CPUs will be released. While the protection could give defenders an important new tool, Ionescu and fellow researcher Yarden Shafir have already devised bypasses for it. Expect them to end up in real-world attacks within the decade.

Google fixes Android flaws that allow code execution with high system rights

Google has shipped security patches for dozens of vulnerabilities in its Android mobile operating system, two of which could allow hackers to remotely execute malicious code with extremely high system rights.

In some cases, the malware could run with highly elevated privileges, a possibility that raises the severity of the bugs. That’s because the bugs, located in the Android System component, could enable a specially crafted transmission to execute arbitrary code within the context of a privileged process. In all, Google released patches for at least 34 security flaws, although some of the vulnerabilities were present only in devices that use components from chipmaker Qualcomm.

Anyone with a mobile device should check to see if fixes are available for their device. Methods differ by device model, but one common method involves either checking the notification screen or clicking Settings > Security > Security update. Unfortunately, patches aren’t available for many devices.

Two vulnerabilities ranked as critical in Google’s June security bulletin are indexed as CVE-2020-0117 and CVE-2020-8597. They’re among four System flaws located in the Android system (the other two are ranked with a severity of high). The critical vulnerabilities reside in Android versions 8 through the most recent release of 11.

“These vulnerabilities could be exploited through multiple methods such as email, web browsing, and MMS when processing media files,” an advisory from the Department of Homeland Security-funded Multi-State Information Sharing and Analysis Center said. “Depending on the privileges associated with the application, an attacker could then install programs; view, change, or delete data; or create new accounts with full user rights.”

Vulnerabilities with a severity rating of high affected the Android media framework, the Android framework, and the Android kernel. Other vulnerabilities were contained in components shipped in devices from Qualcomm. The two Qualcomm-specific critical flaws reside in closed-source components. The other Qualcomm flaws were rated as high severity.

US Senate tells members not to use Zoom

The US Senate has become the latest organization to tell its members not to use Zoom because of concerns about data security on the video conferencing platform that has boomed in popularity during the coronavirus crisis.

The Senate sergeant at arms has warned all senators against using the service, according to three people briefed on the advice.

One person who had seen the Senate warning said it told each senator’s office to find an alternative platform to use for remote working while many parts of the US remain in lockdown. But the person added it had stopped short of officially banning the company’s products.

Zoom is battling to stem a public and regulatory backlash over lax privacy practices and rising harassment on the platform that has sent its stock plummeting. The company’s shares have fallen more than 25 per cent from highs just two weeks ago, to trade at $118.91.

Zoom was forced to apologize publicly last week for making misleading statements about the strength of its encryption technology, which is intended to stop outside parties from seeing users’ data.

The company also admitted to “mistakenly” routing user data through China over the past month to cope with a dramatic rise in traffic. Zoom has two servers and a 700-strong research and development arm in China. It had stated that users’ meeting information would stay in the country in which it originated.

The revelations triggered complaints from US senators, several of whom urged the Federal Trade Commission to investigate whether the company had broken consumer protection laws. It also prompted the Taiwanese government to ban Zoom for official business.

The FBI warned last month that it had received reports that teleconferences were being hacked by people sharing pornographic messages or using abusive language — a practice that has become known as “Zoombombing.”

A spokesperson for the company said: “Zoom is working around-the-clock to ensure that universities, schools, and other businesses around the world can stay connected and operational during this pandemic, and we take user privacy, security and trust extremely seriously.

“We appreciate the outreach we have received on these issues from various elected officials and look forward to engaging with them.”

However, the US Department of Homeland Security said in a memo to government cyber security officials that the company was actively responding to concerns and understood how grave they were, according to Reuters. The Pentagon told the Financial Times it would continue to allow its personnel to use Zoom.

The Senate move follows similar decisions by companies including Google, which last week decided to stop employees from downloading the app for work.

“Recently, our security team informed employees using Zoom Desktop Client that it will no longer run on corporate computers as it does not meet our security standards for apps used by our employees,” Jose Castaneda, a Google spokesperson, said. However, he added that employees wanting to use Zoom to stay in touch with family and friends on their mobiles or via a web browser could do so.

The Google decision was first reported by BuzzFeed.

Zoom has tried to stem the tide of criticism in recent days. The company said on Wednesday it had hired Alex Stamos, the former Facebook security chief, as an outside security consultant, days after saying it would redirect its engineering resources to tackle security and privacy issues.

© 2020 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.

Security tips every teacher and professor needs to know about Zoom, right now

With the coronavirus pandemic forcing millions of people to work, learn, and socialize from home, Zoom conferences are becoming a default method to connect. And with popularity comes abuse. Enter Zoom-bombing, the phenomenon of trolls intruding into other people’s meetings for the sole purpose of harassing attendees, usually by bombarding them with racist or sexually explicit images or statements. A small sample of the events over the past few days:

  • An attendee who disrupted an Alcoholics Anonymous meeting by shouting misogynistic and anti-Semitic slurs, along with the statement “Alcohol is soooo good,” according to Business Insider. Meeting organizers eventually muted and removed the intruder but only after more than half of the participants had left.
  • A Zoom conference hosting students from the Orange County Public Schools system in Florida that was disrupted after an uninvited participant exposed himself to the class.
  • An online meeting of black students at the University of Texas that was cut short when it was interrupted by visitors using racial slurs.

The basics

As disruptive and offensive as it is, Zoom-bombing is a useful reminder of just how fragile privacy can be in the world of online conferencing. Whereas everyday meetings among faculty members, boards of directors, and employees are protected by physical barriers such as walls and closed doors, Zoom conferences can be secured only through settings that many users aren’t versed in using. What follows are tips for avoiding the most common Zoom conference pitfalls.

Make sure meetings are password protected. The best way to ensure meetings can be joined only by people who have the password is to turn on Require a password for instant meetings in the user settings. Even when the setting is turned off, there’s the ability to require a password when scheduling a meeting. It may not be practical to password protect every meeting, but conference organizers should use this measure as often as possible.

When possible, don’t announce meetings on social media or other public outlets. Instead, send messages only to the participants, using email or group settings in Signal, WhatsApp, or other messenger programs. This advice is especially important if you happen to lead a country, as the UK’s prime minister demonstrated. (Fortunately, Prime Minister Boris Johnson had password-protected the meeting and was prudent enough not to have included the passphrase in his tweet. Even then, his tweet divulged the IDs of multiple participants.)

Carefully inspect the list of participants periodically, whenever possible. This can be done by the organizer or trusted participants. Any users who are unauthorized can be booted. (More about how to do that later.)

Carefully control screen sharing. The user settings allow organizers to set sharing behavior by default. People who rarely need sharing should turn it off altogether by toggling the setting to off. In the event participants require screen sharing, the slider should be turned on along with the setting that allows only the host to share. Organizers should allow all participants to share screens only when the host knows and fully trusts everyone in a meeting.

And while you’re at it

The four measures above are cardinal. Here are a few other suggestions for securing Zoom meetings:

Disable the Join Before Host setting so that organizers can control the meeting from its very start.

Use the Waiting Room option to admit participants. This will prevent admittance of trolls should they have slipped through the cardinal defenses above.

Lock a meeting, when possible, once it’s underway. This will prevent unauthorized people from joining later. Locking a meeting can be accomplished by clicking Manage Participants and using the controls that appear on the right of the meeting window. Manage Participants also allows an organizer to mute all participants, eject select participants, or stop select participants from appearing by video.

Be aware of everything that’s within view of your camera. Whether working from home or an office, there may be diagrams, drawings, notes, or other things you don’t want other participants to see. Remove these from view of the camera before the meeting starts.

Beyond the above advice, Zoom users should consider using a browser to connect to meetings rather than the dedicated Zoom app. I prefer this setting because I believe the attack surface on my system—that is, the number of vulnerabilities a hacker can exploit to breach my security—grows with each app I install. In 2020, most browsers are hardened against attacks. Other types of software are less so.

Zoom makes the Web option difficult to find after clicking on the Join a Meeting link. In my testing on a Windows 10 machine, the option appeared only after I uninstalled the Zoom client. Even then, Zoom pushed an installation file after I tried to join a meeting. I was able to use the browser only after refusing the download and choosing Join from your browser. On a Mac, I was able to find the option, even when I had the Zoom client installed, by clicking cancel on the app installation dialog box. A Chrome extension called Zoom Redirector will also make it easy to find the link (Firefox and Edge versions of the open source addon are here). The permissions required by the extension suggest it’s not much of a privacy or security threat.

Users opting for the browser option will have the best results if they use Chrome. Firefox and other browsers will prevent some key features, such as audio and video, from working at all. As a courtesy, meeting organizers can choose a setting that can make it easier for participants to find the option.

Fortunately, Zoom has disabled an attention-tracking feature that allowed organizers to tell when a participant didn’t have the meeting in focus for more than 30 seconds, for instance, because the participant switched to a different browser tab. This capability was intrusive. It’s great that Zoom removed it.

Rental cars can be remotely started, tracked, and more after customers return them

The screen displayed by FordPass four days after an Enterprise Rent-A-Car customer returned his Ford Mustang. Credit: Masamba Sinclair

In October, Ars chronicled the story of a man who was able to remotely start, stop, lock, unlock, and track a Ford Explorer he rented and returned five months earlier. Now, something almost identical has happened again to the same Enterprise Rent-A-Car customer. Four days after returning a Ford Mustang, the FordPass app installed on the phone of Masamba Sinclair continues to give him control of the car.

Like the last time, Sinclair could track the car’s location at any given time. He could start and stop the engine and lock and unlock its doors. Enterprise only removed Sinclair’s access to the car on Wednesday, more than three hours after I informed the rental agency of the error.

“It looks like someone else has rented it and it’s currently at a golf resort,” Sinclair wrote on Tuesday in an email. “This car is LOUD so starting the engine will definitely start people asking a lot of questions.” On Wednesday, before his access was removed, he added: “Looks like the previous rental is over and it’s back at the Enterprise parking lot.” Below is a video demonstrating the control he had until then.

Video: FordPass access.

We take security and privacy seriously

In October, both Enterprise and Ford said they had mechanisms in place to ensure that FordPass, and other remote apps provided by Ford, were unpaired before vehicles were sold or rented to new customers. The responses were problematic for several reasons. Enterprise, for instance, said rental agreements that customers sign remind them to wipe their data from cars upon their return. The problem is that the reminder doesn’t warn renters of the risks that come when a previous customer’s app remains paired to the vehicle they are renting.

What’s more, customers have little incentive to unpair the app from a car they’re returning. Customers are often scrambling to catch flights and may not want to be bothered searching through menus they’ve never seen before. And since the privacy and security risks fall solely on the new customer, nefarious people returning the car may want to maintain remote access. Unpairing the app by rental agency employees should be standard practice when cars are returned, one that’s no different from vacuuming the car’s carpet or checking its engine.

Ford, meanwhile, maintained that there are several ways drivers can detect when an app has access to their vehicle. The car maker also said it reminds dealerships to unpair cars before being resold.

None of those measures appears to adequately address the risk stemming from people continuing to have control over vehicles after the vehicles have been rented or sold to new customers. Sinclair acknowledges that he had the ability to unpair his device himself. He said he didn’t do that because he wanted to test the safety procedures put in place by the companies that use and develop the app. An article published last week by KrebsOnSecurity—recounting a man who continued to have remote access to a Ford Focus four years after his lease expired—suggests the problem isn’t isolated.

The problem isn’t that there’s no way to remove previous renters’ or owners’ access to a paired vehicle. Ford vehicles, for instance, display a label on a dashboard screen whenever location sharing, remote start/stop, and remote lock/unlock are active. Popups will also appear at each ignition when location services are active and no known paired Bluetooth devices are detected. The messages can solve the problem only if they’re prominent and clear enough that users recognize the risk. Asked for comment, a Ford spokesman said that the notifications the company described in October remained in effect.

Enterprise officials, meanwhile, provided the following statement:

The safety and privacy of our customers is an important priority for us as a company. We appreciate this being brought to our attention and we are actively working to follow up on the issue related to this specific rental that took place last week.

Following the outreach last fall, we updated our car cleaning guidelines related to our master reset procedure. Additionally, we instituted a frequent secondary audit process in coordination with Ford. We also started working with Ford and are very near the completion of testing software with them that will automate the prevention of FordPass pairing by rental customers.

We will use this latest experience as we continue evolving our processes to ensure they best address features and technologies that are continually being added to vehicles.

Vehicles from other manufacturers are likely to have similar features, and like the features provided by Ford, they’re probably easy for many drivers to miss. People renting or buying new cars would do well to read the manuals carefully to learn precisely how remote access works and how to ensure it’s removed from previous customers.

Not so IDLE hands: FBI program offers companies data protection via deception

The FBI’s IDLE program uses “obfuscated” data to hide real data from hackers and insider threats, making data theft harder and giving security teams a tool to spot illicit access. Credit: Getty Images

The Federal Bureau of Investigation is in many ways on the front lines of the fight against both cybercrime and cyber-espionage in the US. These days, the organization responds to everything from ransomware attacks to data thefts by foreign government-sponsored hackers. But the FBI has begun to play a role in the defense of networks before attacks have been carried out as well, forming partnerships with some companies to help prevent the loss of critical data.

Sometimes, that involves field agents proactively contacting companies when they have information about a threat—as two FBI agents did when they caught wind of researchers trying to alert casinos to vulnerabilities they said they had found in casino kiosk systems. “We have agents in every field office spending a large amount of time going out to companies in their area of responsibility establishing relationships,” Long T. Chu, acting assistant section chief for the FBI’s Cyber Engagement and Intelligence Section, told Ars. “And this is really key right now—before there’s a problem, providing information to help these companies prepare their defenses. And we try to provide as specific information as we can.”

But the FBI is not stopping its consultative role at simply alerting companies to threats. An FBI flyer shown to Ars by a source broadly outlined a new program aimed at helping companies fight data theft “caused by an insider with illicit access (or systems administrator), or by a remote cyber actor.” The program, called IDLE (Illicit Data Loss Exploitation), does this by creating “decoy data that is used to confuse illicit… collection and end use of stolen data.” It’s a form of defensive deception—or as officials would prefer to refer to it, obfuscation—that the FBI hopes will derail all types of attackers, particularly advanced threats from outside and inside the network.

Going proactive

A recent FBI Private Industry Notification (PIN) warned of social engineering attacks targeting two-factor authentication.

In a discussion about the FBI’s overall philosophy on fighting cybercrime, Chu told Ars that the FBI is “taking more of a holistic approach” these days. Instead of reacting to specific events or criminal actors, he said, “we’re looking at cyber crime from a key services aspect”—aka, what are the things that cybercriminals target?—”and how that affects the entire cyber criminal ecosystem. What are the centers of gravity, what are the key services that play into that?”

In the past, the FBI got involved only when a crime was reported. But today, the new approach means playing more of a consultative role to prevent cybercrime through partnerships with both other government agencies and the private sector. “If you ever have the opportunity to go to the courtyard at FBI Headquarters, there’s a quote there. ‘The most effective weapon against crime is cooperation, the efforts of all law enforcement and the support and understanding of the American people.’ That can not be more true today, but it expands from beyond just law enforcement to the private sector,” Chu said. “That’s because we’re facing one of the greatest threats that our nation has ever faced, arguably, and that’s the cyber threat.”

An example of that sort of outreach was visible in a case Ars reported on in March—that of the casino kiosk vendor Atrient. FBI Las Vegas field office and FBI Cyber Division agents picked up on Twitter posts about an alleged vulnerability in Atrient’s infrastructure, and the agents connected the company and an affected customer with the researchers to resolve the issue (which, in Atrient’s case at least, went somewhat awry). But in these situations, the FBI now also shares information it gathers from other sources, including data gathered from ongoing investigations.

Sharing happens a lot faster, Chu said, when there’s a “preexisting relationship with our partners, so we know exactly who we need to call and vice versa.” And information flows faster when it goes both ways. “Just as we’re trying hard to get the private industry information as fast as possible, it’d be a lot more effective if we’re getting information from the private industry as well,” he said. Exchanging information about IP addresses, indicators of compromise, and other threat data allows the FBI to aggregate the data, “run that against our databases and all our resources, and come up with a much stronger case, so to speak, against our adversaries,” Chu noted, “along with trying to attribute or identify who did it will prevent further attacks from happening.”

Some information sharing takes the form of collaboration with industry information sharing and analysis centers (ISACs) and “Flash” and “Private Industry Notice” (PIN) alerts on cybercrime issues. And to build more direct relationships with companies’ security executives, the FBI also offers a “CISO Academy” for chief information security officers twice a year at the FBI Academy in Quantico, Virginia. Attendees are indoctrinated on the FBI’s investigation approaches, and they learn what kind of evidence needs to be preserved to help spur investigations forward.

But for some sectors of particular interest, the FBI is now trying to get a deeper level of collaboration going—especially with companies in the defense industrial base (DIB) and other critical infrastructure industries. The FBI sees these areas as crucial industry-spanning networks, and it hopes to build defense in depth against cyber-espionage, intellectual property theft, and the exposure of other data that foreign nations in particular could use to harm national security or the economy.

That’s precisely where IDLE comes in.

PoS malware skimmed convenience store customers’ card data for 8 months

US convenience store chain Wawa said on Thursday that it recently discovered malware that skimmed customers’ payment card data at just about all of its 850 stores.

The infection began rolling out to the company’s payment-processing systems on March 4 and wasn’t discovered until December 10, an advisory published on the company’s website said. It took two more days for the malware to be fully contained. Most locations’ point-of-sale systems were affected by April 22, 2019, although the advisory said some locations may not have been affected at all.

The malware collected payment card numbers, expiration dates, and cardholder names from payment cards used at “potentially all Wawa in-store payment terminals and fuel dispensers.” The advisory didn’t say how many customers or cards were affected. The malware didn’t access debit card PINs, credit card CVV2 numbers, or driver license data used to verify age-restricted purchases. Information processed by in-store ATMs was also not affected. The company has hired an outside forensics firm to investigate the infection.

Thursday’s disclosure came after Visa issued two security alerts—one in November and another this month—warning of payment-card-skimming malware at North American gasoline pumps. Card readers at self-service fuel pumps are particularly vulnerable to skimming because they continue to read payment data from cards’ magnetic stripes rather than card chips, which are much less susceptible to skimmers.

In the November advisory, Visa officials wrote:

The recent attacks are attributed to two sophisticated criminal groups with a history of large-scale, successful compromises against merchants in various industries. The groups gain access to the targeted merchant’s network, move laterally within the network using malware toolsets, and ultimately target the merchant’s POS environment to scrape payment card data. The groups also have close ties with the cybercrime underground and are able to easily monetize the accounts obtained in these attacks by selling the accounts to the top tier cybercrime underground carding shops.

The December advisory said that two of three attacks bore the hallmarks of Fin8, an organized cybercrime group that has targeted retailers since 2016. There’s no indication the Wawa infections have any connection to the ones in the Visa advisories.

People who have used payment cards at a Wawa location should closely review billing statements from the past eight months. It’s always a good idea to regularly review credit reports as well. Wawa said it will provide one year of identity-theft protection and credit monitoring from credit-reporting service Experian at no charge. Thursday’s disclosure lists other steps card holders can take.

Dark Overlord taunted, threatened, and extorted. Now alleged member is behind bars

Federal authorities say they have taken custody of a UK man who was a member of The Dark Overlord, a group that has taken credit for hacking into more than a dozen companies, stealing valuable data, and then demanding ransoms for its return. Stolen material included then-unreleased episodes of popular television shows and millions of patient records.

Nathan Wyatt, 39, was extradited from the United Kingdom to St. Louis, Missouri, after losing a year-long legal fight to block the transfer. Wyatt was arraigned in US District Court for the Eastern District of Missouri on Wednesday. He pleaded not guilty.

An indictment unsealed in the case alleged Wyatt participated in hacks on three healthcare providers, a medical records company, and an accounting firm. The indictment said Wyatt conspired with other members of The Dark Overlord to hack into the companies, steal their valuable data, and threaten to publish it unless they received payments in bitcoin.

Taunts

The hackers allegedly contacted executives of the hacked companies by email and SMS text messages. When hacked companies were slow to pay the ransoms, the messages often contained threats or taunts. In July 2017, a member of the group sent a text to the daughter of an owner of one of the healthcare providers. One text read: “Hi… you look peaceful… by the way did your daddy tell you he refused to pay us when we stole his company files in 4 days we will be releasing for sale thousands of patient info. Including yours…”

Prosecutors said that Wyatt registered two phone numbers used in the crimes. One was used to register a VPN account and a Twitter account employed in the scheme; the other sent threatening and extortionate messages to hacked parties.

The indictment made no mention of more than a dozen other hacks that matched the same mode of operation and for which the group took credit. Among them was the release of nine episodes of Orange Is the New Black in April 2017. At the time, the episodes were unavailable on Netflix. According to DataBreaches.net, which has extensively covered The Dark Overlord hacks, the group managed to breach Larson Studios, a post-production facility, and make off with the following TV shows or movies:

  • A Midsummer’s Nightmare – TV Movie
  • Above Suspicion – Film
  • Bill Nye Saves the World – TV Series
  • Breakthrough – TV Series
  • Brockmire – TV Series
  • Bunkd – TV Series
  • Celebrity Apprentice (The Apprentice) – TV Series
  • Food Fact or Fiction – TV Series
  • Handsome – Film
  • Hopefuls – TV Series
  • Hum – Short
  • It’s Always Sunny in Philadelphia – TV Series
  • Jason Alexander Project – TV Series
  • Liza Koshy Special – YoutubeRed
  • Lucha Underground – TV Series
  • Lucky Roll – TV Series
  • Making History – TV Series
  • Man Seeking Woman – TV Series
  • Max and Shred – TV Series
  • Mega Park – TV Series
  • NCIS Los Angeles – TV Series
  • New Girl – TV Series

DataBreaches.net reported that The Dark Overlord was behind hacks on more than a dozen other companies, including ABC Networks, an insurance firm, a plastic surgery clinic, the maker of Gorilla Glue, a real estate company, and a human resources firm, to name just a few.

Threats

The Daily Beast reported in late 2017 that members of The Dark Overlord sent texts to parents in Iowa threatening to kill their kids. The Courier Journal reported death threats made to middle and high schools in the name of The Dark Overlord.

Last year, according to Bleeping Computer, authorities in Serbia arrested another alleged member of The Dark Overlord. The suspect was identified only by initials, making it hard to track the outcome of the arrest.

Wyatt was reportedly arrested in 2016 in connection with the theft of intimate and nude photos from the iCloud account of Pippa Middleton, sister of Kate Middleton, the Duchess of Cambridge. He was eventually released with no charges filed. In 2017, according to DataBreaches.net, Wyatt pleaded guilty to 20 counts of fraud by false representation, two counts of blackmail, and one count of possession of an identity document with intent to deceive (a false passport).

In January, while Wyatt was in custody in the UK, prosecutors unearthed evidence that he was involved in extortions carried out by The Dark Overlord. He had been fighting extradition ever since.

Contractor admits planting logic bombs in his software to ensure he’d get new work

Many IT workers worry their positions will become obsolete as changes in hardware, software, and computing tasks outstrip their skills. A former contractor for Siemens concocted a remedy for that—plant logic bombs in projects he designed that caused them to periodically malfunction. Then wait for a call to come fix things.

On Monday, David A. Tinley, a 62-year-old from Harrison City, Pennsylvania, was sentenced to six months in prison and a $7,500 fine for the scheme. The sentence came five months after he pleaded guilty to a charge of intentional damage to a protected computer. Tinley was a contract employee for Siemens Corporation at its Monroeville, Pennsylvania, location.

According to a charging document filed in US District Court for the Western District of Pennsylvania, the logic bombs Tinley surreptitiously planted into his projects caused them to malfunction after a certain preset amount of time. Because Siemens managers were unaware of the logic bombs and didn’t know the cause of the malfunctions, they would call Tinley and ask him to fix the misbehaving projects. The scheme ran from 2014 to 2016.

Tinley will be under supervised release for two years following his prison term. He will also pay restitution. The parties in the case stipulated a total loss amount of $42,262.50. Tinley faced as much as 10 years in prison and a $250,000 fine.

Hackers steal data for 15 million patients, then sell it back to lab that lost it

Canada’s biggest provider of specialty laboratory testing services said it paid hackers an undisclosed amount for the return of personal data they stole belonging to as many as 15 million customers.

Toronto, Ontario-based LifeLabs notified Canadian authorities of the attack on November 1. The company said a cyberattack struck computer systems that stored data for about 15 million customers. The stolen information included names, addresses, email addresses, customer logins and passwords, health card numbers, and lab test results.

The incident response, company President and CEO Charles Brown said in a statement, included “retrieving the data by making a payment.” The executive added: “We did this in collaboration with experts familiar with cyber-attacks and negotiations with cyber criminals.” The statement didn’t say how much LifeLabs paid for the return of the data. Representatives didn’t immediately respond to an email seeking the amount.

According to an advisory issued by the Office of the Information and Privacy Commissioner of Ontario and the Office of the Information and Privacy Commissioner for British Columbia: “LifeLabs advised our offices that cyber criminals penetrated the company’s systems, extracting data and demanding a ransom. LifeLabs retained outside cybersecurity consultants to investigate and assist with restoring the security of the data.”

LifeLabs said that its investigation so far indicates that the accessed test results were from 2016 or earlier and belonged to about 85,000 customers. Accessed health card information was also from 2016 or earlier. So far, there’s no indication any of the stolen data has been distributed to parties other than LifeLabs.

The LifeLabs statement said that company officials have fixed the system that led to the breach. The company is providing a year of free identity theft monitoring and identity theft insurance. Affected customers can sign up for the protection through the company’s website.

5G deployment stands ready to supercharge the Internet of Things


It’s true that inorganic users don’t yell at customer-service reps or trash-talk companies on Twitter. But connected devices can also benefit from some less-obvious upgrades that 5G should deliver—and we, their organic overlords, could profit in the long run.

You may have heard about 5G’s Internet-of-Things potential yourself in such gauzy statements as “5G will make every industry and every part of our lives better” (spoken by Meredith Attwell Baker, president of the wireless trade group CTIA, at the MWC Americas trade show in 2017) and “It’s a wholly new technology ushering in a new era of transformation” (from Ronan Dunne, executive vice president and CEO of Verizon’s consumer group, at 2019’s Web Summit conference).

But as with 5G in the smartphone and home-broadband contexts, the ripple effects alluded to in those statements are potentially huge—and they will take years to land on our shores. Yes, you’ve heard this before: the news is big, but it’s still early days.

Massively multiplayer mobile bandwidth

The long-term map for 5G IoT promises to support a density of devices far beyond what current-generation LTE can deliver—up to a million “things” per square kilometer, versus almost 61,000 under today’s 4G. That density will open up possibilities that today would require a horrendous amount of wired connectivity.

For example, precision-controlled factories could take advantage of the space in the airwaves to implement extremely granular monitoring, and 5G IoT promises to do that job for less. “You can put tons of environmental sensors everywhere,” said Recon Analytics founder Roger Entner. “You can put a tag on every piece of equipment.”

An automated robot production line at SIASUN Robot & Automation Co., Ltd. High-density IoT deployments could put monitoring tags on everything in this picture.

“Either I upgrade this to fiber to connect the machines, or I use millimeter-wave 5G in the factory,” echoes Rüdiger Schicht, a senior partner with the Boston Consulting Group. “Everything we hear on reliability and manageability of that infrastructure indicates that 5G is superior.”

Millimeter-wave 5G runs on bands of frequencies starting at 24GHz, far above the frequencies employed for LTE. The enormous amounts of free spectrum up there allow for gigabit speeds—at the cost of range, which would be limited to a thousand feet or so. That still exceeds Wi-Fi’s reach, though.

Low-band 5G on the same frequencies today used for 4G doesn’t allow for a massive speed boost but should at least cover far more ground, while mid-band 5G should offer a good mix of speed and coverage—at least, once carriers have more free spectrum on which to provide that coverage. (If you’d like a quick refresher on the various flavors of 5G, our story from a couple of weeks ago has you covered!)

In the United States, fixing those spectrum issues hinges on the Federal Communications Commission’s recently announced plan to auction off 280MHz of so-called C-band spectrum, between 3.7GHz and 3.98GHz, on a sped-up timetable that could see those bands in service in two to three years.

And that means there’s some time to figure things out. Companies aren’t lighting up connected devices by the millions just yet.

The current 5G standard—formally speaking, 3GPP Release 15—does not include support for the enormous device density we’re talking about. That will have to wait until Release 16, now in its final stages of approval, although Entner warns that we won’t see compatible hardware for at least another year or two.

No-fiber zone: FCC funds 25Mbps, data-capped satellite in rural areas

Viasat-2, a satellite launched by Viasat in 2017.

The Federal Communications Commission is giving $87.1 million in rural-broadband funding to satellite operator Viasat to help the company lower prices and raise data caps.

The FCC’s Connect America Fund generally pays ISPs to expand their networks into rural areas that lack decent home Internet access. Viasat’s satellite service already provides coverage of 98 percent of the US population in 50 states, so it doesn’t need government funding to expand its network the same way that wireline operators do. But Viasat will use the money to offer Internet service “at lower cost to consumers, while also permitting higher usage allowances, than it typically provides in areas where it is not receiving Connect America Fund support,” the FCC said in its announcement yesterday.

Viasat’s $87.1 million is to be used over the next 10 years “to offer service to more than 121,700 remote and rural homes and businesses in 17 states.” Viasat must provide speeds of at least 25Mbps for downloads and 3Mbps for uploads.
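
For a rough sense of scale—a back-of-the-envelope calculation based only on the figures above, treating the “more than 121,700” count as exactly 121,700—the subsidy works out to a bit over $700 per location across the decade:

```typescript
// Back-of-the-envelope math using only the figures in the FCC announcement.
const totalFundingUsd = 87_100_000; // $87.1 million over the 10-year term
const locations = 121_700;          // "more than 121,700 remote and rural homes and businesses"
const years = 10;

const perLocation = totalFundingUsd / locations; // ≈ $716 per location in total
const perLocationPerYear = perLocation / years;  // ≈ $72 per location per year

console.log(perLocation.toFixed(0), perLocationPerYear.toFixed(0));
```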

While the funding for Viasat could certainly improve access for some people, the project helps illustrate how dire the broadband shortage is in rural parts of many states. Viasat’s service is generally a last-ditch option for people in areas where there’s no fiber or cable and where DSL isn’t good enough to provide a reasonably fast and stable connection. Viasat customers have to pay high prices for slow speeds and onerous data limits.

Future services relying on low-Earth-orbit satellites from companies such as SpaceX and OneWeb could dramatically boost speeds and data caps while lowering latency. But Viasat’s service still relies on satellites in geostationary orbits about 22,000 miles above the planet and suffers from latency of nearly 600ms, much worse than the 10ms to 20ms of fiber services (as measured in customer homes by the FCC in September 2017). Viasat’s service is classified by the FCC’s Connect America Fund as “high latency,” a category covering anything up to 750ms.
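
Most of that delay is simple physics. A signal has to travel up to a geostationary satellite roughly 35,786km (about 22,000 miles) overhead and back down again—once for the request and once for the response—which takes close to half a second even at the speed of light; processing and terrestrial routing account for the rest of the observed ~600ms. A quick sanity-check calculation:

```typescript
// Minimum round-trip time imposed by the speed of light for a geostationary
// "bent pipe" link: up and down for the request, up and down again for the response.
const GEO_ALTITUDE_KM = 35_786;      // geostationary orbit altitude
const SPEED_OF_LIGHT_KM_S = 299_792; // kilometers per second

const roundTripDistanceKm = GEO_ALTITUDE_KM * 4; // user -> sat -> ground, then back

const minLatencyMs = (roundTripDistanceKm / SPEED_OF_LIGHT_KM_S) * 1000;
console.log(minLatencyMs.toFixed(0)); // ≈ 477ms before any processing delay
```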

The Connect America Fund is paid for by Americans through fees on their phone bills.

Prices and data caps not revealed

A Viasat spokesperson would not tell us what prices and data caps will be applied to the company’s FCC-subsidized plans. Viasat said it will provide the required 25Mbps service “along with an evolving usage allowance, and at FCC-defined prices, to certain areas, where we will be subject to a new range of federal and state regulations.”

The materials released by the FCC yesterday don’t provide price and data-cap information, either. We contacted the FCC and will update this article if we get any answers.

Viasat’s current prices and data allotments are pretty bad, so hopefully there will be a significant improvement. Plans and pricing vary by ZIP code; offers listed on BroadbandNow include $50 a month for download speeds of up to 12Mbps and only 12GB of “priority data” each month. The price rises after a two-year contract expires.

“Once priority data is used up, speeds will be reduced to up to 1 to 5Mbps during the day and possibly below 1Mbps after 5pm,” BroadbandNow’s summary says. Customers can use data without affecting the limit between 3am and 6am.

Other plans include $75 a month for speeds of 12Mbps and 25GB of priority data; $100 a month for 12Mbps and 50GB; and $150 a month for 25Mbps and “unlimited” data. Even on the so-called unlimited plan, speeds “may be prioritized behind other customers during network congestion” after you use 100GB in a month. Because of these onerous limits, Viasat lowers streaming video quality to reduce data usage. Viasat says it provides speeds of up to 100Mbps but only “in select areas.”

Viasat also charges installation fees, a $10-per-month equipment lease fee, and taxes and surcharges. Viasat offers a two-year price lock, but this does not apply to the taxes and surcharges. In order to avoid signing a two-year contract, you have to pay a $300 “No Long-Term Contract” fee.

Exhume dead cryptocurrency exec who owes us $250 million, creditors demand

In late January, the wife of a cryptocurrency-exchange founder testified that her husband inadvertently took at least $137 million of customer assets to the grave when he died without giving anyone the password to his encrypted laptop. Now, outraged investors want to exhume the founder’s body to make sure he’s really dead.

The dubious tale was first reported in February, when the wife of Gerry Cotten, founder of the QuadrigaCX cryptocurrency exchange, submitted an affidavit stating he died suddenly while vacationing in India, at the age of 30. The cause: complications of Crohn’s disease, a bowel condition that is rarely fatal. At the time, QuadrigaCX lost control of at least $137 million in customer assets because the funds were stored on a laptop that—according to the widow’s affidavit—only Cotten knew the password to.

Widow Jennifer Robertson testified that she had neither the password nor the recovery key to the laptop. The laptop, she said, stored the cold wallet—that is, a digital wallet not connected to the Internet—that contained the digital currency belonging to customers of the exchange. In addition to at least $137 million in digital coin belonging to more than 100,000 customers, another $53 million was tied up in disputes with third parties, investors reported at the time.

Robertson had testified that she conducted “repeated and diligent searches” for the password but came up empty. She went on to say she hired experts to attempt to decrypt the laptop, but they too failed. One expert profiled Cotten in an attempt to hack the computer, but that attempt also came to nothing.

Questionable circumstances

On Tuesday, The New York Times reported that the amount exchange clients were unable to access is now calculated to be $250 million. Meanwhile, law enforcement officials in both Canada—where QuadrigaCX is located—and in the United States are investigating potential wrongdoing, and investors are clamoring for proof Cotten is actually dead.

Lawyers representing exchange clients on Friday asked Canadian law enforcement officials to exhume his body and conduct an autopsy “to confirm both its identity and the cause of death,” the NYT said. The letter cited “the questionable circumstances surrounding Mr. Cotten’s death and the significant losses” suffered in the incident. The letter went on to ask that the exhumation and autopsy be completed no later than “spring of 2020, given decomposition concerns.”

Quadriga didn’t disclose Cotten’s death until January 14, in a Facebook post, more than a month after it was said to have occurred. The QuadrigaCX platform went down on January 28, leaving users with no way to withdraw funds they had deposited with the exchange. Clients have taken to social media ever since to claim the death and loss of the password were staged in an attempt to abscond with their digital coin.

Besides an investigation by the Supreme Court of Nova Scotia, the FBI is also conducting an investigation into the company in conjunction with the IRS, the US Attorney for the District of Columbia, and the Justice Department’s Computer Crime and Intellectual Property Section.

One of the investigations has already unearthed circumstances that some may find suspicious. According to the NYT, a report from Ernst & Young (an auditing firm hired by the Supreme Court of Nova Scotia) found that QuadrigaCX didn’t appear to have any “basic corporate records,” including accounting records. More concerning, the report said the exchange had transferred “significant volumes of cryptocurrency” into personal accounts held by Cotten on other exchanges. The report also documented the transfer of “substantial funds” to Cotten personally that had no clear business justification.

How the exhumation and autopsy would lead to the recovery of the missing cryptocurrency is not clear. But they might go a long way to confirming or debunking the claims Cotten died at the time and in the manner disclosed to QuadrigaCX customers.

QuadrigaCX and the case of the missing $250 million is the kind of event that would be unthinkable for most financial institutions. In the frothy and largely unregulated world of cryptocurrencies, such debacles are a regular if not frequent occurrence.

iPhones and iPads finally get key-based protection against account takeovers

For the past couple of years, iPhone and iPad users have been relegated to second-class citizenship when it comes to a cross-industry protocol that promises to bring effective multi-factor authentication to the masses. While Android, Windows, Mac, and Linux users had an easy way to use the fledgling standard when logging in to Google, GitHub, and dozens of other sites, the process on iPhones and iPads was either painful or non-existent.

Apple’s reticence wasn’t just bad for iPhone and iPad users looking for the most effective way to thwart the growing scourge of account takeovers. The hesitation was bad for everyone else, too. With one of the most important computing platforms giving the cold shoulder to WebAuthn, the fledgling standard had little chance of gaining critical mass.

And that was unfortunate. WebAuthn and its U2F predecessor are arguably the most effective protection against the growing rash of account takeovers. They require a person logging in with a password to also present a pre-enrolled fingerprint, facial scan, or physical security key. The setup makes most existing types of account takeovers impossible, since they typically rely solely on theft of a password.

Developed by the cross-industry FIDO Alliance and adopted by the World Wide Web Consortium in March, WebAuthn has no shortage of supporters. It has native support in Windows, Android, Chrome, Firefox, Opera, and Brave. Despite the support, WebAuthn has gained little more than niche status to date, in part because of the lack of support from the industry’s most important platform.

Now, the standard finally has the potential to blossom into the ubiquitous technology many have hoped it would become. That’s because of last week’s release of iOS and iPadOS 13.3, which provide native support for the standard for the first time.

More about that later. First, a timeline of WebAuthn and some background.

In the beginning

The handheld security keys at the heart of the U2F standard helped prepare the world for a new, superior form of MFA. When plugged into a USB slot or slid over an NFC reader, the security key transmitted “cryptographic assertions” that were unique to that key. Unlike the one-time passwords used by MFA authenticator apps, the assertions transmitted by these keys couldn’t be copied or phished or replayed.

U2F-based authentication was also more secure than one-time passwords because, unlike the authenticator apps running on phones, the security keys couldn’t be hacked. It was also more reliable since keys didn’t need to access an Internet connection. A two-year study of more than 50,000 Google employees a few years ago concluded that cryptographically based Security Keys beat out smartphones and most other forms of two-factor verification.

U2F, in turn, gave way to WebAuthn. The new standard still allows cryptographic keys that connect by USB or NFC. It also allows users to provide an additional factor of authentication using fingerprint readers or facial scanners built into smartphones, laptops, and other types of hardware the user already owns.

A plethora of app, OS, and site developers soon built WebAuthn into their authentication flows. The result: even when a password was exposed through user error or a database breach, accounts remained protected unless a hacker with the password passed the very high bar of also obtaining the key, fingerprint, or facial scan.
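
To make that flow concrete, here is a minimal sketch of what the browser side of a WebAuthn registration looks like using the standard navigator.credentials API. The relying-party name, user details, and challenge handling below are placeholders; in a real deployment the challenge comes from the server, and the resulting credential is sent back to the server for verification and storage.

```typescript
// Minimal browser-side sketch of WebAuthn credential registration.
// Placeholder values throughout—real sites issue the challenge server-side
// and verify the returned attestation there.
async function registerSecurityKey(): Promise<void> {
  const challenge = crypto.getRandomValues(new Uint8Array(32)); // normally server-issued

  const credential = await navigator.credentials.create({
    publicKey: {
      challenge,
      rp: { name: "Example Site" },                       // relying party (placeholder)
      user: {
        id: new TextEncoder().encode("user-1234"),        // placeholder user handle
        name: "user@example.com",
        displayName: "Example User",
      },
      pubKeyCredParams: [{ type: "public-key", alg: -7 }], // ES256
      authenticatorSelection: { userVerification: "preferred" },
      timeout: 60_000,
    },
  });

  // The credential ID and public key would be sent to the server here.
  const cred = credential as PublicKeyCredential | null;
  console.log("Created credential ID:", cred?.id);
}
```

Because the authenticator signs a challenge bound to the site’s origin, the resulting assertion can’t simply be replayed or phished the way a one-time password can.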

As Google, Microsoft, key maker Yubico, and other WebAuthn partners threw their support behind the new protocol, Apple remained firmly on the sidelines. The lack of support in macOS wasn’t ideal, but third-party support from the Chrome and Firefox browsers still gave users an easy way to use security keys. Apple’s inaction was much more problematic for iPhone and iPad users. Not only did the company provide no native support for the standard, it was also slow to allow access to near-field communication, a wireless communication channel that makes it easy for security keys to communicate with iPhones.

Poor usability and questionable security

Initially, the only way iPhones and iPads could use WebAuthn was with a Bluetooth-enabled dongle like Google’s Titan security key. It worked—technically—but it came with deal-breaking limitations. For one, it worked solely with Google properties. So much for a ubiquitous standard. Another dealbreaker—for most people, anyway—the installation of a special app and the process of pairing the keys to an iPhone or iPad was cumbersome at best.

Then in May, Google disclosed a vulnerability in the Bluetooth Titan. That vulnerability made it possible for nearby hackers to obtain the authentication signal as it was transmitted to an iPhone or other device. The resulting recall confirmed many security professionals’ belief that Bluetooth lacked the security needed for MFA and other sensitive functions. The difficulty of using Bluetooth-based dongles, combined with the perception that they were less secure, made them a non-starter for most users.

In September, engineers from authentication key maker Yubico built a developer kit that added third-party programming interfaces for WebAuthn. The effort was valiant, but it was also kludgey, so much so that the fledgling Brave browser was the only one to make use of it. Even worse, Apple’s steadfast resistance to opening up third-party access to NFC meant that the third-party support was limited to physical security keys that connected through the Lightning port or Bluetooth.

NFC connections and biometrics weren’t available. Worst of all, the support didn’t work with Google, Facebook, Twitter, and most other big sites.
