
COVID data manager investigated, raided for using publicly available password


Florida police said a raid they conducted Monday on the Tallahassee home of Rebekah Jones, a data scientist the state fired from her job in May, was part of an investigation into unauthorized access to a state emergency-responder system. It turns out, however, that not only do all state employees with access to that system share a single username and password, but those credentials are also publicly available on the Internet for anyone to read.

The background

Jones on Monday shared a video of the police raid on her house as part of a Twitter thread in which she explained the police were serving a search warrant on her house following a complaint from the Department of Health. That complaint, in turn, was related to a message sent to Florida emergency responders back in November.

About 1,700 members of Florida’s emergency-response team received the communication on November 10, according to the affidavit (PDF) cited in the search warrant for Jones’ home. The message urged recipients to “speak up before another 17,000 people are dead. You know this is wrong. You don’t have to be a part of this. Be a hero. Speak out before it’s too late.”

That unauthorized message was sent to the contact list for Florida’s Emergency Support Function 8, or ESF-8, one of 18 groups of Florida state emergency-response personnel. ESF-8 is led by the Florida Department of Health and coordinates public health response, including “triage, treatment, and transportation” across multiple agencies. All users in the group share the same username and password, the affidavit confirms. Investigators looked at system logs and identified an IPv6 address associated with the message, which they then determined to be connected to Jones’ house.

After the raid on her home, Jones gave multiple media interviews in which she repeatedly denied having anything to do with the message. To CNN, for example, she said, “I’m not a hacker,” and added that neither the tone nor the content of the message matches her communication style.

(In)security

In November, when the message went out, state DOH spokesman Jason Mahon declined to answer the Tampa Bay Times’ questions about “what, if anything, had been done to better secure the emergency alert system against future hacks, nor whether there have been other instances where the system had been hacked.”

It now seems the Times’ question may have gone unanswered because the Florida Department of Health had no answer, other than to continue bad security practices.

“All users assigned to [ESF-8 tools] share the same username and password,” the affidavit cited in the search warrant confirmed. That set of login credentials apparently does not change when users resign or are fired; instead, “once [employees] are no longer associated with ESF8 they are no longer authorized to access the multi-user group.”

That set of account credentials that all users share is part of a logistics operation manual that is publicly searchable and accessible on the Florida DOH’s website.

A redacted screenshot from a publicly available PDF showing the login information for ESF-8 communications systems. This is the kind of information you might tack up in your cubicle—not the kind of information you want all over the Internet.

A link to the manual was shared in a Reddit thread discussing the raid on Jones’ house, which multiple Ars readers flagged to us. (Thanks!) We are choosing not to share a direct link, but as of publication time, the link was still live and working.

The document is a guideline for ESF-8 logistics staff. The first section includes a list of tasks management needs to complete within given time periods. The second section includes a list of system login information along with points of contact for each of those systems if they should be needed. It’s the kind of information anyone who has worked in an administrative or support role for any organization has likely had on hand—for internal use only.

Ars contacted the Florida Department of Health about the document prior to publication; officials did not immediately provide a response. We will update this story if we receive additional comment.

Additional reporting contributed by Timothy Lee.

Zoom lied to users about end-to-end encryption for years, FTC says

Zoom founder and CEO Eric Yuan speaks before the Nasdaq opening bell ceremony on April 18, 2019, in New York City as the company announced its IPO.

Zoom has agreed to upgrade its security practices in a tentative settlement with the Federal Trade Commission, which alleges that Zoom lied to users for years by claiming it offered end-to-end encryption.

“[S]ince at least 2016, Zoom misled users by touting that it offered ‘end-to-end, 256-bit encryption’ to secure users’ communications, when in fact it provided a lower level of security,” the FTC said today in the announcement of its complaint against Zoom and the tentative settlement. Despite promising end-to-end encryption, the FTC said that “Zoom maintained the cryptographic keys that could allow Zoom to access the content of its customers’ meetings, and secured its Zoom Meetings, in part, with a lower level of encryption than promised.”

The FTC complaint says that Zoom claimed to offer end-to-end encryption in its June 2016 and July 2017 HIPAA compliance guides, which were intended for health-care industry users of the video conferencing service. Zoom also claimed it offered end-to-end encryption in a January 2019 white paper, in an April 2017 blog post, and in direct responses to inquiries from customers and potential customers, the complaint said.

“In fact, Zoom did not provide end-to-end encryption for any Zoom Meeting that was conducted outside of Zoom’s ‘Connecter’ product (which are hosted on a customer’s own servers), because Zoom’s servers—including some located in China—maintain the cryptographic keys that would allow Zoom to access the content of its customers’ Zoom Meetings,” the FTC complaint said.

The FTC announcement said that Zoom also “misled some users who wanted to store recorded meetings on the company’s cloud storage by falsely claiming that those meetings were encrypted immediately after the meeting ended. Instead, some recordings allegedly were stored unencrypted for up to 60 days on Zoom’s servers before being transferred to its secure cloud storage.”

To settle the allegations, “Zoom has agreed to a requirement to establish and implement a comprehensive security program, a prohibition on privacy and security misrepresentations, and other detailed and specific relief to protect its user base, which has skyrocketed from 10 million in December 2019 to 300 million in April 2020 during the COVID-19 pandemic,” the FTC said. (The 10 million and 300 million figures refer to the number of daily participants in Zoom meetings.)

No compensation for affected users

The settlement is supported by the FTC’s Republican majority, but Democrats on the commission objected because the agreement doesn’t provide compensation to users.

“Today, the Federal Trade Commission has voted to propose a settlement with Zoom that follows an unfortunate FTC formula,” FTC Democratic Commissioner Rohit Chopra said. “The settlement provides no help for affected users. It does nothing for small businesses that relied on Zoom’s data protection claims. And it does not require Zoom to pay a dime. The Commission must change course.”

Under the settlement, “Zoom is not required to offer redress, refunds, or even notice to its customers that material claims regarding the security of its services were false,” Democratic Commissioner Rebecca Kelly Slaughter said. “This failure of the proposed settlement does a disservice to Zoom’s customers, and substantially limits the deterrence value of the case.” While the settlement imposes security obligations, Slaughter said it includes no requirements that directly protect user privacy.

Zoom is separately facing lawsuits from investors and consumers that could eventually lead to financial settlements.

The Zoom/FTC settlement doesn’t actually mandate end-to-end encryption, but Zoom last month announced it is rolling out end-to-end encryption in a technical preview to get feedback from users. The settlement does require Zoom to implement measures “(a) requiring Users to secure their accounts with strong, unique passwords; (b) using automated tools to identify non-human login attempts; (c) rate-limiting login attempts to minimize the risk of a brute force attack; and (d) implementing password resets for known compromised Credentials.”
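The rate-limiting requirement in item (c) is a standard defense against brute-force login attacks. A minimal fixed-window limiter looks something like the sketch below; the attempt threshold and window length here are illustrative choices, not values taken from the settlement, and a production system would persist counters rather than keeping them in memory.

```python
import time
from collections import defaultdict

class LoginRateLimiter:
    """Fixed-window rate limiter for login attempts (illustrative values)."""

    def __init__(self, max_attempts=5, window_seconds=60):
        self.max_attempts = max_attempts
        self.window_seconds = window_seconds
        self.attempts = defaultdict(list)  # username -> timestamps of recent attempts

    def allow(self, username, now=None):
        """Return True if another login attempt is permitted for this user."""
        now = time.time() if now is None else now
        window_start = now - self.window_seconds
        # Keep only attempts inside the current window.
        recent = [t for t in self.attempts[username] if t > window_start]
        self.attempts[username] = recent
        if len(recent) >= self.max_attempts:
            return False  # throttle: too many recent attempts
        recent.append(now)
        return True
```

In practice, a limiter like this sits in front of the credential check, so an attacker cycling through a password list gets cut off after a handful of tries per window instead of making millions of guesses.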

FTC calls ZoomOpener unfair and deceptive

The FTC complaint and settlement also cover Zoom’s controversial deployment of the ZoomOpener Web server that bypassed Apple security protocols on Mac computers. Zoom “secretly installed” the software as part of an update to Zoom for Mac in July 2018, the FTC said.

“The ZoomOpener Web server allowed Zoom to automatically launch and join a user to a meeting by bypassing an Apple Safari browser safeguard that protected users from a common type of malware,” the FTC said. “Without the ZoomOpener Web server, the Safari browser would have provided users with a warning box, prior to launching the Zoom app, that asked users if they wanted to launch the app.”

The software “increased users’ risk of remote video surveillance by strangers” and “remained on users’ computers even after they deleted the Zoom app, and would automatically reinstall the Zoom app—without any user action—in certain circumstances,” the FTC said. The FTC alleged that Zoom’s deployment of the software without adequate notice or user consent violated US law banning unfair and deceptive business practices.

Amid controversy in July 2019, Zoom issued an update to completely remove the Web server from its Mac application, as we reported at the time.

Zoom agrees to security monitoring

The proposed settlement is subject to public comment for 30 days, after which the FTC will vote on whether to make it final. The 30-day comment period will begin once the settlement is published in the Federal Register. The FTC case and the relevant documents can be viewed here.

The FTC announcement said Zoom agreed to take the following steps:

  • Assess and document on an annual basis any potential internal and external security risks and develop ways to safeguard against such risks;
  • Implement a vulnerability management program; and
  • Deploy safeguards such as multi-factor authentication to protect against unauthorized access to its network; institute data deletion controls; and take steps to prevent the use of known compromised user credentials.

The data deletion part of the settlement requires that all copies of data identified for deletion be deleted within 31 days.

Zoom will have to notify the FTC of any data breaches and will be prohibited “from making misrepresentations about its privacy and security practices, including about how it collects, uses, maintains, or discloses personal information; its security features; and the extent to which users can control the privacy or security of their personal information,” the FTC announcement said.

Zoom will have to review all software updates for security flaws and make sure that updates don’t hamper third-party security features. The company will also have to get third-party assessments of its security program once the settlement is finalized and once every two years after that. That requirement lasts for 20 years.

Zoom issued the following statement about today’s settlement:

The security of our users is a top priority for Zoom. We take seriously the trust our users place in us every day, particularly as they rely on us to keep them connected through this unprecedented global crisis, and we continuously improve our security and privacy programs. We are proud of the advancements we have made to our platform, and we have already addressed the issues identified by the FTC. Today’s resolution with the FTC is in keeping with our commitment to innovating and enhancing our product as we deliver a secure video communications experience.

Study shows which messengers leak your data, drain your battery, and more


Link previews are a ubiquitous feature found in just about every chat and messaging app, and with good reason. They make online conversations easier by providing images and text associated with the file that’s being linked.

Unfortunately, they can also leak our sensitive data, consume our limited bandwidth, drain our batteries, and, in one case, expose links in chats that are supposed to be end-to-end encrypted. Among the worst offenders, according to research published on Monday, were messengers from Facebook, Instagram, LinkedIn, and Line. More about that shortly. First a brief discussion of previews.

When a sender includes a link in a message, the app displays it in the conversation along with text (usually a headline) and images that accompany the link.

For this to happen, the app itself—or a proxy designated by the app—has to visit the link, open the file there, and survey what’s in it. This can open users to attacks. The most severe are those that can download malware. Other forms of malice might be forcing an app to download files so big they cause the app to crash, drain batteries, or consume limited amounts of bandwidth. And in the event the link leads to private materials—say, a tax return posted to a private OneDrive or DropBox account—the app server has an opportunity to view and store it indefinitely.
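One straightforward mitigation for the resource-exhaustion risk is to cap how many bytes the previewer will ever read, which is the approach the better-behaved apps in the study take with their 15MB-50MB limits. Below is a generic sketch of that pattern; it is not any app's actual implementation, and the 1MB cap is an arbitrary example value.

```python
import io

MAX_PREVIEW_BYTES = 1 * 1024 * 1024  # 1 MB cap (illustrative; apps in the study used 15-50 MB)

def read_capped(stream, cap=MAX_PREVIEW_BYTES, chunk_size=64 * 1024):
    """Read at most `cap` bytes from a response-like stream, then stop.

    Prevents a malicious link from making the previewer download a
    multi-gigabyte file, the behavior observed in Facebook Messenger
    and Instagram.
    """
    buf = io.BytesIO()
    remaining = cap
    while remaining > 0:
        chunk = stream.read(min(chunk_size, remaining))
        if not chunk:  # end of stream before hitting the cap
            break
        buf.write(chunk)
        remaining -= len(chunk)
    return buf.getvalue()
```

A previewer built this way reads just enough of the page to extract a title and thumbnail, then closes the connection, no matter how large the linked file actually is.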

The researchers behind Monday’s report, Talal Haj Bakry and Tommy Mysk, found that Facebook Messenger and Instagram were the worst offenders. As the chart below shows, both apps download and copy a linked file in its entirety—even if it’s gigabytes in size. Again, this may be a concern if the file is something the users want to keep private.

Link Previews: Instagram servers download any link sent in Direct Messages even if it’s 2.6GB.

It’s also problematic because the apps can consume vast amounts of bandwidth and battery reserves. Both apps also run any JavaScript contained in the link. That’s a problem because users have no way of vetting the security of JavaScript and can’t expect messengers to have the same exploit protections modern browsers have.

Link Previews: How hackers can run any JavaScript code on Instagram servers.

Haj Bakry and Mysk reported their findings to Facebook, and the company said that both apps work as intended. LinkedIn performed only slightly better. Its only difference was that, rather than copying files of any size, it copied only the first 50 megabytes.

Meanwhile, when the Line app opens an encrypted message and finds a link, it appears to send the link to the Line server to generate a preview. “We believe that this defeats the purpose of end-to-end encryption, since LINE servers know all about the links that are being sent through the app, and who’s sharing which links to whom,” Haj Bakry and Mysk wrote.

Discord, Google Hangouts, Slack, Twitter, and Zoom also copy files, but they cap the amount of data at anywhere from 15MB to 50MB. The chart below provides a comparison of each app in the study.

Talal Haj Bakry and Tommy Mysk

All in all, the study is good news because it shows that most messaging apps are doing things right. For instance, Signal, Threema, TikTok, and WeChat all give the users the option of receiving no link preview. For truly sensitive messages and users who want as much privacy as possible, this is the best setting. Even when previews are provided, these apps are using relatively safe means to render them.

Still, Monday’s post is a good reminder that private messages aren’t always, well, private.

“Whenever you’re building a new feature, always keep in mind what sort of privacy and security implications it may have, especially if this feature is going to be used by thousands or even millions of people around the world,” the researchers wrote. “Link previews are a nice feature that users generally benefit from, but here we’ve showcased the wide range of problems this feature can have when privacy and security concerns aren’t carefully considered.”

Undocumented backdoor that covertly takes snapshots found in kids’ smartwatch


A popular smartwatch designed exclusively for children contains an undocumented backdoor that makes it possible for someone to remotely capture camera snapshots, wiretap voice calls, and track locations in real time, a researcher said.

The X4 smartwatch is marketed by Xplora, a Norway-based seller of children’s watches. The device, which sells for about $200, runs on Android and offers a range of capabilities, including the ability to make and receive voice calls to parent-approved numbers and to send an SOS broadcast that alerts emergency contacts to the location of the watch. A separate app that runs on the smartphones of parents allows them to control how the watches are used and receive warnings when a child has strayed beyond a preset geographic boundary.

But that’s not all

It turns out that the X4 contains something else: a backdoor that went undiscovered until some impressive digital sleuthing. The backdoor is activated by sending an encrypted text message. Harrison Sand, a researcher at Norwegian security company Mnemonic, said that commands exist for surreptitiously reporting the watch’s real-time location, taking a snapshot and sending it to an Xplora server, and making a phone call that transmits all sounds within earshot.

Sand also found that 19 of the apps that come pre-installed on the watch are developed by Qihoo 360, a security company and app maker located in China. A Qihoo 360 subsidiary, 360 Kids Guard, also jointly designed the X4 with Xplora and manufactures the watch hardware.

“I wouldn’t want that kind of functionality in a device produced by a company like that,” Sand said, referring to the backdoor and Qihoo 360.

In June, Qihoo 360 was placed on a US Commerce Department sanctions list. The rationale: ties to the Chinese government made the company likely to engage in “activities contrary to the national security or foreign policy interests of the United States.” Qihoo 360 declined to comment for this post.

Patch on the way

The existence of an undocumented backdoor in a watch from a country with a known record of espionage hacks is concerning. At the same time, this particular backdoor has limited applicability. To make use of the functions, someone would need to know both the phone number assigned to the watch (it has a slot for a SIM card from a mobile phone carrier) and the unique encryption key hardwired into each device.

In a statement, Xplora said obtaining both the key and phone number for a given watch would be difficult. The company also said that even if the backdoor was activated, obtaining any collected data would be hard, too. The statement read:

We want to thank you for bringing a potential risk to our attention. Mnemonic is not providing any information beyond that they sent you the report. We take any potential security flaw extremely seriously.

It is important to note that the scenario the researchers created requires physical access to the X4 watch and specialized tools to secure the watch’s encryption key. It also requires the watch’s private phone number. The phone number for every Xplora watch is determined when it is activated by the parents with a carrier, so no one involved in the manufacturing process would have access to it to duplicate the scenario the researchers created.

As the researchers made clear, even if someone with physical access to the watch and the skill to send an encrypted SMS activates this potential flaw, the snapshot photo is only uploaded to Xplora’s server in Germany and is not accessible to third parties. The server is located in a highly-secure Amazon Web Services environment.

Only two Xplora employees have access to the secure database where customer information is stored and all access to that database is tracked and logged.

This issue the testers identified was based on a remote snapshot feature included in initial internal prototype watches for a potential feature that could be activated by parents after a child pushes an SOS emergency button. We removed the functionality for all commercial models due to privacy concerns. The researcher found some of the code was not completely eliminated from the firmware.

Since being alerted, we have developed a patch for the Xplora 4, which is not available for sale in the US, to address the issue and will push it out prior to 8:00 a.m. CET on October 9. We conducted an extensive audit since we were notified and have found no evidence of the security flaw being used outside of the Mnemonic testing.

An Xplora spokesman said the company has sold about 100,000 X4 smartwatches to date. The company is in the process of rolling out the X5. It’s not yet clear if it contains similar backdoor functionality.

Heroic measures

Sand discovered the backdoor through some impressive reverse engineering. He started with a modified USB cable that he soldered onto pins exposed on the back of the watch. Using an interface for updating the device firmware, he was able to download the existing firmware off the watch. This allowed him to inspect the insides of the watch, including the apps and other various code packages that were installed.

A modified USB cable attached to the back of an X4 watch.
Mnemonic

One package that stood out was titled “Persistent Connection Service.” It starts as soon as the device is turned on and iterates through all the installed applications. As it queries each application, it builds a list of intents (Android’s inter-app messaging mechanism) it can call to communicate with each app.

Sand’s suspicions were further aroused when he found intents with the following names:

  • WIRETAP_INCOMING
  • WIRETAP_BY_CALL_BACK
  • COMMAND_LOG_UPLOAD
  • REMOTE_SNAPSHOT
  • SEND_SMS_LOCATION

After more poking around, Sand figured out the intents were activated using SMS text messages that were encrypted with the hardwired key. System logs showed him that the key was stored on a flash chip, so he dumped the contents and obtained it—“#hml;Fy/sQ9z5MDI=$” (quotation marks not included). Reverse engineering also allowed the researcher to figure out the syntax required to activate the remote snapshot function.
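The overall pattern Sand describes is a dispatcher: a secret baked into the device decides whether an incoming SMS triggers a command handler. The sketch below illustrates only that gating pattern. The intent names are the ones Sand recovered, but the key value and the keyed-hash check are hypothetical stand-ins; Mnemonic did not publish the watch's actual encryption scheme.

```python
import hmac
import hashlib

# Two of the intent names recovered by Sand's reverse engineering.
COMMANDS = {
    "REMOTE_SNAPSHOT": lambda: "snapshot taken",
    "SEND_SMS_LOCATION": lambda: "location sent",
}

# Hypothetical stand-in for the hardwired per-device key; the real
# firmware uses its own (unpublished) encryption scheme, not HMAC.
DEVICE_KEY = b"example-device-key"

def handle_sms(command, tag):
    """Run a command only if the tag was produced with the hardwired key.

    Illustrates the gating pattern only: a single shared secret on the
    device decides whether an incoming SMS activates a handler, with no
    visible indication to the wearer.
    """
    expected = hmac.new(DEVICE_KEY, command.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, tag):
        return None  # wrong key: the message is silently ignored
    handler = COMMANDS.get(command)
    return handler() if handler else None
```

The weakness of this design is exactly what Sand exploited: once the one hardwired secret is extracted from the flash chip, every gated command is open to whoever holds it.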

“Sending the SMS triggered a picture to be taken on the watch, and it was immediately uploaded to Xplora’s server,” Sand wrote. “There was zero indication on the watch that a photo was taken. The screen remained off the entire time.”

Sand said he didn’t activate the functions for wiretapping or reporting locations, but with additional time, he said, he’s confident he could have.

As both Sand and Xplora note, exploiting this backdoor would be difficult, since it requires knowledge of both the unique factory-set encryption key and the phone number assigned to the watch. For that reason, there’s no reason for people who own a vulnerable device to panic.

Still, it’s not beyond the realm of possibility that the key could be obtained by someone with ties to the manufacturer. And while phone numbers aren’t usually published, they’re not exactly private, either.

The backdoor underscores the kinds of risks posed by the increasing number of everyday devices that run on firmware that can’t be independently inspected without the kinds of heroic measures employed by Sand. While the chances of this particular backdoor being used are low, people who own an X4 would do well to ensure their device installs the patch as soon as practical.

Hong Kong downloads of Signal surge as residents fear crackdown


The secure chat app Signal has become the most downloaded app in Hong Kong on both Apple’s and Google’s app stores, Bloomberg reports, citing data from App Annie. The surging interest in encrypted messaging comes days after the Chinese government in Beijing passed a new national security law that reduced Hong Kong’s autonomy and could undermine its traditionally strong protections for civil liberties.

The 1997 handover of Hong Kong from the United Kingdom to China came with a promise that China would respect Hong Kong’s autonomy for 50 years following the handover. Under the terms of that deal, Hong Kong residents should have continued to enjoy greater freedom than people on the mainland until 2047. But recently, the mainland government has appeared to renege on that deal.

Civil liberties advocates see the national security law approved last week as a major blow to freedom in Hong Kong. The New York Times reports that “the four major offenses in the law—separatism, subversion, terrorism and collusion with foreign countries—are ambiguously worded and give the authorities extensive power to target activists who criticize the party, activists say.” Until now, Hong Kongers faced trial in the city’s separate, independent judiciary. The new law opens the door for dissidents to be tried in mainland courts with less respect for civil liberties or due process.

This has driven heightened interest among Hong Kongers in secure communication technologies. Signal offers end-to-end encryption and is viewed by security experts as the gold standard for secure mobile messaging. It has been endorsed by NSA whistleblower Ed Snowden.

One of Signal’s selling points is that it minimizes data collection on its users. When rival Telegram announced it would no longer honor data requests from Hong Kong courts, Signal responded that it didn’t have any user data to hand over in the first place.

Bloomberg has also reported on the surging adoption of VPN software in Hong Kong as residents fear government surveillance of their Web browsing.

Researchers say online voting tech used in 5 states is fatally flawed

Voting machines are shown at a polling location on June 9, 2020 in West Columbia, South Carolina.
Sean Rayford/Getty Images

OmniBallot is election software that is used by dozens of jurisdictions in the United States. In addition to delivering ballots and helping voters mark them, it includes an option for online voting. At least three states—West Virginia, Delaware, and New Jersey—have used the technology or are planning to do so in an upcoming election. Four local jurisdictions in Oregon and Washington state use the online voting feature as well. But new research from a pair of computer scientists, MIT’s Michael Specter and the University of Michigan’s Alex Halderman, finds that the software has inadequate security protections, creating a serious risk to election integrity.

Democracy Live, the company behind OmniBallot, defended its software in an email response to Ars Technica. “The report did not find any technical vulnerabilities in OmniBallot,” wrote Democracy Live CEO Bryan Finney.

This is true in a sense—the researchers didn’t find any major bugs in the OmniBallot code. But it also misses the point of their analysis. The security of software depends not only on the software itself but also on the security of the environment in which it runs. For example, it’s impossible to keep voting software secure if it runs on a computer infected with malware. And millions of PCs in the United States are infected with malware.

The issue has particular urgency right now because the ongoing COVID-19 pandemic is forcing election officials to make significant changes to election procedures. Right now, most jurisdictions using the OmniBallot software don’t use its “electronic ballot delivery” feature. But enabling the feature would require little more than a configuration change. There’s a risk that election officials, under pressure to make remote voting easier, will decide to enable the software’s online voting feature for this November’s general election.

How OmniBallot works

Experimenting with a live election system would be unethical and likely illegal. Instead, Specter and Halderman obtained a copy of the OmniBallot software, reverse-engineered it, and then created new server software that mimicked the behavior of the real server. This allowed them to experiment with the software without risking interference with a real election.

OmniBallot offers a number of different capabilities that state election officials have the option to offer to voters. The most basic is a blank ballot delivery feature that will provide a voter with a PDF ballot that can be printed out and mailed back to the polling place.

Jurisdictions can also offer a ballot-marking feature, which will mark a ballot on the voter’s behalf before it’s printed out. This can enable blind voters to fill out a ballot independently. It can also prevent overvotes (voting for two or more candidates) and warn voters about undervotes (failing to vote in a race).
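The overvote/undervote logic a ballot-marking tool applies is simple to state: too many selections in a race invalidates it, zero selections earns a warning. A minimal sketch of that check, with illustrative function and label names:

```python
def check_race(selections, max_choices=1):
    """Classify one ballot race as 'ok', 'overvote', or 'undervote'.

    Mirrors the checks a ballot-marking tool can make: an overvote
    (more candidates picked than allowed) invalidates the race, while
    an undervote (no pick at all) only warrants a warning.
    """
    if len(selections) > max_choices:
        return "overvote"
    if len(selections) == 0:
        return "undervote"
    return "ok"
```

A marking tool runs a check like this per race before generating the printable ballot, so the voter is prompted to fix an overvote or confirm an intentional undervote.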

But Specter and Halderman argue that this capability comes with some added risks. Malicious software could be programmed to switch votes some fraction of the time. Theoretically, voters are supposed to check that the votes are correct before mailing in their ballot, but research suggests voters are lax about doing so. One study by Halderman and others found that only 6.6 percent of voters in a realistic mock election reported a changed vote to election supervisors.

By default, the software generates the marked ballot PDF on an OmniBallot server, not on the user’s own device. This creates an unnecessary risk to the privacy of the voter’s ballot, Specter and Halderman argue, since it means that Democracy Live gets an unnecessary copy of the voter’s votes.

Fortunately, Democracy Live also offers an option for client-side ballot marking. Andrew Appel, a computer scientist at Princeton, told Ars that this option was added at the insistence of California officials who objected to server-side ballot marking. When this option is chosen by election administrators, the ballot is marked on the user’s own device, without sharing the data with Democracy Live’s servers. The computer scientists recommend that all jurisdictions using OmniBallot’s ballot marking feature switch to the client-side version of the software.

The problems with online voting

While there are some security concerns with ballot-marking software, the researchers say that these problems pale in comparison to security vulnerabilities of OmniBallot’s “electronic ballot delivery” system.

The fundamental problem is that the complexity and opacity of online voting systems creates numerous opportunities for a hacker to tamper with a ballot during the submission process. Malware on the client device could modify the ballot before it’s transmitted to Democracy Live’s servers. OmniBallot is built on Amazon Web Services using JavaScript libraries delivered by Google and Cloudflare. So hackers or malicious insiders at any of these companies could potentially alter ballots if they had access to one of these companies’ systems.

And the nature of online voting means there’s no reliable way for a voter to verify that a ballot was transmitted correctly. Software engineers have developed theoretical designs for voting systems with end-to-end security. These systems use sophisticated cryptography to enable voters to cryptographically verify that their vote has been counted correctly. But Democracy Live doesn’t do anything like that. In their paper, Specter and Halderman describe how an attacker could exploit the lack of end-to-end verification.
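The core idea behind end-to-end verifiability is that the voter gets a receipt they can later check against a public record. The toy below shows only that receipt idea with a hash commitment; real E2E systems use homomorphic encryption, mixnets, and zero-knowledge proofs so that ballots stay secret while still being verifiable, which a bare hash does not provide. All names here are illustrative.

```python
import hashlib
import secrets

def commit_ballot(ballot):
    """Produce a commitment to a ballot plus the nonce needed to open it.

    The commitment can be published; without the nonce it reveals
    nothing practical about the ballot's contents.
    """
    nonce = secrets.token_hex(16)
    digest = hashlib.sha256((nonce + ballot).encode()).hexdigest()
    return digest, nonce

def verify_ballot(digest, nonce, ballot):
    """Voter-side check that a published commitment opens to their ballot."""
    return hashlib.sha256((nonce + ballot).encode()).hexdigest() == digest
```

Even this toy illustrates the property OmniBallot lacks: if the client silently swapped the voter's selections, the published commitment would no longer verify against what the voter believes they cast.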

“The web app would show a ballot containing the selections the voter intended, but the ballot that got cast would have selections chosen by the attacker,” they write. “The attack would execute on the client, with no unusual interactions with Democracy Live, so there would be no way for the company (or election officials) to discover it.”

Auditing doesn’t fix the problem

Democracy Live conducts post-election audits using Amazon’s CloudTrail logging service to verify that no Democracy Live employees abused their access to company servers. These checks could detect some forms of election tampering, but Specter and Halderman point out that they are far from foolproof.

These methods wouldn’t detect any attacks executed from the client side. If malware on a user’s PC modified the user’s ballot before sending it to Democracy Live’s servers, that wouldn’t show up in the CloudTrail logs. If someone with access to Google or Cloudflare servers delivered malicious JavaScript libraries to OmniBallot users, that wouldn’t show up in AWS logs. Someone with administrative access to Amazon’s servers might be able to modify Democracy Live’s software in a way that wouldn’t show up in the logs.
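The blind spot is structural: a server-side log can only record what reached the server. A toy illustration (not CloudTrail’s real schema) of why an honest submission and one altered by client-side malware look identical in the logs:

```python
# Toy illustration (not CloudTrail's real event schema): a server-side
# audit can only examine requests as the server received them.
def log_entry(voter_id: str, ballot: str) -> dict:
    """What a server-side log records about one ballot submission."""
    return {"event": "SubmitBallot", "voter_id": voter_id, "ballot": ballot}

honest = log_entry("v001", "president=Alice")

# This voter also intended to pick Alice, but malware swapped the choice
# before the request left the device. The server never sees the intent.
tampered = log_entry("v002", "president=Mallory")

# Both entries have exactly the same shape and pass the same structural
# checks, so no server-side audit can tell them apart.
print(set(honest) == set(tampered))  # True
```

This is why the researchers argue that server-side auditing, however diligent, cannot substitute for end-to-end verification.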

Of course, most of these attacks wouldn’t be trivial to pull off. Google, Amazon, and Cloudflare are three of the most sophisticated software companies in the world and take elaborate precautions to defend their systems. The audit I linked to above is from an election for the King County Conservation District. It’s far-fetched that anyone would go to so much trouble to attack such a low-stakes election.

But sophisticated attacks would become far more plausible if the software were used to elect members of Congress and even the president. In that case, we can imagine foreign governments like Russia or China being willing to invest significant resources to compromise election results in a way that’s difficult to detect. We don’t know the full extent of these countries’ offensive capabilities, of course. But it’s reasonable to think that they’d be able to compromise OmniBallot’s software in ways that wouldn’t be revealed in a post-election audit.

To be fair to Democracy Live, the issues the researchers highlighted aren’t unique to the OmniBallot software. Rather, there’s an overwhelming consensus among computer security experts that Internet-based voting is a bad idea in general. Halderman and Specter cite a 2018 report from the National Academies of Sciences, Engineering, and Medicine that found that “no known technology guarantees the secrecy, security, and verifiability of a marked ballot transmitted over the Internet.”

Almost 8,000 could be affected by federal emergency loan data breach


Enlarge / Small Business Administrator Jovita Carranza is flanked by Donald Trump and Secretary of Treasury Steve Mnuchin on April 2, 2020.

Almost 8,000 business owners who applied for a loan from the Small Business Administration may have had their personal information exposed to other applicants, the SBA admitted on Tuesday.

The breach relates to a long-standing SBA program called Economic Injury Disaster Loans (EIDL). It has traditionally been used to aid owners whose businesses are disrupted by hurricanes, tornadoes, or other disasters. It was recently expanded by Congress in the $2.2 trillion CARES Act. In addition to loans, the law authorized grants of up to $10,000 that don’t need to be paid back.

The EIDL program is separate from the larger Paycheck Protection Program that was also part of the CARES Act. The SBA says that PPP applicants were not affected by the breach.

A Trump administration official described the problem to CNBC:

The official said that in order to access other business owners’ information, small business applicants must have been in the loan application portal. If the user attempted to hit the page back button, he or she may have seen information that belonged to another business owner, not their own.
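The SBA hasn’t disclosed the root cause, but back-button leaks of other users’ data are classically caused by caching sensitive responses, whether in the browser or in a shared intermediary. The standard defense, assuming that’s what happened here, is to mark any page containing personal data as uncacheable. A minimal WSGI sketch:

```python
# Hypothetical handler (not the SBA's actual code): serve a page with
# applicant PII under headers that forbid any caching of the response.
def application(environ, start_response):
    """Serve an applicant's page with headers forbidding any caching."""
    headers = [
        ("Content-Type", "text/html; charset=utf-8"),
        # no-store: never cache; private: never serve from a shared cache.
        ("Cache-Control", "no-store, no-cache, private, max-age=0"),
        ("Pragma", "no-cache"),  # belt-and-braces for old HTTP/1.0 caches
    ]
    start_response("200 OK", headers)
    return [b"<p>Applicant details (PII) go here.</p>"]
```

With `no-store` set, hitting the back button forces a fresh, authenticated request rather than replaying a cached page that might belong to someone else’s session.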

The SBA says it discovered the flaw on March 25 and notified affected users. Last Friday, one victim posted a copy of the paper letter she received about the breach. The letter stated that personally identifiable information—including Social Security numbers, addresses, dates of birth, and financial data—may have been exposed. It also said that, as of last week, there was no sign of the data being misused.

The SBA says that it immediately disabled the portion of its website that was exposing applicant data, fixed the problem, and re-launched the website. Affected businesses have been offered a year of free credit monitoring.

Overwhelming demand

The SBA has struggled to deal with demand for EIDL loans. Before the coronavirus crisis, small businesses could qualify for up to $2 million in disaster loans.

But with millions of firms seeking assistance, the SBA was forced to limit the loans to as little as $10,000. Despite the limits, the SBA website currently states that it is not accepting new applications due to a lack of funds.

As of April 19, the SBA had approved almost 27,000 EIDL loans valued at $5.6 billion. Another 755,000 businesses received EIDL grants worth a total of $3.3 billion. The Trump administration official told CNBC that 4 million business owners had applied for assistance worth $383 billion—far more than the $17 billion allocated for the program.

The PPP has also seen overwhelming demand, with funding running out in a matter of days. A legislative compromise announced on Tuesday could replenish both programs, with the PPP getting another $320 billion and the EIDL getting $60 billion.