The largest collection of public internet censorship data ever compiled shows that even citizens of what are considered the world’s freest countries aren’t safe from internet censorship.
A team from the University of Michigan used its own Censored Planet tool, an automated censorship tracking system launched in 2018, to collect more than 21 billion measurements over 20 months in 221 countries.
“We hope that the continued publication of Censored Planet data will enable researchers to continuously monitor the deployment of network interference technologies, track policy changes in censoring nations, and better understand the targets of interference,” said Roya Ensafi, U-M assistant professor of electrical engineering and computer science who led the development of the tool.
Poland blocked human rights sites, India same-sex dating sites
Ensafi’s team found that censorship is increasing in 103 of the countries studied, including unexpected places like Norway, Japan, Italy, India, Israel and Poland. These countries, the team notes, are rated some of the world’s freest by Freedom House, a nonprofit that advocates for democracy and human rights.
They were among nine countries where Censored Planet found significant, previously undetected censorship events between August 2018 and April 2020; the other three were Cameroon, Ecuador and Sudan.
While the United States saw a small uptick in blocking, mostly driven by individual companies or internet service providers filtering content, the study did not uncover widespread censorship. However, Ensafi points out that the groundwork for that has been put in place here.
“When the United States repealed net neutrality, they created an environment in which it would be easy, from a technical standpoint, for ISPs to interfere with or block internet traffic,” she said. “The architecture for greater censorship is already in place and we should all be concerned about heading down a slippery slope.”
It’s already happening abroad, the researchers found.
“What we see from our study is that no country is completely free,” said Ram Sundara Raman, U-M doctoral candidate in computer science and engineering and first author of the study. “We’re seeing that many countries start with legislation that compels ISPs to block something that’s obviously bad like child pornography or pirated content.
“But once that blocking infrastructure is in place, governments can block any websites they choose, and it’s a very opaque process. That’s why censorship measurement is crucial, particularly continuous measurements that show trends over time.”
Norway, for example, tied with Finland and Sweden as the world’s freest country according to Freedom House, passed laws requiring ISPs to block some gambling and pornography content beginning in early 2018.
Censored Planet, however, uncovered that ISPs in Norway are imposing what the study calls “extremely aggressive” blocking across a broader range of content, including human rights websites like Human Rights Watch and online dating sites like Match.com.
Similar tactics show up in other countries, often in the wake of large political events, social unrest or new laws. News sites like The Washington Post and The Wall Street Journal, for example, were aggressively blocked in Japan when Osaka hosted the G20 international economic summit in June 2019.
News, human rights and government sites saw a censorship spike in Poland after protests in July 2019, and same-sex dating sites were aggressively blocked in India after the country repealed laws against gay sex in September 2018.
Censored Planet releases technical details for researchers, activists
The researchers say the findings show the effectiveness of Censored Planet’s approach, which turns public internet servers into automated sentries that can monitor and report when access to websites is being blocked.
Running continuously, it takes billions of automated measurements and then uses a series of tools and filters to analyze the data and tease out trends.
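Conceptually, each vantage point’s measurements of a test domain are compared against control measurements that should always succeed, and only consistent divergence is flagged. A minimal sketch of that classification logic (function names and thresholds are illustrative, not Censored Planet’s actual code):

```python
# Sketch: decide whether a test domain appears blocked from one vantage
# point by comparing it against control domains that should never be
# blocked. Names and thresholds are illustrative, not Censored Planet's.

def classify(control_ok: list, test_ok: list, min_trials: int = 3) -> str:
    """Return 'blocked', 'ok', or 'inconclusive' for one vantage point."""
    if len(control_ok) < min_trials or len(test_ok) < min_trials:
        return "inconclusive"
    # If control domains also fail, the vantage point itself is unreliable,
    # so we cannot attribute failures to censorship.
    if sum(control_ok) / len(control_ok) < 0.8:
        return "inconclusive"
    failure_rate = 1 - sum(test_ok) / len(test_ok)
    return "blocked" if failure_rate >= 0.8 else "ok"
```

Using controls this way is what lets an automated system distinguish deliberate interference from ordinary network flakiness across billions of trials.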
The study also makes public technical details about the workings of Censored Planet that Raman says will make it easier for other researchers to draw insights from the project’s data, and help activists make more informed decisions about where to focus.
“It’s very important for people who work on circumvention to know exactly what’s being censored on which network and what method is being used,” Ensafi said. “That’s data that Censored Planet can provide, and tech experts can use it to devise circumventions.”
Censored Planet’s constant, automated monitoring is a departure from traditional approaches that rely on volunteers to collect data manually from inside countries.
Manual monitoring can be dangerous, as volunteers may face reprisals from governments. Its limited scope also means that efforts are often focused on countries already known for censorship, enabling nations that are perceived as freer to fly under the radar.
While censorship efforts generally start small, Raman says they could have big implications in a world that is increasingly dependent on the internet for essential communication needs.
“We imagine the internet as a global medium where anyone can access any resource, and it’s supposed to make communication easier, especially across international borders,” he said. “We find that if this continues, that won’t be true anymore. We fear this could lead to a future where every country has a completely different view of the internet.”
As we near 2021, it seems that the changes to our working life that came about in 2020 are set to remain. Businesses are transforming as companies continue to embrace remote working practices to adhere to government guidelines. What does the next year hold for organizations as they continue to adapt in the age of the Everywhere Enterprise?
We will see the rush to the cloud continue
The pandemic saw more companies than ever move to the cloud as they sought collaboration and productivity tools for employee bases working from home. We expect that surge to continue as more companies realize the importance of the cloud in 2021. Businesses are prepared to preserve these new working models in the long term, some perhaps permanently: Google urged employees to continue working from home until at least next July and Twitter stated employees can work from home forever if they prefer.
Workforces around the world need to continue using alternatives to physical face-to-face meetings, and remote collaboration tools will help. Cloud-based tools are perfect for that kind of functionality, which is partly why many customers that are not yet in the cloud want to be. Customers who have already started the cloud migration journey are also moving more resources to public cloud infrastructure.
People will be the new perimeter
While people will eventually return to the office, they won’t do so full-time, and they won’t return in droves. This shift will close the circle on a long trend that has been building since the mid-2000s: the dissolution of the network perimeter. The network and the devices that defined its perimeter will become even less special from a cybersecurity standpoint.
Instead, people will become the new perimeter. Their identity will define what they’re allowed to access, both inside and outside the corporate network. Even when they are logged into the network, they will have minimal access to resources until they and the device they are using have been authenticated and authorized. This approach, known as zero trust networking, will pervade everything, covering not just employees, but customers, contractors, and other business partners.
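The core zero trust idea, that network location by itself grants nothing, can be sketched as a simple policy check. This is a hypothetical illustration, not any vendor’s implementation; all names are invented:

```python
# Sketch of a zero-trust access decision: being on the corporate network
# grants nothing by itself; identity and device posture decide access.
# All names and fields here are illustrative.

from dataclasses import dataclass

@dataclass
class Request:
    user_authenticated: bool      # e.g. MFA completed
    device_trusted: bool          # device posture check passed
    on_corporate_network: bool    # deliberately ignored by the policy
    resource: str

def allow(req: Request, entitlements: dict, user: str) -> bool:
    """Grant access only to authenticated users on trusted devices,
    and only to resources they are explicitly entitled to."""
    if not (req.user_authenticated and req.device_trusted):
        return False
    return req.resource in entitlements.get(user, set())
```

Note that `on_corporate_network` never appears in the decision: an employee on the office LAN and a contractor at home are evaluated by exactly the same rules.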
User experience will be increasingly important in remote working
Happy, productive workers are even more important during a pandemic, especially as employees are, on average, working three hours longer since the pandemic started, disrupting their work-life balance. It’s up to employers to focus on the user experience and make workers’ lives as easy as possible.
When the COVID-19 lockdown began, companies coped by expanding their remote VPN usage. That got them through the immediate crisis, but it was far from ideal. On-premises VPN appliances suffered a capacity crunch as they struggled to scale, creating performance issues, and users found themselves dealing with cumbersome VPN clients and log-ins. It worked for a few months, but as employees settle in to continue working from home in 2021, IT departments must concentrate on building a better remote user experience.
Old-school remote access mechanisms will fade away
This focus on the user experience will change the way that people access computing resources. In the old model, companies used a full VPN to tunnel all traffic via the enterprise network. This introduced latency issues, especially when accessing applications in the cloud because it meant routing all traffic back through the enterprise data center.
It’s time to stop routing cloud sessions through the enterprise network. Instead, companies should allow remote workers to access them directly. That means either sanitizing traffic on the device itself or in the cloud.
User authentication improvements
Part of that new approach to authentication involves better user verification. That will come in two parts. First, it’s time to ditch the password. The cybersecurity community has advocated this for a long time, but the work-from-home trend will accelerate it. Employees accessing from mobile devices are increasingly using biometric authentication, which is more secure and convenient.
The second improvement to user verification will see people logging into applications less often. Sessions will persist for longer, based on deep agent-based device knowledge that will form a big part of the remote access experience.
Changing customer interactions will require better mobile security
It isn’t just employees who will need better mobile security. Businesses will change the way that they interact with customers too. We can expect fewer person-to-person interactions in retail as social distancing rules continue. Instead, contact-free transactions will become more important and businesses will move to self-checkout options. Retailers must focus more on mobile devices for everything from browsing products, to ordering and payment.
The increase in QR codes presents a growing threat
Retailers and other companies are increasingly using QR codes to provide contact-free access to things like menus and payment systems, and to comply with social distancing rules. Users can scan them from two meters away, making them well suited to payments and product information.
The problem is that QR codes were never designed for these applications or for digital authentication, and they can easily be replaced with malicious codes that manipulate smartphones in unexpected and damaging ways. We can expect QR code fraud to increase as the usage of these codes expands in 2021.
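One basic mitigation is to treat whatever a QR code decodes to as untrusted input and vet it before acting on it. A minimal sketch, with a made-up allowlist of trusted hosts:

```python
# Sketch: validate a URL decoded from a QR code before opening it.
# The allowlisted domains below are made-up examples.

from urllib.parse import urlsplit

ALLOWED_HOSTS = {"menu.example-restaurant.com", "pay.example-bank.com"}

def is_safe_qr_url(payload: str) -> bool:
    """Accept only https URLs pointing at an explicitly trusted host."""
    try:
        parts = urlsplit(payload)
    except ValueError:
        return False
    if parts.scheme != "https":   # rejects http, javascript:, tel:, etc.
        return False
    return parts.hostname in ALLOWED_HOSTS
```

A scanner app applying a check like this would refuse a sticker pasted over a restaurant’s real code that points somewhere else.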
The age of the Everywhere Enterprise
One overarching message came through clearly in our conversations with customers: the enterprise changed for the longer term in 2020, and this will have profound effects in 2021. What began as a rushed reaction during a crisis this year will evolve during the next as the IT department joins HR in rethinking employee relationships in the age of the everywhere enterprise.
If 2020 was the year that businesses fell back on the ropes, 2021 will be the one where they bounce forward, moving from a rushed reaction into a thoughtful, measured response.
76% of Americans believe they’ve encountered disinformation firsthand, and 20% say they’ve shared information later shown to be incorrect or intentionally misleading, according to research released by NortonLifeLock.
Disinformation, or false information intended to mislead or deceive people, is commonly spread by social media users and bots – automated accounts controlled by software – with the intent to sow division among people, create confusion, and undermine confidence in the news surrounding major current events, such as the 2020 U.S. presidential election, COVID-19 and social justice movements.
“Disinformation campaigns can spread like wildfire on social media and have a long-lasting impact, as people’s opinions and actions may be influenced by the false or misleading information being circulated.”
Fact-checking can stop the spread of disinformation
No matter who or what posts the information, fact-checking is a best practice for consumers to help stop the spread of disinformation. According to the online survey of more than 2,000 US adults, 53% of Americans often question whether information they see on social media is disinformation or fact.
86% of Americans agree that disinformation has the ability to greatly influence someone’s opinion, but 58% acknowledge that disinformation could influence them.
Although 82% of Americans are very concerned about the spread of disinformation, 21% still say social media companies do not have the right to remove it from their platform, with Republicans being almost twice as likely as Democrats to feel this way (25% vs. 13%).
“From disinformation campaigns to deepfakes, it’s becoming increasingly difficult for people to tell real from fake online,” added Kats. “It’s important to maintain a healthy dose of skepticism and to fact check multiple sources – especially before sharing something – to help avoid spreading disinformation.”
- More than a third of Americans don’t know the true purpose of disinformation. Only 62% of Americans know that disinformation is created to cause a divide or rift between people; 72% of both Republicans and Democrats believe disinformation is created for political gain.
- 79% of Americans believe social media companies have an obligation to remove disinformation from their platforms, with the majority of Democrats (87%), Republicans (75%) and Independents (75%) supporting this.
- Democrats and Republicans disagree on who spreads disinformation the most, with Republicans most commonly stating news media outlets are most likely to spread disinformation (36%), and Democrats stating it’s U.S. politicians (28%).
- Disinformation has taken a toll on relationships, with many Americans having argued with someone (36%), unfriended/unfollowed someone on social media (30%), or taken a break from social media altogether (28%) because of disinformation.
Since the middle of the 20th century, commercial advertising and marketing techniques have made their way into the sphere of political campaigns. The tactics associated with surveillance capitalism – the commodification of personal data for profit as mastered by companies like Google and Facebook – have followed the same path.
The race between competing political campaigns to out-collect, out-analyze and out-leverage voter data has raised concerns about the damaging effects it has on privacy and democratic participation, but also about the fact that all of this data, if seized by adversarial nation-states, opens up opportunities for affecting an election and sowing electoral chaos.
Let’s start by looking at the information available to political campaigns. Typically, everything begins and ends with the voter file, which is a compendium of information that’s rooted in public data about an individual voter, including their party affiliation and voting frequency. The goal for political operatives is to continually enrich this information and to do so better and faster than their political rivals.
Campaign field workers add to voter files with written notes reflecting conversations with and observations of actual voters. But the real magic happens when this data is augmented with other datasets that are purchased directly from a data broker or shared from outside political groups through the national party’s data exchange.
Consumer information supplied by data brokers typically draws from voters’ digital activities (such as smartphone app activity) as well as offline activities (like credit card purchases), often presenting hundreds of attributes. In addition to data on things like income and occupation, additional datapoints enable campaigns to infer a variety of lifestyle preferences and attitudes.
Within this category of consumer information, voters’ location histories have an outsized value to campaigns. For monetization purposes, many popular smartphone apps, with users’ permission, track their locations and then make this data available to data brokers or advertisers. This location data can reveal extremely private information, including where an individual lives and how often they attend religious services. Though the data is meant to be anonymous, companies can tie the data to an individual’s identity by matching their smartphone’s advertising ID number or their presumed home address with other information.
In addition to purchased data, presidential campaigns have another tool for getting information directly from supporters: the campaign app. These apps allow candidates to speak directly to voters and are intended to increase engagement through gamification or other means. But perhaps the more important driver is that these apps can serve as a huge source of data. The Trump 2020 app, for example, makes extensive permission requests, including for access to a smartphone’s identity and Bluetooth. The app can potentially sniff out much of the information on a user’s device, including their app usage.
With this trove of data at their disposal, the next step for campaigns is to combine the various datasets together into a single voter list, matching specific voters to the commercial data provided. The data is then run through custom-built models, the end result of which is that voters are put into granular segments and scored on certain issues.
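Mechanically, that matching step is a join keyed on some shared identifier, followed by segmentation. A toy sketch with fabricated fields and rules (no real campaign modeling is implied):

```python
# Toy sketch of voter-file enrichment: join a voter file with purchased
# consumer attributes on a shared ID, then bucket voters into segments.
# All fields, IDs, and scoring rules are fabricated for illustration.

def enrich(voter_file: list, consumer_data: dict) -> list:
    """Attach consumer attributes to each voter record by ID."""
    return [{**v, **consumer_data.get(v["voter_id"], {})} for v in voter_file]

def segment(voter: dict) -> str:
    """Crude segmentation: frequent voters get turnout messaging,
    infrequent but registered voters get persuasion messaging."""
    if voter.get("vote_frequency", 0) >= 3:
        return "mobilize"
    return "persuade"
```

Real campaign models score voters on dozens of issues rather than one rule, but the pipeline shape, join then score then segment, is the same.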
Armed with these insights, campaigns can then find the voters they need to target, including voters who are potentially receptive but currently disengaged and voters who previously supported the candidate or party but have lost enthusiasm. Campaigns can also use their data learnings to boost turnout among decided voters, to register unregistered voters and even to suppress support for the opposition candidate.
But despite the value of this data to campaigns, securing it isn’t always a priority. The reality is that political campaigns are fast-moving operations where the focus is on reaching voters and raising money, not cybersecurity. As just one example of this poor data stewardship, close to 15 million records on Texas voters were found on an exposed and unsecured server just months before the 2018 midterm elections.
If another country were looking to meddle in our elections, such data could potentially be stolen and then weaponized in ways that could tip the scales for one preferred candidate or simply undermine democratic principles.
Some scenarios include:
- The adversarial country dumps the stolen voter data online, creating a liability for the campaign from which the data was stolen (or at the very least, creating a distraction from the campaign’s messaging).
- In an attempt to silence the opposing campaign’s high-profile supporters, the adversary doxes them using embarrassing or intensely private details gleaned from the stolen data.
- The adversary spoofs the opposing campaign through text message, sharing disinformation about the candidate or the voting process directly to the candidate’s cadre of supporters.
- Using a political action committee as a front, the adversary sets up a massive digital advertising scheme microtargeted to the opposition candidate’s softer supporters with messages designed to chip away at their enthusiasm for voting.
- Leveraging psychometric insights from the stolen data, the adversary finds the opposing campaign’s ardent supporters who may be most susceptible to manipulation and then, posing as the campaign, lures the supporters into actions designed to make the campaign seem guilty by association once publicized.
In retrospect, the harvesting of data popularly associated with Cambridge Analytica wasn’t an aberration so much as it was a harbinger of the digital arms race to come in electoral politics, a race to gather as much information about citizens’ locations, habits and beliefs as possible for the purposes of better informing campaign strategies and delivering optimized messaging to individual voters.
In the absence of a national data privacy law or stricter campaign data regulations, there’s very little that any one of us can do, short of living off the grid, to prevent our personal data from being fodder for campaigns and threat actors alike. In the meantime, you may choose to reward the candidates who most respect your data and your privacy by giving them your vote.
It’s safe to assume that we need to protect presidential election data, since it’s one of the most critical sets of information available. Not only does it ensure the legitimacy of elections and the democratic process, but it may also contain personal information about voters. Given its value and sensitivity, it only makes sense that this data would be a target for cybercriminals looking for some notoriety – or a big ransom payment.
In 2016, more needed to be done to protect the election and its data from foreign interference and corruption. This year, both stringent cybersecurity and backup and recovery protocols should be implemented in anticipation of sophisticated foreign interference.
Cybersecurity professionals in government and the public sector should look to the corporate world and mimic – and if possible improve upon – the policies and procedures being applied to keep data safe. Particularly as voting systems become more digitized, the likelihood of IT issues increases, so it’s essential to have a data protection plan in place to account for these challenges and emerging cyber threats.
The risk of ransomware in 2020
Four years ago, ransomware attacks impacting election data were significantly less threatening. Today, however, the thought of cybercriminals holding election data hostage in exchange for a record-breaking sum of money sounds entirely plausible. A recent attack on Tyler Technologies, a software provider for local governments across the US, highlighted the concerns held across the nation and left many to wonder if the software providers in charge of presidential election data might suffer a similar fate.
Regardless of whether data is recoverable, ransomware attacks typically cause IT downtime as security teams attempt to prevent the attack from spreading. While this is the best practice to follow to contain the malware, the impacts of system downtime on the day of the election could be catastrophic. To combat this, government officials should look for solutions that offer continuous availability technology.
The best defense also integrates cybersecurity and data protection, as removing segmentation streamlines the process of detecting and responding to attacks, while simultaneously recovering systems and data. This will simplify the process for stressed-out government IT teams already tasked with dealing with the chaos of election day.
Developing a plan to protect the presidential election
While ransomware is a key concern, it isn’t the only threat that election data faces. The 2016 election revealed to what degree party election data could be interfered with. Now that we know the risks, we also know that focusing solely on cybersecurity without a backup plan in place isn’t enough to keep this critical data secure.
The first step to any successful data protection plan is a robust backup strategy. Since the databases or cloud platforms that compile voter data are likely to be big targets, government security pros should store copies of that data in multiple locations to reduce the chance that one attack takes down an entire system. Ideally, they should follow the 3-2-1 rule by keeping three copies of data, in two locations, with one offsite or in the cloud.
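The 3-2-1 rule can even be checked mechanically against a backup inventory. A small sketch, with an inventory format invented for illustration:

```python
# Sketch: verify a backup inventory satisfies the 3-2-1 rule:
# at least 3 copies, on at least 2 distinct media types, with at
# least 1 copy offsite. The record format is invented for illustration.

def satisfies_321(copies: list) -> bool:
    media_types = {c["media"] for c in copies}
    offsite_copies = [c for c in copies if c["offsite"]]
    return (len(copies) >= 3
            and len(media_types) >= 2
            and len(offsite_copies) >= 1)
```

An audit script running a check like this nightly would flag, say, a configuration where all three copies sit on disks in the same server room.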
It’s also important to protect these backups with the same level of care as you would critical IT infrastructure. Backups are only helpful if they’re clean and easily accessible – particularly for a time-sensitive situation like the presidential election, it’s important to be able to recover backed-up data as quickly as possible. The last thing government officials need is missing or inaccessible votes on election day.
The need to protect this data doesn’t end when voting does, however. Government IT pros also must consider implementing a strategy for protecting stored voter data long-term. Compliance with data privacy regulations surrounding voter data is key to maintaining a fair democratic process, so they should make sure to consider any local regulations that may dictate how this data is stored and accessed. Protection that extends after the election will also be important for safeguarding against cyberattacks that might target this data down the line.
Not only could cyberattacks hold voter data hostage, they may also affect how quickly the results of the election can be determined. Voter data that is lost altogether might cause an entire election to be called a fraud. This would have a far-reaching impact on people across America, and our democratic process as a whole. Luckily, this is avoidable with a data protection and ransomware response plan that gets government officials prepared for when an attack happens.
2020 presented us with many surprises, but the world of data privacy somewhat bucked the trend. Many industry verticals suffered losses, uncertainty and closures, but the protection of individuals and their information continued to truck on.
After many websites simply blocked access unless you accepted their cookies (a practice now deemed unlawful), we received clarity on cookies from the European Data Protection Board (EDPB). With the ending of Privacy Shield, we witnessed the cessation of a legal basis for cross-border data transfers.
Severe fines levied for General Data Protection Regulation (GDPR) non-compliance showed organizations that the regulation is far from toothless and that data protection authorities are not easing up just because there is an ongoing global pandemic.
What can we expect in 2021? Undoubtedly, the number of data privacy cases brought before the courts will continue to rise. That’s not necessarily a bad thing: with each case comes additional clarity and precedent on many different areas of the regulation that, to date, are open to interpretation and conjecture.
Last time I spoke to the UK Information Commissioner’s Office regarding a technicality surrounding data subject access requests (DSARs) submitted by a representative, I was told that I was far from the only person enquiring about it, and this only illustrates some of the ambiguities faced by those responsible for implementing and maintaining compliance.
Of course, this is just the GDPR. There are many other data privacy legislative frameworks to consider. We fully expect 2021 to bring full and complete alignment of the ePrivacy Regulations with the GDPR, and to eradicate the conflict that exists today, particularly around consent and the soft opt-in, where the GDPR is very clear but the current Privacy and Electronic Communications Regulations (PECR) are not.
These developments are just inside Europe; across the globe, we’re seeing the continued spread of data localization laws, which organizations are mandated to adhere to. In the US, the California Consumer Privacy Act (CCPA) has kickstarted a swathe of data privacy reforms within many states, with many calls for something similar at the federal level.
The following year(s) will see that build and, much like with the GDPR, precedent-setting cases will be needed to provide more clarity on the rules. Will the US look to replace the shattered Privacy Shield framework, or will it adopt Standard Contractual Clauses (SCCs) more widely? SCCs are a very strong legal basis, provided the clauses are updated to align with the GDPR (something else we’d expect to see in 2021), and I suspect the US will take this road as it increasingly recognizes the importance of trade with the EU.
Other noteworthy movements in data protection laws are happening in Russia, where amendments to the Federal Law on Personal Data are taking a closer look at TLS as a protective measure, and in the Philippines, where the existing data protection law is set to be replaced by a new bill (currently a work in progress, but it’s coming).
One of the biggest events of 2021 will be the UK leaving the EU. The British implementation of the GDPR comes in the form of the UK Data Protection Act 2018. Aside from a few derogations, it’s the GDPR, and that’s great… as far as it goes. Having strong local data privacy laws is good, but after enjoying 47 years (at the time of writing) of free movement within the Union, how will being outside of the EU affect British business?
It is thought and hoped that the UK will be granted an adequacy decision fairly swiftly, given that historically local UK laws have aligned with those inside the Union, but there is no guarantee. The uncertainty around how data transfers will look in future might result in British industry using more SCCs. The currently low-priority plans to make Binding Corporate Rules (BCR) easier and more affordable will come sharply to the fore as demand for them goes up.
One thing is certain, it’s going to be a fascinating year for data privacy and we are excited to see clearer definitions, increased certification, precedent-setting case law and whatever else unfolds as we continue to navigate a journey of governance, compliance and security.
The Sandworm Team hacking group is part of Unit 74455 of the Russian Main Intelligence Directorate (GRU), the US Department of Justice (DoJ) claimed on Monday as it unsealed an indictment against six alleged members of the group.
Sandworm Team attacks
“These GRU hackers and their co-conspirators engaged in computer intrusions and attacks intended to support Russian government efforts to undermine, retaliate against, or otherwise destabilize: Ukraine; Georgia; elections in France; efforts to hold Russia accountable for its use of a weapons-grade nerve agent, Novichok, on foreign soil; and the 2018 PyeongChang Winter Olympic Games after Russian athletes were banned from participating under their nation’s flag, as a consequence of Russian government-sponsored doping effort,” the DoJ alleges.
“Their computer attacks used some of the world’s most destructive malware to date, including: KillDisk and Industroyer, which each caused blackouts in Ukraine; NotPetya, which caused nearly $1 billion in losses to the three victims identified in the indictment alone; and Olympic Destroyer, which disrupted thousands of computers used to support the 2018 PyeongChang Winter Olympics.”
At the same time, the UK National Cyber Security Centre says that it assesses “with high confidence” that the group had been actively targeting organizations involved in the 2020 Olympic and Paralympic Games before they were postponed.
“In the attacks on the 2018 Games, the GRU’s cyber unit attempted to disguise itself as North Korean and Chinese hackers when it targeted the opening ceremony. It went on to target broadcasters, a ski resort, Olympic officials and sponsors of the games. The GRU deployed data-deletion malware against the Winter Games IT systems and targeted devices across the Republic of Korea using VPNFilter,” the UK NCSC said.
“The NCSC assesses that the incident was intended to sabotage the running of the Winter Olympic and Paralympic Games, as the malware was designed to wipe data from and disable computers and networks. Administrators worked to isolate the malware and replace the affected computers, preventing potential disruption.”
The UK government confirmed their prior assessments that many of the aforementioned attacks had been the work of the Russian GRU.
Sandworm Team hackers
Sandworm Team (aka “Telebots,” “Voodoo Bear,” “Iron Viking,” and “BlackEnergy”) is the group behind many conspicuous attacks over the past half-decade, the DoJ claims, all allegedly performed under the aegis of the Russian government.
The six alleged Sandworm Team hackers against whom the indictments have been brought were each responsible for a variety of tasks:
One of them, Anatoliy Kovalev, has been previously charged by a US court “with conspiring to gain unauthorized access into the computers of US persons and entities involved in the administration of the 2016 US elections,” the DoJ noted.
The US investigation into the group lasted several years and was aided by Ukrainian and Georgian authorities, the governments of the Republic of Korea and New Zealand, the United Kingdom’s intelligence services, victims, and several IT and IT security companies.
Political and other ramifications
Warrants for the arrest of the six alleged Sandworm Team members have been issued, but the chances that arrests will ever be made are slim to nonexistent.
The Russian government’s official position is that the accusations are baseless and part of an “information war against Russia”.
Russia starts responding to “accusations” of hacking operations. Chairman of the State Parliament committee on international affairs Dmitry Novikov says this is part of “information war against Russia”. https://t.co/ifSuCM23VN
— Lukasz Olejnik (@lukOlejnik) October 20, 2020
It is unusual for the US to mount criminal charges against intelligence officers engaged in cyber-espionage operations outside the US. The rationale here is that many of the attacks had real-world consequences: they were aimed at undermining the targeted countries’ governments and destabilizing the countries themselves, and they affected individuals, civilian critical infrastructure (including organizations in the US), and private sector companies.
“The crimes committed by Russian government officials were against real victims who suffered real harm. We have an obligation to hold accountable those who commit crimes – no matter where they reside and no matter for whom they work – in order to seek justice on behalf of these victims,” commented US Attorney Scott W. Brady for the Western District of Pennsylvania.
There are currently no laws or international norms regulating cyber attacks and cyber espionage in peacetime, but earlier this year Russian Federation president Vladimir Putin called for an agreement between Russia and the US that would guarantee the two nations would not try to meddle in each other’s elections and internal affairs via “cyber” means.
This latest round of indictments by the US is unlikely to act as a deterrent but, as Dr. Panayotis Yannakogeorgos recently told Help Net Security, indictments and public attribution of attacks serve several other purposes.
Another interesting result of this indictment may be felt by insurance companies and their customers that have suffered disruption due to cyber attacks mounted by nation-states. Some of their insurance policies may not cover cyber incidents that could be considered an “act of war” (e.g., the NotPetya attacks).
Increasingly demanded by consumers, data privacy laws can create onerous burdens on even the most well-meaning businesses. California presents plenty of evidence to back up this statement, as more than half of organizations that do business in California still aren’t compliant with the California Consumer Privacy Act (CCPA), which went into effect earlier this year.
As companies struggle with their existing compliance requirements, many fear that a new privacy ballot initiative – the California Privacy Rights Act (CPRA) – could complicate matters further. While it’s true that if passed this November, the CPRA would fundamentally change the way businesses in California handle both customer and employee data, companies shouldn’t panic. In fact, this law presents an opportunity for organizations to change their relationship with employee data to their benefit.
CPRA, the Californian GDPR?
Set to appear on the November 2020 ballot, the CPRA, also known as CCPA 2.0 or Prop 24 (its name on the ballot), builds on what is already the most comprehensive data protection law in the US. In essence, the CPRA will bring data protection in California nearer to the current European legal standard, the General Data Protection Regulation (GDPR).
In the process of “getting closer to GDPR,” the CCPA would gain substantial new components. Besides enhancing consumer rights, the CPRA also creates new provisions for employee data as it relates to their employers, as well as data that businesses collect from B2B business partners.
Although controversial, the CPRA is likely to pass. August polling shows that more than 80% of voters support the measure. However, many businesses do not. This is because, at first glance, the CPRA appears to create all kinds of legal complexities in how employers can and cannot collect information from workers.
Fearful of having to meet the same demanding requirements as their European counterparts, many organizations view the prospect of the CPRA becoming law with trepidation. In reality, if the CPRA passes, it might not be as scary as some businesses think.
CPRA and employment data
The CPRA is actually a lot more lenient than the GDPR in how it polices the relationship between employers and employees’ data. Unlike its EU equivalent, the proposed Californian law already contains many exceptions acknowledging that worker-employer relations are not like consumer-vendor relations.
Moreover, the CPRA extends the CCPA exemption for employers, set to end on January 1, 2021. This means that if the CPRA passes into law, employers would be released from both their existing and potential new employee data protection obligations for two more years, until January 1, 2023. This exemption would apply to most provisions under the CPRA, including the personal information collected from individuals acting as job applicants, staff members, employees, contractors, officers, directors, and owners.
However, employers would still need to provide notice of data collection and maintain safeguards for personal information. It’s highly likely that during this two-year window, additional reforms would be passed that might further ease employer-employee data privacy requirements.
Nonetheless, employers should act now
While the CPRA won’t change much overnight, impacted organizations shouldn’t wait to take action, but should take this time to consider what employee data they collect, why they do so, and how they store this information.
This is especially pertinent now that businesses are collecting more data than ever on their employees. With workplace monitoring companies like Prodoscore reporting that interest from prospective customers has risen by 600% since the pandemic began, we are seeing rapid growth in companies looking to monitor how, where, and when their employees work.
This trend emphasizes the fact that the information flow between companies and their employees is mostly one-sided (i.e., from the worker to the employer). Currently, businesses have no legal requirement to be transparent about this information exchange. That will change for California-based companies if the CPRA comes into effect and they will have no choice but to disclose the type of data they’re collecting about their staff.
The only sustainable solution for impacted businesses is to be transparent about their data collection with employees and work towards creating a “culture of privacy” within their organization.
Creating a culture of privacy
Rather than viewing employee data privacy as some perfunctory obligation where the bare minimum is done for the sake of appeasing regulators, companies need to start thinking about worker privacy as a benefit. Presented as part of a benefits package, comprehensive privacy protection is a perk that companies can offer prospective and existing employees.
Privacy benefits can include access to privacy protection services that extend beyond the workplace. Packaged alongside privacy awareness training and education, these services can form a “privacy-plus” package offered to employees alongside standard perks like health or retirement plans. Doing so will build a culture of privacy, which can help companies ensure they’re in regulatory compliance while also making it easier to attract qualified talent and retain workers.
It’s also worth bearing in mind that creating a culture of privacy doesn’t necessarily mean that companies have to stop monitoring employee activity. In fact, employees are less worried about being watched than they are about the possibility of their employers misusing their data. Their fears are well-founded: although over 60% of businesses today use workforce data, only 3 in 10 business leaders are confident that this data is treated responsibly.
For this reason, companies that want to keep employee trust and avoid bad PR need to prioritize transparency. This could mean drawing up a “bill of rights” that lets employees know what data is being collected and how it will be used.
Research into employee satisfaction backs up the value of transparency. Studies show that while only 30% of workers are comfortable with their employer monitoring their email, the number of employees open to the use of workforce data goes up to 50% when the employer explains the reasons for doing so. This number further jumps to 92% if employees believe that data collection will improve their performance or well-being or come with other personal benefits, like fairer pay.
On the other hand, most employees would leave an organization if its leaders did not use workplace data responsibly. Moreover, 55% of candidates would not even apply for a job with such an organization in the first place.
With many exceptions for workplace data management already built in and more likely to come down the line, most employers should be able to easily navigate the stipulations the CPRA entails.
That being said, if it becomes law this November, employers shouldn’t misuse the two-year window they have to prepare for new compliance requirements. Rather than seeing this time as breathing space before a regulatory crackdown, organizations should instead use it to be proactive in their approach to how they manage their employees’ data. As well as just ensuring they comply with the law, businesses should look at how they can turn employee privacy into an asset.
As data privacy stays at the forefront of employees’ minds, businesses that can show they have a genuine privacy culture will be able to gain an edge when it comes to attracting and retaining talent and, ultimately, coming out on top.
Government and financial service sectors globally are the most hack-resistant industries in 2020, according to Synack.
Government and financial services scored 15 percent and 11 percent higher, respectively, than all other industries in 2020. Government agencies earned the top spot in part due to reducing the time it takes to remediate exploitable vulnerabilities by 73 percent.
Throughout the year, both sectors faced unprecedented challenges due to the global pandemic, but still maintained a commitment to thorough and continuous security testing that lessened the risk from cyberattacks.
“It’s a tremendously tough time for all organizations amidst today’s uncertainties. Data breaches are the last thing they need right now. That’s why it’s more crucial than ever to quickly find and fix potentially devastating vulnerabilities before they cause irreparable harm,” said Jay Kaplan, CEO at Synack. “If security isn’t a priority, trust can evaporate in an instant.”
The government sector earned 61 — the highest rating
The chaos of 2020 added new hardship to many government bodies, but security hasn’t necessarily suffered as many agencies have become more innovative and agile. Their ability to quickly remediate vulnerabilities drove this year’s top ranking.
Financial services scored 59 amidst massive COVID-19 disruptions
Financial services adapted quickly through the pandemic to help employees adjust to their new remote work realities and ensure customers could continue doing business. Continuous security testing played a significant role in the sector’s ARS.
Healthcare and life sciences scored 56 despite pandemic challenges
The rush to deploy apps to help with the COVID-19 recovery led to serious cybersecurity challenges for healthcare and life sciences. Despite those issues, the sector had the third highest average score as research and manufacturing organizations stayed vigilant and continuously tested digital assets.
ARS scores increase 23 percent from continuous testing
For organizations that regularly release updated code or deploy new apps, point-in-time security analysis will not pick up potentially catastrophic vulnerabilities. A continuous approach to testing helps ensure vulnerabilities are found and fixed quickly, resulting in a higher ARS metric.
The National Institute of Standards and Technology (NIST) has published a cybersecurity practice guide enterprises can use to recover from data integrity attacks, i.e., destructive malware and ransomware attacks, malicious insider activity or simply mistakes by employees that have resulted in the modification or destruction of company data (emails, employee records, financial records, and customer data).
About the guide
Ransomware is currently one of the most disruptive scourges affecting enterprises. While it would be ideal to detect the early warning signs of a ransomware attack to minimize its effects or prevent it altogether, there are still too many successful incursions that organizations must recover from.
Special Publication (SP) 1800-11, Data Integrity: Recovering from Ransomware and Other Destructive Events can help organizations to develop a strategy for recovering from an attack affecting data integrity (and to be able to trust that any recovered data is accurate, complete, and free of malware), recover from such an event while maintaining operations, and manage enterprise risk.
The goal is to monitor and detect data corruption in widely used as well as custom applications, and to identify what data was altered or corrupted, when, by whom, what the impact of the action was, and whether other events happened at the same time. Finally, the guide advises organizations on how to restore data to its last known good configuration and how to identify the correct backup version.
“Multiple systems need to work together to prevent, detect, notify, and recover from events that corrupt data. This project explores methods to effectively recover operating systems, databases, user files, applications, and software/system configurations. It also explores issues of auditing and reporting (user activity monitoring, file system monitoring, database monitoring, and rapid recovery solutions) to support recovery and investigations,” the authors added.
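The detect-and-restore workflow the guide describes can be illustrated with a minimal file-integrity sketch: record a cryptographic baseline of known-good data, then compare the current state against it to find what was modified, deleted, or added. The function names and structure here are illustrative, not taken from the NIST publication.

```python
import hashlib
from pathlib import Path

def build_manifest(root: str) -> dict:
    """Record a SHA-256 baseline for every file under `root`."""
    manifest = {}
    for path in sorted(Path(root).rglob("*")):
        if path.is_file():
            manifest[str(path.relative_to(root))] = hashlib.sha256(
                path.read_bytes()
            ).hexdigest()
    return manifest

def detect_changes(root: str, baseline: dict) -> dict:
    """Compare the current state of `root` against a baseline manifest,
    classifying each difference so the right backup can be restored."""
    current = build_manifest(root)
    return {
        "modified": [p for p in baseline if p in current and current[p] != baseline[p]],
        "deleted": [p for p in baseline if p not in current],
        "added": [p for p in current if p not in baseline],
    }
```

A real deployment would store manifests off-host and pair detection with versioned backups, so that a flagged file can be rolled back to its last known good state.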
The National Cybersecurity Center of Excellence (NCCoE) at NIST used specific commercially available and open-source components when creating a solution to address this cybersecurity challenge, but noted that each organization’s IT security experts should choose products that will best work for them by taking into consideration how they will integrate with the IT system infrastructure and tools already in use.
The NCCoE tested the setup against several test cases (ransomware attack, malware attack, user modifies a configuration file, administrator modifies a user’s file, database or database schema altered in error by an administrator or script). Additional materials can be found here.
Confidence levels in securing the election are low, and declining, according to an ISACA survey of more than 3,000 IT governance, risk, security and audit professionals in the US.
While federal, state and local governments continue to harden election infrastructure technical controls and security procedures, 56 percent of respondents are less confident in election security since the pandemic started—signaling the need for greater education of the electorate and training of election personnel to drive awareness and trust.
Respondents say they believe that funding, legislation, technical controls and election infrastructure are all inadequate, including 63 percent who are not confident in the resilience of election infrastructure, and 57 percent who believe that funding is not sufficient to prevent hacking of elections.
Top threats to election security
Respondents identified the following as the top threats to election security:
- Misinformation/disinformation campaigns (73%)
- Tampering with tabulation of voter results (64%)
- Hacking or tampering with voter registration rolls (62%)
- Hacking or tampering with voting machines (62%)
The combination of low confidence and high perception of threats requires a call to action, according to retired Brigadier General Greg Touhill, ISACA board director and president of the AppGate Federal Group. “The overwhelming majority of localities have sound election security procedures in place, but the public’s perception does not match the reality.”
“This means that governments, from the county level on up, need to clearly and robustly communicate about what they are doing to secure their election infrastructure. As the study indicates, the most real threat to the election—impacting all candidates from all parties—is misinformation and disinformation campaigns.”
How to ensure voter confidence and accountability
The survey found that respondents believed the following actions could help ensure voter confidence and accountability:
- Educating the electorate about misinformation (65%)
- Using electronic voting machines with paper audit trails (64%)
- Increased training for election and election security personnel (62%)
The Internet Society has launched the first-ever regulatory assessment toolkit that defines the critical properties needed to protect and enhance the future of the Internet.
The Internet Impact Assessment Toolkit is a guide to help ensure regulation, technology trends and decisions don’t harm the infrastructure of the Internet. It describes the Internet at its optimal state – a network of networks that is universally accessible, decentralized and open; facilitating the free and efficient flow of knowledge, ideas and information.
Critical properties of the Internet Impact Assessment Toolkit
The five critical properties identified by the Internet Society’s Internet Way of Networking (IWN) framework are:
- An accessible infrastructure with a common protocol – A ‘common language’ enabling global connectivity and unrestricted access to the Internet.
- An open architecture of interoperable and reusable building blocks – Open infrastructure with a set of standards enabling permission-free innovation.
- Decentralized management and a single distributed routing system – Distributed routing enabling local networks to grow, while maintaining worldwide connectivity.
- Common global identifiers – A single common identifier allowing computers and devices around the world to communicate with each other.
- A technology neutral, general-purpose network – A simple and adaptable dynamic environment cultivating infinite opportunities for innovation.
When combined, these properties form the unique foundation that underpins the Internet’s success and are essential for its healthy evolution. The closer the Internet aligns with the IWN, the more open and agile it is for future innovation and the broader benefits of collaboration, resiliency, global reach and economic growth.
“The Internet’s ability to support the world through a global pandemic is an example of the Internet Way of Networking at its finest,” explains Joseph Lorenzo Hall, Senior VP for a Strong Internet, Internet Society. “Governments didn’t need to do anything to facilitate this massive global pivot in how humanity works, learns and socializes. The Internet just works – and it works thanks to the principles that underpin its success.”
A resource for policymakers and technologists
The Internet Impact Assessment Toolkit will serve as an important resource to help policymakers and technologists ensure trends in regulatory and technical proposals don’t harm the unique architecture of the Internet. The toolkit explains why each property of the IWN is crucial to the Internet and the social and economic consequences that can arise when any of these properties are damaged.
For instance, the Toolkit shows how China’s restrictive networking model severely impacts its global reach and hinders collaboration with networks beyond its borders. It also highlights how the US administration’s Clean Network proposal challenges the Internet’s architecture by dictating how networks interconnect according to political considerations rather than technical considerations.
“We’re seeing a trend of governments encroaching on parts of the Internet’s infrastructure to try and solve social and political problems through technical means. Ill-informed regulation can drastically alter the Internet’s fundamental architecture and harm the ecosystem that supports it,” continues Hall. “We’re giving both policymakers and Internet users the information and tools to make sure they don’t break this resource that brings connectivity, innovation, and empowerment to everyone.”
The 2020 United States presidential election is already off to a rocky start. We’ve seen technology fail in the primary elections, in-person campaigning halted, and a plethora of mixed messages on how voting will actually take place. Many Americans are still uncertain where or how they will vote in November – or worse, they’re unsure if their vote will be tabulated correctly.
For most of us, voting by anything other than a paper ballot or a voting machine is a foreign concept. Due to the pandemic and shelter in place restrictions, various alternatives have been considered this year — in particular, voting via our mobile devices.
On paper, it might seem like COVID-19 has created the ideal opportunity to introduce voting options that utilize the millions of mobile phones and tablets in U.S. voters’ hands. The reality is, our country is not ready to utilize this technology in a safe and protected way.
Here are the four things holding back mobile voting:
Testing and scalability
If we have learned anything from the Iowa Caucus app failure, it is that testing for scalability is key. Prior to Election Day, we must confirm that every voter will be able to vote from their mobile device from any location, all at the same time, without the system crashing.
This is no small feat: newly deployed code almost always has faults, and if a voting app has not undergone rigorous testing at scale by now (less than 75 days from Election Day), it is highly unlikely that it could be sufficiently tested and distributed in time.
Verification and secret ballots
Tying an identity to a user and phone negates the concept of an anonymized ballot, something we’re entitled to as eligible voters. If the vote is cast via a mobile device — especially if there is some way of reconciling the paper ballot back to the electronic vote — then there has to be an identity key that is used to correlate them.
Verifying the identity of the voter and their device and doing it in a way that also allows for secret ballots is a critical challenge to overcome if mobile voting is ever to become a reality.
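One way to reason about this challenge is to separate the system that verifies identity from the system that records votes, so no single database can link a voter to a ballot. The sketch below is a deliberately simplified illustration of that separation (all class and method names are ours); a real design would need blind signatures or similar cryptography so that even the token issuer cannot correlate tokens with voters.

```python
import secrets

class Registrar:
    """Verifies a voter's identity and issues one anonymous, single-use
    token. It deliberately does NOT record which token went to whom."""
    def __init__(self):
        self._served = set()       # voter IDs that already received a token
        self.valid_tokens = set()  # shared with the ballot box for validation

    def issue_token(self, voter_id: str) -> str:
        if voter_id in self._served:
            raise ValueError("voter already received a token")
        self._served.add(voter_id)
        token = secrets.token_hex(16)
        self.valid_tokens.add(token)
        return token

class BallotBox:
    """Stores (token, vote) pairs only -- it never learns identities."""
    def __init__(self, valid_tokens: set):
        self._valid = valid_tokens
        self._votes = {}

    def cast(self, token: str, vote: str) -> None:
        if token not in self._valid or token in self._votes:
            raise ValueError("invalid or already-used token")
        self._votes[token] = vote
```

Even this toy version shows the tension the article describes: the moment any component retains a mapping from identity to token, the secret ballot is lost.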
Trust and auditability
Even if the kinks in mobile voting are worked out, how can we ensure overall trust in the system? Not only do we need to trust that our vote was cast, but that it was cast in a way that is private, secure, and for the person it was intended. If there is no reconciliation with the paper ballot, how are any risk-limiting audits conducted? Without an auditable system, it is impossible to win the trust of the electorate, which is an absolute necessity ahead of a process as integral to our country as voting.
QR code risks
Chances are, voters would be directed to a voting website via a QR code. While the reliance on distributed ledger technology — even with a cryptographic signature that is highly resistant to alteration — provides a strong method of recording and tabulating votes, it is still not cyber-invincible.
QR codes are not “readable” by humans. Therefore, the ability to alter a QR code to point to an alternative resource without being detected is simple and highly effective. The target of the QR code could result in compromise of credentials, phishing, and malicious code downloads.
Most significantly in this scenario, the QR code could redirect the voter to a site where their vote is captured, altered, returned to the device or forwarded on to the actual site, and when the voter signs the affidavit and submits their vote, it may or may not be for who they actually intended to vote.
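Because a human cannot read a QR code's target before scanning it, any client that follows one should validate the decoded URL before loading it. A minimal sketch of that check is below; the allowlisted hostname is hypothetical, and a production system would need far stricter validation than this.

```python
from urllib.parse import urlsplit

# Hypothetical allowlist of official ballot hosts -- illustrative only.
TRUSTED_VOTE_HOSTS = {"vote.example.gov"}

def is_trusted_vote_url(url: str) -> bool:
    """Accept a scanned QR target only if it uses HTTPS and points at an
    exact, pre-approved hostname. Lookalike and homoglyph domains that
    merely contain the real name are rejected by the exact match."""
    parts = urlsplit(url)
    return parts.scheme == "https" and parts.hostname in TRUSTED_VOTE_HOSTS
```

The exact-match rule is the important design choice: a substring or "ends with" check would wave through attacker domains like `vote.example.gov.attacker.io`.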
Ultimately, the most important thing we can do this election is vote — vote by mail, vote in person, vote early, and vote in a way that you can be sure your vote will be counted for the candidate for whom you intended to vote. However, the idea that we’ll be able to vote safely via our mobile devices — at least this time around — is nothing but a pipe dream. Until we work out the security and privacy concerns associated with mobile voting, we’re going to have to stick to traditional methods.
Army researchers have been awarded a patent for inventing a practical method for Army wireless devices to covertly authenticate and communicate.
Authentication is one of the core pillars of wireless communications security, along with secrecy and privacy, and its value in a military setting is readily apparent and mandatory: receivers must verify that an incoming transmission did indeed come from an ally and not a malicious adversary.
As consumers’ concerns about their digital privacy continue to grow and who is responsible for guarding it remains unclear, new research conducted by Ponemon Institute reveals a lack of empowerment consumers feel when it comes to their data privacy.
Address privacy risks
The research points to a privacy gap between the consumer data protection individuals want and what industry and regulators provide. While the majority of consumers want their data protected, they’re still waiting on, or expecting, the federal government or industry to provide this protection.
For instance, 60% of consumers believe government regulation should help address the privacy risks facing consumers today, of which 34% say government regulation is needed to protect personal privacy and 26% believe a hybrid option (regulation and self-regulation) should be pursued.
“This research revealed much of the tension surrounding digital privacy today. Based on my polling experience, these findings make a compelling case for the important role identity protection products and services play in protecting consumers’ privacy. The study shows that many consumers are alarmed by the uptick in privacy scandals and want to protect their information, but don’t know how to and feel like they lack the right tools to do so,” said Dr. Larry Ponemon, chairman of Ponemon Institute.
Interestingly, the study found that 64% of consumers say they find it “creepy” when they receive online ads that are relevant to them but cannot be explained by their online search behavior or publicly available information. This confirms that many consumers experience this phenomenon and are alarmed by it. In addition, 73% of consumers say advertisers should allow them to “opt out” of receiving ads on any specific topic at any time.
This research also reveals a lack of empowerment that consumers feel in their ability to protect their privacy. While 74% of consumers say they have no control over the personal information that is collected on them, they are not taking action to limit the data they provide when using online services. In fact, 54% of consumers say they do not consciously limit what personal data they are providing. This lack of empowerment can have devastating effects on consumers’ privacy if it goes unchecked.
Other key findings
Consumer concern is increasing: 68% of consumers are more concerned about the privacy and security of their personal information than they were three years ago. Three-fourths of consumers (75%) in the over 55 age group have become more concerned about their privacy over the past three years.
Search engines least trusted: 92% of consumers believe search engines are sharing and selling their private data, 78% believe social media platforms are and 63% of consumers think shopping sites are as well. Similarly, 86% of respondents say they are very concerned when using Facebook and Google and 66% of respondents say they are very concerned when shopping online or using online services.
Seniors against advertising tracking: 78% of older consumers say advertisers should not be able to serve ads based on their conversations and messaging.
Consumers have little hope in websites’ ad blocking: Only 33% of consumers expect websites to have an ad blocker that stops tracking and only 17% of consumers say they expect websites to limit the collection and sharing of personal information.
Split responsibility: 54% of consumers say online service providers should be accountable for protecting the privacy of consumers, while 45% say they themselves should assume responsibility.
How consumers protect themselves: 65% of consumers are using some type of privacy protection provided by their devices. Of these, 25% are setting a more restrictive data sharing setting, 21% are using both additional authentication controls and a more restrictive data sharing setting and 19% are using additional authentication controls.
Half of consumers are aware of the availability of protections: Of the protections available to consumers to protect their personal information, 52% cite opting out of data collection, and 48% cite opting out of data sharing and encryption of personal information.
With fewer than 100 days left until Election Day, a new report from Area 1 Security reveals that states are still in widely varying stages of cybersecurity readiness.
Key findings include:
- The majority (53.24 percent) of state and local election administrators have only rudimentary or non-standard technologies to protect themselves from phishing
- Fewer than 3 out of 10 (28.14 percent) election administrators have basic controls to prevent phishing
- Fewer than 2 out of 10 (18.61 percent) election administrators have implemented advanced anti-phishing cybersecurity controls
- A surprising 5.42 percent of election administrators rely on personal email accounts or technologies designed for personal email (such as Yahoo!, Hotmail, AOL or others), to conduct their duties
- A number of election administrators independently manage their own custom email infrastructure, including using versions of Exim known to be targeted by cyber actors linked to the Russian military that interfered in prior U.S. elections.
Ninety-five percent of cybersecurity damages worldwide begin with phishing, and phishing campaigns come in all shapes and sizes. The majority of phishing campaigns begin with an innocuous and authentic email that individuals are unable to recognize as malicious. Consequently, the quality of email protection used by organizations and individuals has an inordinate bearing on their overall cybersecurity posture.
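Because most campaigns start with a message that looks authentic, email gateways lean on sender-authentication results recorded by the receiving server (the Authentication-Results header defined in RFC 8601). The sketch below is a naive, stdlib-only filter of that kind; real anti-phishing protection does far more, and the sample headers are invented for illustration.

```python
from email import message_from_string

def passes_auth_checks(raw_email: str) -> bool:
    """Return True only if the receiving server recorded passing SPF and
    DKIM results in the Authentication-Results header (RFC 8601).
    A missing header counts as a failure."""
    msg = message_from_string(raw_email)
    results = msg.get("Authentication-Results", "").lower()
    return "spf=pass" in results and "dkim=pass" in results
```

A message failing either check isn't necessarily malicious, but treating unauthenticated mail with suspicion is the baseline control the report finds most administrators lack.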
“Our elections are vital. They need to be resilient against whatever crisis the moment throws at us — and that requires resources and planning,” said Oren J. Falkowitz, co-founder of Area 1 Security. “However, most state and local election administrators are not very close to ensuring a safe election. This challenge is going to be exacerbated the longer it takes for them to get the resources and expertise needed to make changes.”
Security recommendations for state and local election administrators
Ending use of Exim email servers: Given the government’s guidance to update Exim to mitigate CVE-2019-10149 and other vulnerabilities including, but not limited to, CVE-2019-15846 and CVE-2019-16928, election administrators are urged to cease use of Exim; upgrading alone does not remediate systems that have already been exploited. Prior Russian cyber activities directed toward U.S. elections make continued use of Exim ill-advised. Those who must keep running Exim should update to the latest version, since any version prior to 4.93 is vulnerable to the disclosed flaws. Administrators can update the Exim Mail Transfer Agent software through their Linux distribution’s package manager or by downloading the latest version.
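That version check can be automated against a server’s SMTP greeting. Below is a minimal sketch, assuming the server advertises its Exim version in its banner (hardened configurations often suppress this); the hostname shown is hypothetical:

```python
import re

# Any Exim release prior to 4.93 is exposed to the disclosed CVEs above.
MIN_SAFE_VERSION = (4, 93)

def exim_banner_vulnerable(banner):
    """Parse an SMTP greeting banner; return True if it advertises an
    Exim version older than 4.93, False if it is 4.93 or newer, and
    None if the banner does not identify an Exim version at all."""
    match = re.search(r"\bExim (\d+)\.(\d+)", banner)
    if match is None:
        return None
    version = (int(match.group(1)), int(match.group(2)))
    return version < MIN_SAFE_VERSION

print(exim_banner_vulnerable("220 mail.example.gov ESMTP Exim 4.92"))  # True
print(exim_banner_vulnerable("220 mail.example.gov ESMTP Exim 4.94"))  # False
```

In practice the banner would be read over a TCP connection to port 25; a banner that hides the version (returning None here) would need an authenticated package-manager check instead.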
Transitioning to cloud email infrastructure: Running custom email infrastructure requires network administrators to be perfect every single day. Instead, Area 1 Security recommends cloud email infrastructure such as Google’s G Suite or Microsoft’s Office 365, in combination with a cloud email security solution.
Ending use of personal email technologies for election duties: Under no circumstances should election administrators use personal email for the conduct or administration of elections.
The U.S. Department of Energy (DOE) unveiled a report that lays out a blueprint strategy for the development of a national quantum internet. It provides a pathway toward fulfilling the goals of the National Quantum Initiative Act, which was signed into law by President Trump in December 2018.
Around the world, consensus is building that a system to communicate using quantum mechanics represents one of the most important technological frontiers of the 21st century. Scientists now believe that the construction of a prototype will be within reach over the next decade.
In February of this year, DOE National Laboratories, universities, and industry met to develop the blueprint strategy of a national quantum internet, laying out the essential research to be accomplished, describing the engineering and design barriers, and setting near-term goals.
“The Department of Energy is proud to play an instrumental role in the development of the national quantum internet,” said U.S. Secretary of Energy Dan Brouillette. “By constructing this new and emerging technology, the United States continues with its commitment to maintain and expand our quantum capabilities.”
DOE’s 17 National Laboratories will serve as the backbone of the coming quantum internet, which will rely on the laws of quantum mechanics to control and transmit information more securely than ever before. Currently in its initial stages of development, the quantum internet could become a secure communications network and have a profound impact on areas critical to science, industry, and national security.
Crucial steps toward building such an internet are already underway in the Chicago region, which has become one of the leading global hubs for quantum research. In February of this year, scientists from DOE’s Argonne National Laboratory in Lemont, Illinois, and the University of Chicago entangled photons across a 52-mile “quantum loop” in the Chicago suburbs, successfully establishing one of the longest land-based quantum networks in the nation. That network will soon be connected to DOE’s Fermilab in Batavia, Illinois, establishing a three-node, 80-mile testbed.
“Decades from now, when we look back to the beginnings of the quantum internet, we’ll be able to say that the original nexus points were here in Chicago—at Fermilab, Argonne, and the University of Chicago,” said Nigel Lockyer, director of Fermilab. “As part of an existing scientific ecosystem, the DOE National Laboratories are in the best position to facilitate this integration.”
A range of unique abilities
One of the hallmarks of quantum transmissions is that they are exceedingly difficult to eavesdrop on as information passes between locations. Scientists plan to use that trait to make virtually unhackable networks. Early adopters could include industries such as banking and health services, with applications for national security and aircraft communications. Eventually, the use of quantum networking technology in mobile phones could have broad impacts on the lives of individuals around the world.
Scientists are also exploring how the quantum internet could expedite the exchange of vast amounts of data. If the components can be combined and scaled, society may be at the cusp of a breakthrough in data communication, according to the report.
Finally, creating networks of ultra-sensitive quantum sensors could allow engineers to better monitor and predict earthquakes—a longtime and elusive goal—or to search for underground deposits of oil, gas, or minerals. Such sensors could also have applications in health care and imaging.
A multi-lab, multi-institution effort
Creating a full-fledged prototype of a quantum internet will require intense coordination among U.S. Federal agencies—including DOE, the National Science Foundation, the Department of Defense, the National Institute of Standards and Technology, the National Security Agency, and NASA—along with National Laboratories, academic institutions, and industry.
The report lays out crucial research objectives, including building and then integrating quantum networking devices, maintaining and routing quantum information, and correcting errors. Then, to put the nationwide network into place, there are four key milestones: verify secure quantum protocols over existing fiber networks, send entangled information across campuses or cities, expand the networks between cities, and finally expand between states, using quantum “repeaters” to extend signals over long distances.
“The foundation of quantum networks rests on our ability to precisely synthesize and manipulate matter at the atomic scale, including the control of single photons,” said David Awschalom, Liew Family Professor in Molecular Engineering at the University of Chicago’s Pritzker School of Molecular Engineering, senior scientist at Argonne National Laboratory, and director of the Chicago Quantum Exchange. “Our National Laboratories house world-class facilities to image materials with subatomic resolution and state-of-the-art supercomputers to model their behavior. These powerful resources are critical to accelerating progress in quantum information science and engineering, and to leading this rapidly evolving field in collaboration with academic and corporate partners.”
Other National Laboratories are also driving advances in quantum networking and related technologies. For example, Stony Brook University and Brookhaven National Laboratory, working with the DOE’s Energy Sciences Network headquartered at Lawrence Berkeley National Laboratory, have established an 80-mile quantum network testbed and are actively expanding it in New York State and at Oak Ridge and Los Alamos National Laboratories. Other research groups are focused on developing a quantum cryptography system with highly secured information.
Security researchers have analyzed contact-tracing mobile apps from around the globe and found that their developers have generally failed to implement suitable security and privacy protections.
The results of the analysis
In an effort to stem the spread of COVID-19, governments are aiming to provide their citizenry with contact-tracing mobile apps. But, whether they are built by a government entity or by third-party developers contracted to do the job, security has largely taken a backseat to speed.
Guardsquare researchers have unpacked and decompiled 17 Android contact-tracing apps from 17 countries to see whether developers implemented name obfuscation as well as string, asset/resource and class encryption. They’ve also checked to see whether the apps will run on rooted devices or emulators (virtual devices).
- Only 41% of the apps have root detection
- Only 41% include some level of name obfuscation
- Only 29% include string encryption
- Only 18% include emulator detection
- Only 6% include asset / resource encryption
- Only 6% include class encryption.
The percentages vary according to region. Grant Goodes, Chief Scientist at Guardsquare, was careful to note that they have not checked all existing contact-tracing apps, but that the sample they did test “provides a window into the security flaws most contact tracing apps contain.”
Security promotes trust
These protections, which the researchers looked for in each app, should make it difficult for malicious actors to tamper with and “trojanize” the legitimate apps.
Name obfuscation, for example, hides identifiers in the application’s code to prevent hackers from reverse engineering and analyzing source code. String encryption prevents hackers from extracting API keys and cryptographic keys included in the source code, which could be used by attackers to decrypt sensitive data (for identity theft, blackmailing, and other purposes), or to spoof communications to the server (to disrupt the contact-tracing service).
Asset/resource encryption should prevent hackers from accessing/reusing files that the Android OS uses to render the look and feel of the application (e.g., screen-layouts, internationalized messages, etc.) and custom/low-level files that the application may need for its own purposes.
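To illustrate why unencrypted strings are risky: a decompiled app with no string protection exposes API keys verbatim to a simple strings dump, while even a trivial obfuscation scheme hides them. The sketch below uses XOR purely as an illustration; real string-encryption tooling of the kind Guardsquare checked for uses proper ciphers, and the key and secret shown are made up:

```python
def xor_obfuscate(plaintext, key):
    """Transform a string so it does not appear verbatim in the binary.
    Illustrative only: production string encryption uses real ciphers."""
    data = plaintext.encode("utf-8")
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def xor_deobfuscate(blob, key):
    # XOR is its own inverse, so the same transform recovers the string.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(blob)).decode("utf-8")

KEY = b"\x5a\xc3\x1f"
secret = xor_obfuscate("api_key=AKIA-EXAMPLE", KEY)
assert b"AKIA" not in secret                 # no longer visible to a strings dump
assert xor_deobfuscate(secret, KEY) == "api_key=AKIA-EXAMPLE"
```

The point is not the cipher but the asymmetry it creates: an attacker browsing decompiled output no longer gets the key for free, and must instead reverse the decoding routine.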
These security and privacy protections are important for every mobile app, not just contact-tracing apps, Goodes noted, but they are particularly salient for the latter, since some of them are mandatory for citizens to use and since their efficacy hinges on widespread adoption.
“When security flaws are publicized, the whole app is suddenly distrusted and its utility wanes as users drop off. In the case of countries who build their own apps, this can erode citizen trust in the government as well, which further increases public health risks,” he added.
The COVID-19 pandemic has forced public health, supply chain, transportation, government, economic and many other entities to interact in real time. One of the challenges in large systems interacting in this way is that even tiny errors in one system can cause devastating effects across the entire system chain.
Now, Purdue University innovators have come up with a possible solution: a set of patented algorithms that predict, identify, diagnose and prevent abnormalities in large and complex systems.
“It has been proven again and again that large and complex systems can and will fail and cause catastrophic impact,” said Shimon Y. Nof, a Purdue professor of industrial engineering and director of Purdue’s PRISM Center.
“Our technology digests the large amount of data within and across systems and determines the sequence of resolving interconnected issues to minimize damage, prevent the maximum number of errors and conflicts from occurring, and achieve system objectives through interaction with decision makers and experts.”
Applying systems science and data science to solve problems
Nof said this technology would be helpful for smart grids, healthcare systems, supply chains, transportation systems and other distributed systems that deal with ubiquitous abnormalities and exceptions, and are vulnerable to cascading or large-scale failures.
This technology integrates constraint modeling, network science, adaptive algorithms and state-of-the-art decision support systems.
“Our algorithms and solution apply systems science and data science to solve problems that encompass time, space and disciplines, which is the core of industrial engineering,” said Xin Chen, a former graduate student in Nof’s lab who helped create the technology.
Nof said the novelty of the technology lies in three main areas. First, analytical and data mining tools extract the underlying network structures of a complex system and determine its unique features. Second, a robust set of algorithms is analyzed against the objectives for system performance and the structures and features of fault networks in the system.
Finally, algorithms with specific characteristics are applied to manage errors and conflicts to achieve desired system performance.
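The patented algorithms themselves are not public, but the core idea of tracing how one error cascades through interconnected systems can be sketched with a plain breadth-first traversal over a dependency graph. The graph below is hypothetical, and the real technology’s constraint modeling and adaptive algorithms are far richer:

```python
from collections import deque

# Hypothetical dependency graph: an edge A -> B means a failure in A
# can propagate to B. A real system would mine this structure from data.
DEPENDENCIES = {
    "power_grid": ["hospital", "traffic_control"],
    "hospital": ["supply_chain"],
    "traffic_control": [],
    "supply_chain": ["hospital"],   # cycle: mutually dependent systems
}

def impacted_by(failure, graph):
    """Return every component a single failure can cascade into."""
    impacted, queue = set(), deque([failure])
    while queue:
        node = queue.popleft()
        for downstream in graph.get(node, []):
            if downstream not in impacted and downstream != failure:
                impacted.add(downstream)
                queue.append(downstream)
    return impacted

print(sorted(impacted_by("power_grid", DEPENDENCIES)))
# ['hospital', 'supply_chain', 'traffic_control']
```

Even this toy version shows why sequencing matters: resolving the failure closest to the root of the cascade removes the most downstream impact, which is the kind of prioritization the Purdue algorithms automate.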
As the power of IoT devices increases, security has failed to follow suit. This is a direct result of the race to the bottom on price as vendors rush to network-enable all manner of devices.
But small steps can greatly increase the overall security of IoT.
A better IoT security story has to be one of the most urgent priorities in all of technology. That’s because IoT is one of the industry’s most compelling opportunities and squandering it due to security challenges would be a massive blunder – especially since those challenges are surmountable.
There’s a good reason IoT has become an ever-present buzzword: it has the potential to change many aspects of life and is brimming with opportunities for exciting innovation. This is especially true on the industrial side, where the technology is fueling advances in digital factories, power management, supply-chain optimization, the connected car, and robotics.
Indeed, many companies are moving beyond piloting and prototyping IoT projects to real-world applications. Many are incorporating machine learning and other artificial intelligence (AI) technologies to gain insights from the colossal amounts of data all these sensors and other devices produce.
Yet lack of security continues to threaten the progress of this game-changing technology.
Various research has shown that security is the number one concern for enterprise IoT customers and that they would move faster on IoT programs if their concerns were allayed.
More than three years have passed since the IoT security threat crashed into public view with the massive denial-of-service attack on a major DNS provider which caused outages of some of the web’s most popular sites. The attack was instigated by a botnet of around 145,000 IoT devices – mostly webcams and DVRs – compromised by Mirai malware. In the intervening years, IoT botnets have grown in size and so has the number of attacks fueled by them. But IoT has other troubling security issues, as demonstrated by the rash of IoT locks with glaring security holes in the past year.
The Mirai incident should have served as a rallying point for concerted industry action to address IoT security, but little progress has been made.
What’s taking so long?
A primary IoT selling point – the advent of inexpensive sensors and devices – is also a thorn in IoT security’s side. Many manufacturers are pumping out these things without properly securing them for the internet. Many companies, simply looking for the cheapest deals to keep IoT project costs down, buy them without amply considering their security readiness.
Too many devices are being shipped to customers with no password, or with a standard, hard-coded default password that can easily be discovered and exploited. (The now-400,000-strong Mirai botnet started from a single list of 60 usernames and passwords.)
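Vendors and auditors can catch the worst of this with a simple check against known factory credentials. A minimal sketch follows; the credential pairs below are illustrative stand-ins, not entries from the actual Mirai list:

```python
# A few illustrative factory-default pairs; Mirai's real list held 60.
DEFAULT_CREDENTIALS = {
    ("admin", "admin"),
    ("root", "12345"),
    ("admin", "password"),
    ("root", ""),                # shipped with no password at all
}

def credentials_are_defaults(username, password):
    """Flag a device still using vendor-default (or empty) credentials."""
    return (username, password) in DEFAULT_CREDENTIALS or password == ""

assert credentials_are_defaults("admin", "admin")
assert credentials_are_defaults("user", "")          # empty password flagged
assert not credentials_are_defaults("admin", "c0rrect-h0rse-battery")
```

A setup flow that refuses to complete until such a check passes (as California’s law effectively requires) closes off exactly the credential-scanning technique Mirai relied on.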
Beyond passwords, many devices simply are not designed with security in mind at both the software and hardware levels. For example, configuration bit streams should be encrypted and protected, but often aren’t.
Another issue is a lack of software updates. When an attack or vulnerability is discovered, updates are not always rolled out in a timely manner – and sometimes not at all.
While IoT security guidelines exist – for example, the Secure By Design code issued by the U.K. in 2018 – they’re seldom enforced. Contrast that with the payment card industry (PCI), which polices itself with rigid security standards and levies penalties on member companies that fail to follow them.
The IoT segment needs to get serious
The IoT security issue has reached government awareness. In the U.S., a Senate bill introduced in 2019 and similar legislation in the House would require the National Institute of Standards and Technology (NIST) and the Office of Management and Budget (OMB) to take steps to increase the security of IoT devices.
In California, a law took effect on Jan. 1, 2020 requiring that all connected devices sold in the state have “a reasonable security feature or features” and banning shared default passwords.
While this government action is a positive sign, industry typically moves faster than government and IoT manufacturers themselves should take greater responsibility for improved security.
A good start might be an IoT security equivalent to the Energy Star certification for energy efficiency of appliances, electronics, HVAC systems, etc. Energy Star is actually a U.S.-government-backed program, but IoT is moving so fast that I think the industry could get this done faster than waiting for the public sector.
It is up to the industry to deal with the security challenge once and for all, or face the prospect that IoT will never achieve its enormous promise – and that all of us will pay the price for years to come from vulnerable devices in the field.
May 25th is the second anniversary of the General Data Protection Regulation (GDPR) and data around compliance with the regulation shows a significant disconnect between perception and reality.
Only 28% of firms comply with GDPR; yet before GDPR kicked in, 78% of companies felt they would be ready to fulfill its data requirements. While their confidence was high, when push came to shove, complying with GDPR and GDPR-like laws – such as CCPA and PDPA – was not as easy as initially thought.
Data privacy efforts
While crucial, facing this growing set of regulations is a massive, expensive undertaking. A company found out of compliance with GDPR is looking at fines of up to 4% of annual global turnover. To put that in perspective, the 28 major fines handed down since GDPR took effect in May 2018 total $464 million – a hefty sum for sure.
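To make the 4%-of-turnover ceiling concrete, here is the arithmetic for a hypothetical firm (the revenue figure is made up for illustration):

```python
def max_gdpr_fine(annual_global_turnover, rate=0.04):
    """Upper bound of a GDPR administrative fine: up to 4% of annual
    global turnover. (GDPR actually specifies the greater of 4% or
    EUR 20 million; the flat floor is omitted here for simplicity.)"""
    return annual_global_turnover * rate

# A hypothetical firm with $2.5 billion in annual global turnover:
print(f"${max_gdpr_fine(2_500_000_000):,.0f}")  # $100,000,000
```

A potential nine-figure exposure for a mid-size multinational is why the compliance spend discussed below, however large, is usually the cheaper side of the ledger.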
Additionally, there is a cost to comply – something nearly every company faces today if it conducts business on a global scale. For CCPA alone, the initial cost of getting California businesses into compliance is estimated at around $55 billion, according to the State of California DoJ. That’s just to comply with one regulation.
Here’s the reality: compliance is incredibly expensive, but not quite as expensive as being caught noncompliant. This double-edged sword is unfortunate, but it is the world we live in. So, how should companies navigate in today’s world to ensure the privacy rights of their customers and teams are protected without missing the mark on any one of these regulatory requirements?
Baby steps to compliance
A number of companies are approaching these various privacy regulations one-by-one. However, taking a separate approach for each one of these regulations is not only extremely laborious and taxing on a business, it’s unnecessary.
Try taking a step back and identifying the common denominator across all of the regulations. You’ll find that in the simplest form, it boils down to knowing what data you actually have and putting the right controls in place to ensure you can properly safeguard it. Implementing this common denominator approach can free up a lot of time, energy and resources dedicated to data privacy efforts across the board.
Consider walking through these steps when getting started: First, identify the sensitive data being housed within systems, databases and file stores (i.e. Box, Sharepoint, etc.). Next, identify who has access to what so that you can ensure that only the right people who ‘should’ have access do. This is crucial to protecting customer information. Lastly, implement controls to keep employee access updated. Using policies to keep access consistent is important, but it’s crucial that they are updated and stay current with any organizational changes.
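The first of those steps, identifying sensitive data, can begin with something as simple as pattern-matching over stored text. A minimal sketch follows; the regexes, categories, and sample record are illustrative, and real data-discovery tooling is far more thorough:

```python
import re

# Illustrative patterns; production discovery covers many more data types.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "ssn_like": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def find_pii(text):
    """Return {category: [matches]} for every pattern that hits."""
    hits = {}
    for category, pattern in PII_PATTERNS.items():
        found = pattern.findall(text)
        if found:
            hits[category] = found
    return hits

record = "Contact jane.doe@example.com, SSN 123-45-6789, re: invoice 42."
print(find_pii(record))
# {'email': ['jane.doe@example.com'], 'ssn_like': ['123-45-6789']}
```

An inventory built this way feeds directly into the second and third steps: once you know which stores hold PII, you can scope access reviews and retention controls to them instead of to everything.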
Staying ahead of the game
The only way to stay ahead of the numerous privacy regulations is to take a general approach to privacy. We’ve already seen extensions of existing regulations, like the California Privacy Rights and Enforcement Act of 2020 – ‘CCPA 2.0,’ as some people call it – which would amend the CCPA. If this legislation takes effect, it would create a whole new set of privacy rights that align well with GDPR, putting greater safeguards around sensitive personal information. It’s my opinion that, since the world has begun recognizing that privacy rights are more valuable than ever, we’ll continue to see amendments piggybacking on existing regulations across the globe.
While many of us have essentially thrown in the towel, knowing that our own personal data is already out there on the dark web, it doesn’t mean we can all sit back and let this continue to happen. Doing so would be to the detriment of our customers’ privacy, cost-prohibitive and ineffective.
So, what are the key takeaways? Make your data privacy efforts just as central as the rest of your security strategy. Ensure it is holistic and takes into account all facets of, and overlaps between, the various regulations we’re all required to comply with today. Only then do you stand a chance at protecting your customers’ and employees’ data and dodging another news headline and a tally on the GDPR fine count.