The race is on to build the world’s first reliable and truly useful quantum computer, and the finish line is closer than you might think – we might even reach it this decade. It’s an exciting prospect, particularly as these super-powerful machines offer huge potential to almost every industry, from drug development to electric-vehicle battery design.
But quantum computers also pose a serious security problem. By running algorithms such as Shor's, they will be able to break the public-key encryption standards widely relied on today, threatening the security of all digital information and communication.
While it’s tempting to brush it under the carpet as “tomorrow’s problem”, the reality of the situation is much more urgent. That’s because quantum computers don’t just pose a threat to tomorrow’s sensitive information: they’ll be able to decrypt data that has been encrypted in the past, that’s being encrypted in the present, and that will be encrypted in the future (if quantum-resistant algorithms are not used).
It’s why the NSA warned, as early as 2015, that we “must act now” to defuse the threat, and why the US National Institute of Standards and Technology (NIST) is racing to standardize new post-quantum cryptographic solutions, so businesses can get a trusted safety net in place before the threat materializes.
From aviation to pharma: The industries at risk
The harsh reality is that no one is immune to the quantum threat. Whether it’s a security service, pharmaceutical company or nuclear power station, any organization holding sensitive information or intellectual property that needs to be protected in the long term has to take the issue seriously.
The stakes are high. For governments, a quantum attack could mean a hostile state gains access to sensitive information, compromising state security or revealing secrets that undermine political stability. For pharmaceutical companies, on the other hand, a quantum computer could allow competitors to gain access to valuable intellectual property, hijacking a drug that has been in costly development for years. (As we’re seeing in the race for a COVID-19 vaccine, this IP can sometimes have significant geopolitical importance.)
Hardware and software are also vulnerable to attack. Within an industry like aviation, a quantum-empowered hacker would have the ability to forge the signature of a software update, push that update to a specific engine part, and then use that to alter the operations of the aircraft. Medical devices like pacemakers would be vulnerable to the same kind of attack, as would connected cars whose software is regularly updated from the cloud.
Though the list of scenarios goes on, the good news is that companies can ready themselves for the quantum threat using technologies available today. Here’s how:
1. Start the conversation early
Begin by promoting quantum literacy within your business to ensure that executive teams understand the severity and immediacy of the security threat. Faced with competing priorities, they may otherwise struggle to understand why this issue deserves immediate attention and investment.
It’s your job to make sure they understand what they’re up against. Identify specific risks that could materialize for your business and industry – what would a quantum attack look like, and what consequences would you be facing if sensitive information were to be decrypted?
Paint a vivid picture of the possible scenarios and calculate the cost that each one would have for your business, so everyone knows what’s at stake. By doing so, you’ll start to build a compelling business case for upgrading your organization’s information security, rather than assuming that this will be immediately obvious.
2. Work out what you’ve got and what you still need
Do a full audit of every place within your business where you are using cryptography, and make sure you understand why that is. Surprisingly, many companies have no idea of all the encryption they currently have in place or why, because the layers of protection have been built up in a siloed fashion over many years.
What cryptographic standards are you relying on today? What data are you protecting, and where? Try to pinpoint where you might be vulnerable. If you’re storing sensitive information in cloud-based collaboration software, for example, that may rely on public key cryptography, so won’t be quantum-secure.
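An audit like this can be partly automated with a simple scan for legacy public-key primitives in source and configuration files. A minimal sketch (the file extensions and search patterns here are illustrative; extend them for your own stack):

```python
import re
from pathlib import Path

# Hypothetical patterns for quantum-vulnerable public-key primitives.
LEGACY_PATTERNS = {
    "RSA": re.compile(r"\bRSA\b"),
    "ECDSA/ECDH": re.compile(r"\bECD(SA|H)\b"),
    "Diffie-Hellman": re.compile(r"\bDiffie[- ]?Hellman\b", re.IGNORECASE),
}

def find_crypto_usage(root: str):
    """Return (path, line number, primitive) for every suspected hit under root."""
    hits = []
    for path in Path(root).rglob("*"):
        if path.suffix not in {".py", ".java", ".go", ".cs", ".conf", ".yaml"}:
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for name, pattern in LEGACY_PATTERNS.items():
            for lineno, line in enumerate(text.splitlines(), 1):
                if pattern.search(line):
                    hits.append((str(path), lineno, name))
    return hits
```

A scan like this only surfaces candidates for review; it is a starting point for the inventory, not a substitute for it.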
As part of this audit, don’t forget to identify the places where data is in transit. However well your data is protected, it’s vulnerable when moving from one place to another. Make sure you understand how data is moving within your business – where from and to – so you can create a plan that addresses these weak points.
It’s also vital that you think about what industry regulations or standards you need to comply with, and where these come into play across the areas of your business. For industries like healthcare or finance, for example, there’s an added layer of regulation when it comes to information security, while privacy laws like the GDPR and CCPA will apply if you hold personal information relating to European or Californian citizens.
3. Build a long-term strategy for enhanced security
Once you’ve got a full view of what sensitive data you hold, you can start planning your migration to a quantum-ready architecture. How flexible is your current security infrastructure? How crypto-agile are your cryptography solutions? In order to migrate to new technology, do you need to rewrite everything, or could you make some straightforward switches?
Post-quantum encryption standards will be finalized by NIST in the next year and a half, but the process is already well underway and the direction of travel is becoming clearer. Now that finalist algorithms have been announced, businesses don’t need to wait to become quantum-secure: they can design their security infrastructure today to work with any of the shortlisted approaches NIST is considering for standardization.
Deploying a hybrid solution – pairing existing solutions with one of the post-quantum schemes named as a NIST finalist – can be a good way to build resilience and flexibility into your security architecture. By doing this, you’ll be able to comply with whichever new industry standards are announced and remain fully protected against present and future threats in the meantime.
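To illustrate the hybrid idea: the shared secrets from a classical key exchange and a post-quantum KEM can be combined so that the session key is only recoverable by an attacker who breaks both. A minimal sketch, using placeholder random secrets rather than a real ECDH or post-quantum KEM implementation:

```python
import hashlib
import hmac
import os

def hybrid_session_key(classical_secret: bytes, pq_secret: bytes) -> bytes:
    """Derive one session key from both shared secrets (HKDF-extract style).

    The scheme stays secure if either the classical or the post-quantum
    component later turns out to be weak, since an attacker needs both
    secrets to reproduce the key.
    """
    salt = b"hybrid-kem-demo"  # fixed, public context label (illustrative)
    return hmac.new(salt, classical_secret + pq_secret, hashlib.sha256).digest()

# Stand-ins for the outputs of a real ECDH exchange and a post-quantum
# KEM such as one of the NIST finalists; both are hypothetical here.
classical = os.urandom(32)
post_quantum = os.urandom(32)
session_key = hybrid_session_key(classical, post_quantum)
```

A production deployment would use a standardized KDF and authenticated key-exchange protocol; the point of the sketch is only the "combine both secrets" structure.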
Whatever you decide, remember that migration can take time – especially if your business is already built on a complex infrastructure that will be hard to unpick and rebuild. Put a solid plan in place before you begin and consider partnering with an expert in the field to speed up the process.
A risk we can’t see
Just because a risk hasn’t yet materialized, doesn’t mean it isn’t worth preparing for (a mindset that could have come in handy for the coronavirus pandemic, all things considered…).
The quantum threat is serious, and it’s urgent. The good thing is that we already have all the ingredients to get a safety net in place, and thanks to strong mathematical foundations, we can be confident in the knowledge that the algorithms being standardized by NIST will protect businesses from even the most powerful computers.
The next step? Making sure this cutting-edge technology gets out of the lab and into the hands of the organizations who need it most.
Connected devices are becoming more ingrained in our daily lives and the burgeoning IoT market is expected to grow to 41.6 billion devices by 2025. As a result of this rapid growth and adoption at the consumer and commercial level, hackers are infiltrating these devices and mounting destructive hacks that put sensitive information and even lives at risk.
These attacks and potential dangers have kept security at top of mind for manufacturers, technology companies and government organizations, which ultimately led to the U.S. House of Representatives passing the IoT Cybersecurity Improvement Act of 2020.
The bill focuses on increasing the security of federal devices with standards provided by the National Institute of Standards and Technology (NIST), which will cover devices from development to the final product. The bill also requires the Department of Homeland Security to review the legislation every five years and revise it as necessary, keeping it up to date with the latest technology and any new standards that come along with it.
Although it is a step in the right direction to tighten security for federal devices, it only scratches the surface of what the IoT industry needs as a whole. However, as this bill is the first of its kind to be passed by the House, we need to consider how it will help shape the future of IoT security:
Better transparency throughout the device lifecycle
With a constant focus on innovation in the IoT industry, security is often overlooked in the rush to get products onto shelves. By the time devices reach the market, important details like vulnerabilities may not have been disclosed throughout the supply chain, leaving sensitive data exposed to exploitation. To date, many companies have been hesitant to publish the weak spots in their device security, keeping them under wraps to hold competitors and hackers at bay.
The bill, however, now requires contractors and subcontractors that develop and sell IoT products to the government to have a program in place for reporting vulnerabilities and their subsequent resolutions. This is key to increasing end-user transparency and will better inform the government of risks found in the supply chain, so the bill’s guidelines can be updated as needed.
For the future of securing connected devices, multiple stakeholders throughout the supply chain need to be held accountable for better visibility and security to guarantee adequate protection for end-users.
Public-private partnerships on the rise
Per this bill, for the development of the security guidelines, the government will need to consult with cybersecurity experts to align on industry standards and best practices for better IoT device protection.
Working with industry-led organizations can provide accurate insight and allow the government to see current loopholes to create standards for real-world application. Encouraging these public-private partnerships is essential to advancing security in a more holistic way and will ensure guidelines and standards aren’t created in a silo.
Shaping consumer security from a federally focused bill
The current bill only focuses on securing devices at the federal level, but because manufacturers and technology companies often work in both the commercial/government and consumer spaces, its requirements will naturally spill over into the consumer device market too. It’s not practical for a manufacturer to follow two separate sets of guidelines for the two categories of products, so the standards in place for government-contracted devices will likely be applied to all devices on the assembly line.
As the focus shifts to consumer safety after this bill, industry voices have raised the challenge that manufacturers may eventually have to test products against two bills – one with federal and one with consumer standards. The only way to remedy this is to establish global, adoptable and scalable standards across all industries, streamlining security and providing appropriate protection for every device category.
Universal standards – Are we there yet?
While this bill is a great start for the IoT industry and may serve as the catalyst for future IoT bills, there is still some room for improvement for the future of connected device security. In its current form, the bill does not explicitly define the guidelines for security, which can be frustrating and confusing for IoT device stakeholders who need to comply with them. With multiple government organizations and industry-led programs creating their own set of standards, the only way to truly propel this initiative forward is to harmonize and clearly define standards for universal adoption.
While the IoT bill signals momentum from the US government to prioritize IoT security, an international effort to establish global standards and protect connected devices must also be made, as the IoT knows no boundaries. Syncing these standards and enforcing them through trusted certification programs will hold manufacturers and tech companies accountable for security and provide transparency for all end-users on a global scale.
The IoT Cybersecurity Improvement Act of 2020 is a landmark accomplishment for the IoT industry but is only just the beginning. As the world grows more integrated through connected devices, security standards will need to evolve to keep up with the digital transformation occurring in nearly every industry.
With security remaining a key concern for device manufacturers, tech companies, consumers and government organizations, the need for global standards stays in focus – and it will likely take an act of Congress to make them a priority.
According to a recent study, only a minority of software developers are actually working in a software development company. This means that nowadays literally every company builds software in some form or another.
As a professional in the field of information security, it is your task to protect information, assets, and technologies. Obviously, the software built by or for your company that is collecting, transporting, storing, processing, and finally acting upon your company’s data, is of high interest. Secure development practices should be enforced early on and security must be tested during the software’s entire lifetime.
Within the (ISC)² common body of knowledge for CISSPs, software development security is listed as an individual domain. Several standards and practices covering security in the Software Development Lifecycle (SDLC) are available: ISO/IEC 27034:2011, ISO/IEC TR 15504, or NIST SP 800-64 Revision 2, to name a few.
All of the above ask for continuous assessment and control of artifacts on the source-code level, especially regarding coding standards and Common Weakness Enumerations (CWE), but only briefly mention static application security testing (SAST) as a possible way to address these issues. In the search for possible concrete tools, NIST provides SP 500-268 v1.1 “Source Code Security Analysis Tool Function Specification Version 1.1”.
In May 2019, NIST withdrew the aforementioned SP 800-64 Rev. 2, and NIST SP 500-268 was published over nine years ago. This seems symptomatic of an underlying issue: the standards cannot keep up with the rapid pace of development and change in the field.
A good example is the rise of the programming language Rust, which addresses a major source of security issues in the classically used C++ – namely memory management. Major players in the field such as Microsoft and Google saw great advantages and announced that they would shift future development toward Rust. Yet while the standards note that some development languages are superior to others, neither Rust nor the mechanisms it uses are mentioned.
In the field of Static Code Analysis, the information in NIST SP 500-268 is not wrong, but the paper simply does not mention advances in the field.
Let us briefly discuss two aspects: First, the wide use of open source software gave us insight into a vast quantity of source code changes and the reasoning behind them (security, performance, style). On top of that, we have seen increasing capacities of CPU power to process this data, accompanied by algorithmic improvements. Nowadays, we have a large lake of training data available. To use our company as an example, in order to train our underlying model for C++ alone, we are scanning changes in over 200,000 open source projects with millions of files containing rich history.
Secondly, in the past decade, we’ve witnessed tremendous advances in machine learning. We see tools like GPT-3 and their applications to source code being discussed widely. Classically, static source code analysis was the domain of Symbolic AI—facts and rules applied to source code. The realm of source code is perfectly suited to this approach, since software source code has a well-defined syntax and grammar. The downside is that these rules were developed by engineers, which limits the pace at which rules can be generated. The idea would be to automate rule construction by using machine learning.
Recently, we have seen research in machine learning being applied to source code. Again, let us use our company as an example: by mining the vast number of changes in open source, our system looks for patterns connected to security. It presents possible rules to an engineer together with cases found in the training set—both known and fixed, as well as unknown.
Also, the system supports parameters in the rules. Possible values for these parameters are collected by the system automatically. As a practical example, taint analysis follows incoming data to its use inside of the application to make sure the data is sanitized before usage. The system automatically learns possible sources, sanitization, and sink functions.
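The source/sanitizer/sink model behind taint analysis can be illustrated in a few lines. This is a toy runtime version of what a static analyzer reasons about; the function names are hypothetical, and a real tool learns them from code history rather than hard-coding them:

```python
# Minimal source -> sanitizer -> sink taint tracking.

class Tainted(str):
    """A string value that originated from an untrusted source."""

def source_http_param(raw: str) -> Tainted:
    # "Source": attacker-controlled input enters the program here.
    return Tainted(raw)

def sanitize_sql(value: str) -> str:
    # "Sanitizer": escaping strips the taint (str methods on a
    # Tainted value return a plain str).
    return value.replace("'", "''")

def sink_sql_query(fragment: str) -> str:
    # "Sink": refuse data that is still tainted.
    if isinstance(fragment, Tainted):
        raise ValueError("tainted data reached SQL sink unsanitized")
    return f"SELECT * FROM users WHERE name = '{fragment}'"
```

Calling `sink_sql_query(source_http_param("O'Brien"))` raises, while routing the value through `sanitize_sql` first succeeds – exactly the data-flow property a static taint analysis proves without running the code.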
Back to the NIST Special Papers: With the withdrawal of SP 800-64 Rev 2, users were pointed to NIST SP 800-160 Vol 1 for the time being until a new, updated white paper is published. This was at the end of May 2019. The nature of these papers is to only describe high-level best practices, list some examples, and stay rather vague in concrete implementation. Yet, the documents are the basis for reviews and audits. Given the importance of the field, it seems as if a major component is missing. It is also time to think about processes that would help us to keep up with the pace of technology.
NIST has launched a crowdsourcing challenge to spur new methods to ensure that important public safety data sets can be de-identified to protect individual privacy.
The Differential Privacy Temporal Map Challenge includes a series of contests that will award a total of up to $276,000 for differential privacy solutions for complex data sets that include information on both time and location.
Critical applications vulnerability
For critical applications such as emergency planning and epidemiology, public safety responders may need access to sensitive data, but sharing that data with external analysts can compromise individual privacy.
Even if data is anonymized, malicious parties may be able to link the anonymized records with third-party data and re-identify individuals. And, when data has both geographical and time information, the risk of re-identification increases significantly.
“Temporal map data, with its ability to track a person’s location over a period of time, is particularly helpful to public safety agencies when preparing for disaster response, firefighting and law enforcement tactics,” said Gary Howarth, NIST prize challenge manager.
“The goal of this challenge is to develop solutions that can protect the privacy of individual citizens and first responders when agencies need to share data.”
Differential privacy provides much stronger data protection than anonymity; it’s a provable mathematical guarantee that protects personally identifiable information (PII).
By fully de-identifying data sets containing PII, researchers can ensure data remains useful while limiting what can be learned about any individual in the data regardless of what third-party information is available.
The individual contests that make up the challenge will include a series of three “sprints” in which participants develop privacy algorithms and compete for prizes, as well as a scoring metrics development contest (A Better Meter Stick for Differential Privacy Contest) and a contest designed to improve the usability of the solvers’ source code (The Open Source and Development Contest).
The Better Meter Stick for Differential Privacy Contest will award a total prize purse of $29,000 for winning submissions that propose novel scoring metrics by which to assess the quality of differentially private algorithms on temporal map data.
The three Temporal Map Algorithms sprints will award a total prize purse of $147,000 over a series of three sprints to develop algorithms that preserve data utility of temporal and spatial map data sets while guaranteeing privacy.
The Open Source and Development Contest will award a total prize purse of $100,000 to teams leading in the sprints to increase their algorithm’s utility and usability for open source audiences.
The National Institute of Standards and Technology (NIST) has published a cybersecurity practice guide enterprises can use to recover from data integrity attacks, i.e., destructive malware and ransomware attacks, malicious insider activity or simply mistakes by employees that have resulted in the modification or destruction of company data (emails, employee records, financial records, and customer data).
About the guide
Ransomware is currently one of the most disruptive scourges affecting enterprises. While it would be ideal to detect the early warning signs of a ransomware attack to minimize its effects or prevent it altogether, there are still too many successful incursions that organizations must recover from.
Special Publication (SP) 1800-11, Data Integrity: Recovering from Ransomware and Other Destructive Events can help organizations to develop a strategy for recovering from an attack affecting data integrity (and to be able to trust that any recovered data is accurate, complete, and free of malware), recover from such an event while maintaining operations, and manage enterprise risk.
The goal is to monitor and detect data corruption in widely used as well as custom applications, and to identify what data was altered or corrupted, when, by whom, what the impact of the action was, and whether other events happened at the same time. Finally, organizations are advised on how to restore data to its last known good configuration and how to identify the correct backup version.
“Multiple systems need to work together to prevent, detect, notify, and recover from events that corrupt data. This project explores methods to effectively recover operating systems, databases, user files, applications, and software/system configurations. It also explores issues of auditing and reporting (user activity monitoring, file system monitoring, database monitoring, and rapid recovery solutions) to support recovery and investigations,” the authors added.
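The recovery workflow described here rests on knowing a last known-good state. A minimal sketch of such a baseline-and-diff check using SHA-256 hashes – a toy stand-in for the file system monitoring products referenced in the guide:

```python
import hashlib
from pathlib import Path

def snapshot(root: str) -> dict:
    """Record a SHA-256 baseline for every file under root."""
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in Path(root).rglob("*") if p.is_file()
    }

def integrity_diff(baseline: dict, current: dict):
    """Compare a fresh snapshot against the last known-good baseline."""
    changed = sorted(p for p in baseline if p in current and baseline[p] != current[p])
    deleted = sorted(p for p in baseline if p not in current)
    added = sorted(p for p in current if p not in baseline)
    return changed, deleted, added
```

Taking a `snapshot` on a clean system and diffing against it after an incident shows exactly which files a ransomware or insider event touched, which is the first step toward choosing the correct backup version to restore.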
The National Cybersecurity Center of Excellence (NCCoE) at NIST used specific commercially available and open-source components when creating a solution to address this cybersecurity challenge, but noted that each organization’s IT security experts should choose products that will best work for them by taking into consideration how they will integrate with the IT system infrastructure and tools already in use.
The NCCoE tested the setup against several test cases (ransomware attack, malware attack, user modifies a configuration file, administrator modifies a user’s file, database or database schema altered in error by an administrator or script). Additional materials can be found here.
Only 44% of healthcare providers, including hospital and health systems, conformed to protocols outlined by the NIST CSF – with scores in some cases trending backwards since 2017, CynergisTek reveals.
Healthcare providers and NIST CSF
Analysts examined nearly 300 assessments of provider facilities across the continuum, including hospitals, physician practices, ACOs and Business Associates.
The report also found that healthcare supply chain security is one of the lowest ranked areas for NIST CSF conformance. This is a critical weakness, given that COVID-19 demonstrated just how broken the healthcare supply chain really is with providers buying PPE from unvetted suppliers.
“We found healthcare organizations continue to enhance and improve their programs year-over-year. The problem is they are not investing fast enough relative to an innovative and well-resourced adversary,” said Caleb Barlow, CEO of CynergisTek.
“These issues, combined with the rapid onset of remote work, accelerated deployment of telemedicine and impending openness of EHRs and interoperability, have set us on a path where investments need to be made now to shore up America’s health system.
“However, the report isn’t all doom and gloom. Organizations that have invested in their programs and had regular risk assessments, devised a plan, addressed prioritized issues stemming from the assessments and leveraged proven strategies like hiring the right staff and evidence-based tools have seen significant improvements to their NIST CSF conformance scores.”
Bigger budgets don’t mean better security performance
The report revealed bigger healthcare institutions with bigger budgets didn’t necessarily perform better when it comes to security, and in some cases, performed worse than smaller organizations or those that invested less.
In some cases, this was a direct result of consolidation, where health systems connect newly acquired hospitals directly to their networks without first shoring up the hospitals’ security posture and conducting a compromise assessment.
“What our report has uncovered over recent years is that healthcare is still behind the curve on security. While healthcare’s focus on information security has increased over the last 15 years, investment is still lagging. In the age of remote working and an attack surface that has exponentially grown, simply maintaining a security status quo won’t cut it,” said David Finn, EVP of Strategic Innovation at CynergisTek.
“The good news is that issues emerging in our assessments are largely addressable. The bad news is that it is going to require investment in an industry still struggling with financial losses from COVID-19.”
Leading factors influencing performance include poor security planning, lack of organizational focus, inadequate reporting structures and funding, confusion around priorities, and understaffing.
Key strategies to bolster healthcare security and achieve success
Look under the hood at security and privacy amid mergers and acquisitions: For health systems planning to integrate new organizations into the fold through mergers and acquisitions, leadership should look under the hood and be more diligent when examining the organization’s security and privacy infrastructure, measures and performance.
It’s important to understand their books and revenue streams as well as their potential security risks and gaps to prevent these issues from becoming liabilities.
Make security an enterprise priority: While other sectors like finance and aerospace have treated security as an enterprise-level priority, healthcare must also make this kind of commitment.
Understanding how these risks tie to the bigger picture will help an organization that thinks it cannot afford to invest in privacy and information security risk management activities understand why making such an investment is crucial.
Hospitals and healthcare organizations should create collaborative, cross-functional task forces like enterprise response teams, which offer other business units an eye-opening look into how security and privacy touch all parts of the business including financial, HR, and more.
Money isn’t a solution: Just throwing money at a problem doesn’t work. Security leaders need to identify priorities and have a plan that leverages talent and tried-and-true strategies like multi-factor authentication, privileged access management and ongoing staff training to truly level up their defenses and take a more holistic approach, especially when bringing on new services such as telehealth.
Accelerate the move to cloud: While healthcare has traditionally been slow to adopt the cloud, these solutions provide the agility and scalability that can help leaders cope with situations like COVID-19, and other crises more effectively.
Shore up security posture: We frequently learn the hard way that security can disrupt workflow. COVID-19 taught us that workflow can also disrupt security, and things are going to get worse before they get better. Get an assessment quickly to determine immediate needs, and come up with a game plan to bolster the defenses needed in this next normal.
Researchers at the National Institute of Standards and Technology (NIST) have developed a new method called the Phish Scale that could help organizations better train their employees to avoid phishing.
How does Phish Scale work?
Many organizations have phishing training programs in which employees receive fake phishing emails generated by the employees’ own organization to teach them to be vigilant and to recognize the characteristics of actual phishing emails.
CISOs, who often oversee these phishing awareness programs, then look at the click rates, or how often users click on the emails, to determine whether their phishing training is working. Higher click rates are generally seen as bad because they mean users failed to notice the email was a phish, while low click rates are often seen as good.
However, numbers alone don’t tell the whole story. “The Phish Scale is intended to help provide a deeper understanding of whether a particular phishing email is harder or easier for a particular target audience to detect,” said NIST researcher Michelle Steves. The tool can help explain why click rates are high or low.
The Phish Scale uses a rating system based on the message content of a phishing email. The rating draws on two factors: the cues that should tip users off that the email is not legitimate, and the alignment of the scenario’s premise with the target audience – that is, how effective the email’s tactics would be for that particular audience. These audiences can vary widely, including universities, business institutions, hospitals and government agencies.
The new method uses five elements that are rated on a 5-point scale that relate to the scenario’s premise. The overall score is then used by the phishing trainer to help analyze their data and rank the phishing exercise as low, medium or high difficulty.
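The scoring flow just described can be sketched as follows. The five element names and the low/medium/high cut-offs here are hypothetical placeholders; the published Phish Scale defines its own elements and bands:

```python
# Illustrative sketch of a premise-alignment score feeding a difficulty band.

PREMISE_ELEMENTS = (
    "mimics_workplace_process",
    "has_workplace_relevance",
    "aligns_with_current_events",
    "raises_concern_over_consequences",
    "matches_past_real_emails",
)

def premise_alignment(ratings: dict) -> int:
    """Sum the five premise elements, each rated 1 (low) to 5 (high)."""
    assert set(ratings) == set(PREMISE_ELEMENTS)
    assert all(1 <= v <= 5 for v in ratings.values())
    return sum(ratings.values())

def detection_difficulty(score: int) -> str:
    # Hypothetical cut-offs over the 5-25 score range.
    if score <= 10:
        return "low"     # cues easy to spot; high click rates are a red flag
    if score <= 17:
        return "medium"
    return "high"        # very convincing premise; clicks are less alarming
```

A trainer would rate each exercise email once, then interpret its click rate relative to the resulting difficulty band rather than in isolation.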
The significance of the Phish Scale is to give CISOs a better understanding of their click-rate data instead of relying on the numbers alone. A low click rate for a particular phishing email can have several causes: the phishing training emails are too easy or do not provide relevant context to the user, or the phishing email is similar to a previous exercise. Data like this can create a false sense of security if click rates are analyzed on their own without understanding the phishing email’s difficulty.
Helping CISOs better understand their phishing training programs
By using the Phish Scale to analyze click rates and collecting feedback from users on why they clicked on certain phishing emails, CISOs can better understand their phishing training programs, especially if they are optimized for the intended target audience.
The Phish Scale is the culmination of years of research, and the data used for it comes from an “operational” setting, very much the opposite of a laboratory experiment with controlled variables.
“As soon as you put people into a laboratory setting, they know,” said Steves. “They’re outside of their regular context, their regular work setting, and their regular work responsibilities. That is artificial already. Our data did not come from there.”
This type of operational data is both beneficial and in short supply in the research field. “We were very fortunate that we were able to publish that data and contribute to the literature in that way,” said NIST researcher Kristen Greene.
As for next steps, Greene and Steves say they need even more data. All of the data used for the Phish Scale came from NIST. The next step is to expand the pool and acquire data from other organizations, including nongovernmental ones, and to make sure the Phish Scale performs as it should over time and in different operational settings.
“We know that the phishing threat landscape continues to change,” said Greene. “Does the Phish Scale hold up against all the new phishing attacks? How can we improve it with new data?” NIST researcher Shaneé Dawkins and her colleagues are now working to make those improvements and revisions.
More on NIST’s Post-Quantum Cryptography
Back in July, NIST selected third-round algorithms for its post-quantum cryptography standard.
Recently, Daniel Apon of NIST gave a talk detailing the selection criteria. Interesting stuff.
A number of organizations face shortcomings in monitoring and securing their cloud environments, according to a Tripwire survey of 310 security professionals.
76% of security professionals say they have difficulty maintaining security configurations in the cloud, and 37% said their risk management capabilities in the cloud are worse than in other parts of their environment. 93% are concerned about human error accidentally exposing their cloud data.
Few orgs assessing overall cloud security posture in real time
Attackers are known to run automated searches to find sensitive data exposed in the cloud, making it critical for organizations to monitor their cloud security posture on a recurring basis and fix issues immediately.
However, the report found that only 21% of organizations assess their overall cloud security posture in real time or near real time. While 21% said they conduct weekly evaluations, 58% do so only monthly or less frequently. Despite widespread worry about human errors, 22% still assess their cloud security posture manually.
“Security teams are dealing with much more complex environments, and it can be extremely difficult to stay on top of the growing cloud footprint without having the right strategy and resources in place,” said Tim Erlin, VP of product management and strategy at Tripwire.
“Fortunately, there are well-established frameworks, such as CIS benchmarks, which provide prioritized recommendations for securing the cloud. However, the ongoing work of maintaining proper security controls often goes undone or puts too much strain on resources, leading to human error.”
Utilizing a framework to secure the cloud
Most organizations utilize a framework for securing their cloud environments – CIS and NIST being two of the most popular – but only 22% said they are able to maintain continuous cloud security compliance over time.
While 91% of organizations have implemented some level of automated enforcement in the cloud, 92% still want to increase their level of automated enforcement.
Additional survey findings show that automation levels varied across cloud security best practices:
- Only 51% have automated solutions that ensure proper encryption settings are enabled for databases or storage buckets.
- 45% automatically assess new cloud assets as they are added to the environment.
- 51% have automated alerts with context for suspicious behavior.
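The first of those checks can be sketched as a simple audit over an asset inventory. The bucket names and the `encryption_enabled` field below are hypothetical; a real check would query the cloud provider’s API rather than a static list:

```python
# Hypothetical inventory records; a real audit would pull these from a
# cloud provider's SDK or API, not a hard-coded list.
buckets = [
    {"name": "logs-prod", "encryption_enabled": True},
    {"name": "exports-temp", "encryption_enabled": False},
    {"name": "backups", "encryption_enabled": True},
]

def unencrypted(inventory):
    """Return names of storage buckets without encryption at rest."""
    return [b["name"] for b in inventory if not b["encryption_enabled"]]

print(unencrypted(buckets))  # ['exports-temp']
```

Running a check like this on every new asset, rather than monthly, is the difference between the 45% who assess assets automatically and everyone else.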
Now that so many of us are covering our faces to help reduce the spread of COVID-19, how well do face recognition algorithms identify people wearing masks? The answer, according to a preliminary study by the National Institute of Standards and Technology (NIST), is with great difficulty.
Identifying people wearing masks using facial recognition algorithms
Even the best of the 89 commercial facial recognition algorithms tested had error rates between 5% and 50% in matching digitally applied face masks with photos of the same person without a mask.
The results were published as a NIST Interagency Report (NISTIR 8311), the first in a planned series from NIST’s Face Recognition Vendor Test (FRVT) program on the performance of face recognition algorithms on faces partially covered by protective masks.
“With the arrival of the pandemic, we need to understand how face recognition technology deals with masked faces,” said Mei Ngan, a NIST computer scientist and an author of the report. “We have begun by focusing on how an algorithm developed before the pandemic might be affected by subjects wearing face masks. Later this summer, we plan to test the accuracy of algorithms that were intentionally developed with masked faces in mind.”
How algorithms perform “one-to-one” matching
The NIST team explored how well each of the algorithms was able to perform “one-to-one” matching, where a photo is compared with a different photo of the same person. The function is commonly used for verification such as unlocking a smartphone or checking a passport. The team tested the algorithms on a set of about 6 million photos used in previous FRVT studies. (The team did not test the algorithms’ ability to perform “one-to-many” matching, used to determine whether a person in a photo matches any in a database of known images).
The research team digitally applied mask shapes to the original photos and tested the algorithms’ performance. Because real-world masks differ, the team came up with nine mask variants, which included differences in shape, color and nose coverage. The digital masks were black or a light blue that is approximately the same color as a blue surgical mask.
The shapes included round masks that cover the nose and mouth and a larger type as wide as the wearer’s face. These wider masks had high, medium and low variants that covered the nose to different degrees. The team then compared the results to the performance of the algorithms on unmasked faces.
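One-to-one verification of this kind typically reduces to comparing two feature templates against a similarity threshold. A sketch with hypothetical four-dimensional templates and an assumed threshold (real systems use far larger templates and carefully tuned thresholds):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def verify(template_a, template_b, threshold=0.8):
    """One-to-one match: accept if the two templates are similar enough."""
    return cosine_similarity(template_a, template_b) >= threshold

enrolled = [0.12, 0.85, 0.40, 0.33]  # template from the reference photo
probe    = [0.10, 0.80, 0.45, 0.30]  # template from the probe photo
print(verify(enrolled, probe))  # True
```

A mask removes much of the face from which the probe template is computed, dragging genuine-pair similarity scores down toward the threshold.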
“We can draw a few broad conclusions from the results, but there are caveats,” Ngan said. “None of these algorithms were designed to handle face masks, and the masks we used are digital creations, not the real thing.”
Comparing the performance of the tested algorithms
If these limitations are kept firmly in mind, Ngan said, the study provides a few general lessons when comparing the performance of the tested algorithms on masked faces versus unmasked ones.
Algorithm accuracy with masked faces declined substantially across the board. Using unmasked images, the most accurate algorithms fail to authenticate a person about 0.3% of the time. Masked images raised even these top algorithms’ failure rate to about 5%, while many otherwise competent algorithms failed between 20% and 50% of the time.
Masked images more frequently caused algorithms to be unable to process a face, technically termed “failure to enroll or template” (FTE). Face recognition algorithms typically work by measuring a face’s features — their size and distance from one another, for example — and then comparing these measurements to those from another photo. An FTE means the algorithm could not extract a face’s features well enough to make an effective comparison in the first place.
The more of the nose a mask covers, the lower the algorithm’s accuracy. The study explored three levels of nose coverage — low, medium and high — finding that accuracy degrades with greater nose coverage.
While false negatives increased, false positives remained stable or modestly declined. Errors in face recognition can take the form of either a “false negative,” where the algorithm fails to match two photos of the same person, or a “false positive,” where it incorrectly indicates a match between photos of two different people. The modest decline in false positive rates shows that occlusion with masks does not undermine this aspect of security.
The shape and color of a mask matter. Algorithm error rates were generally lower with round masks. Black masks degraded algorithm performance more than surgical blue ones did, though because of time and resource constraints the team was not able to test the effect of color completely.
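The false negative and false positive rates behind these findings can be computed from similarity scores on genuine (same-person) and impostor (different-person) pairs. A sketch with invented scores and an assumed threshold:

```python
def error_rates(genuine_scores, impostor_scores, threshold):
    """Compute false negative and false positive rates at a threshold.

    genuine_scores:  similarity scores for photo pairs of the SAME person
    impostor_scores: similarity scores for pairs of DIFFERENT people
    """
    fn = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    fp = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    return fn, fp

genuine  = [0.91, 0.88, 0.52, 0.95]  # one masked pair scores low
impostor = [0.10, 0.22, 0.31, 0.15]
fn, fp = error_rates(genuine, impostor, threshold=0.60)
print(fn, fp)  # 0.25 0.0
```

This mirrors the study’s pattern: masking pushes genuine scores below the threshold (raising false negatives) without pushing impostor scores above it (false positives stay flat).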
The race to protect sensitive electronic information against the threat of quantum computers has entered the home stretch.
Post-quantum cryptography standard
After spending more than three years examining new approaches to encryption and data protection that could defeat an assault from a quantum computer, the National Institute of Standards and Technology (NIST) has winnowed the 69 submissions it initially received down to a final group of 15.
NIST has now begun the third round of public review. This “selection round” will help the agency decide on the small subset of these algorithms that will form the core of the first post-quantum cryptography standard.
“At the end of this round, we will choose some algorithms and standardize them,” said NIST mathematician Dustin Moody. “We intend to give people tools that are capable of protecting sensitive information for the foreseeable future, including after the advent of powerful quantum computers.”
“We request that cryptographic experts everywhere focus their attention on these last algorithms,” Moody said. “We want the algorithms we eventually select to be as strong as possible.”
Classical computers have many strengths, but issues remain
Classical computers have many strengths, but they find some problems intractable — such as quickly factoring large numbers. Current cryptographic systems exploit this difficulty to protect the details of online bank transactions and other sensitive information.
Quantum computers could solve many of these previously intractable problems easily, and while the technology remains in its infancy, it will be able to defeat many current cryptosystems as it matures.
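The asymmetry is easy to see in miniature: trial division finds a factor of a small semiprime instantly, but its running time grows with the size of the smallest factor, which is hopeless at real key sizes. A toy sketch (an illustration of the classical difficulty, not a statement about any particular cryptosystem):

```python
def trial_factor(n):
    """Find the smallest nontrivial factor of n by trial division,
    or return n itself if n is prime."""
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f
        f += 1
    return n

# A toy semiprime is factored instantly; the moduli used in real
# public-key systems are hundreds of digits long, far beyond trial
# division -- but within reach of Shor's algorithm on a large
# quantum computer.
print(trial_factor(3 * 2_147_483_647))  # 3
```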
Because the future capabilities of quantum computers remain an open question, the NIST team has taken a variety of mathematical approaches to safeguard encryption. The previous round’s group of 26 candidate algorithms was built on ideas that largely fell into three families of mathematical approaches.
“Of the 15 that made the cut, 12 are from these three families, with the remaining three algorithms based on other approaches,” Moody said. “It’s important for the eventual standard to offer multiple avenues to encryption, in case somebody manages to break one of them down the road.”
New standard to specify one or more quantum-resistant algorithms
Cryptographic algorithms protect information in many ways, for example by creating digital signatures that certify an electronic document’s authenticity.
The new standard will specify one or more quantum-resistant algorithms each for digital signatures, public-key encryption and the generation of cryptographic keys, augmenting those in FIPS 186-4, Special Publication (SP) 800-56A Revision 3 and SP 800-56B Revision 2, respectively.
For this third round, the organizers have taken the novel step of dividing the remaining candidate algorithms into two groups they call tracks. The first track contains the seven algorithms that appear to have the most promise.
“We’re calling these seven the finalists,” Moody said. “For the most part, they’re general-purpose algorithms that we think could find wide application and be ready to go after the third round.”
The eight alternate algorithms in the second track are those that either might need more time to mature or are tailored to more specific applications. The review process will continue after the third round ends, and eventually some of these second-track candidates could become part of the standard.
Future consideration of more recently developed ideas
Because all of the candidates still in play are essentially survivors from the initial group of submissions from 2016, there will also be future consideration of more recently developed ideas, Moody said.
“The likely outcome is that at the end of this third round, we will standardize one or two algorithms for encryption and key establishment, and one or two others for digital signatures,” he said.
“But by the time we are finished, the review process will have been going on for five or six years, and someone may have had a good idea in the interim. So we’ll find a way to look at newer approaches too.”
Because of potential delays due to the COVID-19 pandemic, the third round has a looser schedule than past rounds. Moody said the review period will last about a year, after which NIST will allow a few additional months for comments.
Following this roughly 18-month period, NIST plans to release the initial standard for quantum-resistant cryptography in 2022.
Researchers at the National Institute of Standards and Technology (NIST) have developed a mathematical formula that, computer simulations suggest, could help 5G and other wireless networks select and share communications frequencies about 5,000 times more efficiently than trial-and-error methods. The novel formula is a form of machine learning.
NIST engineer Jason Coder makes mathematical calculations for a machine learning formula that may help 5G and other wireless networks select and share communications frequencies efficiently
Cyberthreats are a ubiquitous concern for organizations operating in the digital world. No company is immune — even large and high-profile organizations like Adobe, Yahoo, LinkedIn, Equifax and others have reported massive data breaches in recent years. Cyberattacks are only growing in frequency, affecting billions of people and threatening businesses.
What’s being done to bolster information security as cyberattacks continue to happen? The National Institute of Standards and Technology (NIST), a non-regulatory agency of the U.S. Department of Commerce, has been at the forefront of guiding cryptographic security programs and standards for more than 20 years. NIST began at the turn of the 20th century as the National Bureau of Standards and took on its current name in 1988, shortly before the mobile revolution took off in the mid ’90s.
To contend with cyberattacks in the early days, NIST established the Cryptographic Module Validation Program (CMVP) to certify cryptographic modules, along with the FIPS 140-1 standard that independent labs use to test those modules. The program and standard were right for the time, but times have changed, and new validation, testing and certification programs and protocols are needed to keep pace with the proliferation and advancement of technology, as well as growing threats.
A new cryptographic validation protocol
On June 30, 2020, the legacy Cryptographic Algorithm Validation System (CAVS) will be sunsetted and replaced with the Automated Cryptographic Validation Protocol (ACVP). ACVP has been operational since January 2019, and it will become the only testing avenue available come July 1, 2020.
The ACVP is what the industry needs to secure information in our highly digital world. As the volume of algorithm certification requests soared, NIST’s limited resources couldn’t keep up. ACVP enables testing of cryptographic modules over the internet with a remote testing system. For these purposes, NIST has provided a server to produce test vectors, validate responses and issue certificates. This automation will bring speed and confidence to the cryptographic module validation process.
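The produce-vectors/validate-responses loop can be illustrated with a toy known-answer harness. The sketch below checks a SHA-256 test vector locally; real ACVP exchanges JSON test vectors over a REST API, and this stand-in is not the NIST protocol:

```python
import hashlib

# Toy stand-in for an ACVP-style exchange: the server supplies test
# vectors, the module under test computes responses, and the server
# validates them against expected answers.
vectors = [
    {"msg": b"abc",
     "expected": "ba7816bf8f01cfea414140de5dae2223"
                 "b00361a396177a9cb410ff61f20015ad"},
]

def validate(vectors):
    """Return True if the module's SHA-256 output matches every vector."""
    return all(hashlib.sha256(v["msg"]).hexdigest() == v["expected"]
               for v in vectors)

print(validate(vectors))  # True
```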
A timely FIPS update
In conjunction with the ACVP, the current FIPS 140-2 protocol will also be updated to reflect the growing types of technologies that need to be validated — software, hardware, firmware and hybrid systems.
Testing against the new FIPS 140-3 standard will begin in September 2020. The standard lays out the security requirements for validating cryptographic modules during the design, implementation and operational deployment phases, whereas FIPS 140-2 only provided security requirements to be met once a module was finalized.
FIPS 140-3 implementation schedule. Source: NIST.
The new FIPS 140-3 standard is needed to address issues that didn’t exist 20 years ago when the initial FIPS standard was developed. The updated standard takes into consideration software/firmware security, non-invasive security, sensitive security parameter management and life cycle assurance. In addition, FIPS 140-3 is aligned with the international ISO standard for cryptographic module testing.
Stronger information security with modern crypto standards
All organizations today harbor fears of a potential cyberattack. It’s unavoidable in our digital-centric world that attracts bad actors across the globe who attempt to profit from stolen data. As cyber threats and technology innovation continue to grow, organizations need systems and software that provide better assurance that cyberattacks will be kept at bay.
The shift to ACVP and FIPS 140-3 for testing, validating and certifying cryptographic algorithms and modules is the way forward. With these new cryptographic solutions, organizations will be better prepared to rise to the challenges of the 21st century world.
RSA Conference 2020 is underway at the Moscone Center in San Francisco. Check out our microsite for the conference for all the most important news.
Here are a few photos from the event, featured vendors and organizations include: Shujinko, Build38, Styra, TrueFort, Menlo Security, NETSCOUT | Arbor, SkySync, NIST Cybersecurity, Centrify, Teramind.
Criminals sometimes damage their mobile phones in an attempt to destroy data. They might smash, shoot, submerge or cook their phones, but forensics experts can often retrieve the evidence anyway. Now, researchers at the National Institute of Standards and Technology (NIST) have tested how well these forensic methods work.
NIST computer scientist Jenise Reyes-Rodriguez holds a mobile phone that has been damaged by gunfire
Accessing the phone’s memory chips
A damaged phone might not power on, and the data port might not work, so experts use hardware and software tools to directly access the phone’s memory chips. These include hacking tools, albeit ones that may be lawfully used as part of a criminal investigation.
Because these methods produce data that might be presented as evidence in court, it’s important to know if they can be trusted.
“Our goal was to test the validity of these methods,” said Rick Ayers, the NIST digital forensics expert who led the study. “Do they reliably produce accurate results?”
To conduct the study, researchers loaded data onto 10 popular models of phones. They then extracted the data or had outside experts extract the data for them. The question was: Would the extracted data exactly match the original data, without any changes?
For the study to be accurate, the researchers couldn’t just zap a bunch of data onto the phones. They had to add the data the way a person normally would. They took photos, sent messages and used Facebook, LinkedIn and other social media apps.
They entered contacts with multiple middle names and oddly formatted addresses to see if any parts would be chopped off or lost when the data was retrieved. They added GPS data by driving around town with all the phones on the dashboard.
After the researchers had loaded data onto the phones, they used two methods to extract it. The first method takes advantage of the fact that many circuit boards have small metal taps that provide access to data on the chips.
Manufacturers use those taps to test their circuit boards, but by soldering wires onto them, forensic investigators can extract data from the chips. This is called the JTAG method, for the Joint Test Action Group, the manufacturing industry association that codified this testing feature.
Chips connect to the circuit board via tiny metal pins, and the second method, called “chip-off,” involves connecting to those pins directly. Experts used to do this by gently plucking the chips off the board and seating them into chip readers, but the pins are delicate. If you damage them, getting the data can be difficult or impossible.
Digital forensics experts can often extract data from damaged mobile phones using the JTAG method
A few years ago, experts found that instead of pulling the chips off the circuit board, they could grind down the opposite side of the board on a lathe until the pins were exposed. This is like stripping insulation off a wire, and it allows access to the pins.
“It seems so obvious,” said Ayers. “But it’s one of those things where everyone just did it one way until someone came up with an easier way.”
The chip-off extractions were conducted by the Fort Worth Police Department Digital Forensics Lab and a private forensics company in Colorado called VTO Labs, who sent the extracted data back to NIST. NIST computer scientist Jenise Reyes-Rodriguez did the JTAG extractions.
After the data extractions were complete, Ayers and Reyes-Rodriguez used eight different forensic software tools to interpret the raw data, generating contacts, locations, texts, photos, social media data, and so on. They then compared those to the data originally loaded onto each phone.
The comparison showed that both JTAG and chip-off extracted the data without altering it, but that some of the software tools were better at interpreting the data than others, especially for data from social media apps. Those apps are constantly changing, making it difficult for the toolmakers to keep up.
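A byte-for-byte comparison like the one NIST performed can be approximated by hashing a canonical representation of each record; if the fingerprints match, the extraction did not alter the data. The records below are invented for illustration:

```python
import hashlib
import json

def fingerprint(record):
    """Stable hash of a record, so loaded and extracted copies compare
    equal regardless of key order."""
    canonical = json.dumps(record, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

loaded    = {"contact": "Ana Maria Perez-Smith", "gps": [39.13, -77.22]}
extracted = {"gps": [39.13, -77.22], "contact": "Ana Maria Perez-Smith"}

# Matching fingerprints indicate the extraction preserved the data.
print(fingerprint(loaded) == fingerprint(extracted))  # True
```

Where NIST found discrepancies, they lay in the interpretation step (the forensic tools parsing the raw dump), not in the JTAG or chip-off extraction itself.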
Our data-driven society has a tricky balancing act to perform: building innovative products and services that use personal data while still protecting people’s privacy. To help organizations keep this balance, the National Institute of Standards and Technology (NIST) is offering a new tool for managing privacy risk.
Version 1.0 of the NIST Privacy Framework
The agency has just released Version 1.0 of the NIST Privacy Framework: A Tool for Improving Privacy through Enterprise Risk Management. Developed from a draft version in collaboration with a range of stakeholders, the framework provides a useful set of privacy protection strategies for organizations that wish to improve their approach to using and protecting personal data.
The publication also provides clarification about privacy risk management concepts and the relationship between the Privacy Framework and NIST’s Cybersecurity Framework.
“Privacy is more important than ever in today’s digital age,” said Under Secretary of Commerce for Standards and Technology and NIST Director Walter G. Copan.
“The strong support the Privacy Framework’s development has already received demonstrates the critical need for tools to help organizations build products and services providing real value, while protecting people’s privacy.”
Personal data includes information about specific individuals, such as their addresses or Social Security numbers, that a company might gather and use in the normal course of business. Because this data can be used to identify the people who provide it, an organization must frequently take action to ensure it is not misused in a way that could embarrass, endanger or compromise the customers.
Helping organizations manage privacy risk
The NIST Privacy Framework is not a law or regulation, but rather a voluntary tool that can help organizations manage privacy risk arising from their products and services, as well as demonstrate compliance with laws that may affect them, such as the California Consumer Privacy Act and the European Union’s General Data Protection Regulation. It helps organizations identify the privacy outcomes they want to achieve and then prioritize the actions needed to do so.
“If you want to consider how to increase customer trust through more privacy-protective products or services, the framework can help you do that. But we designed it to be agnostic to any law, so it can assist you no matter what your goals are,” said Naomi Lefkovitz, a senior privacy policy adviser at NIST and leader of the framework effort.
Privacy application still evolving
Privacy as a basic right in the USA has roots in the U.S. Constitution, but its application in the digital age is still evolving, in part because technology itself is changing at a rapidly accelerating pace.
New uses for data pop up regularly, especially in the context of the internet of things and artificial intelligence, which together promise to gather and analyze patterns in the real world that previously have gone unrecognized. With these opportunities come new risks.
“A class of personal data that we consider to be of low value today may have a whole new use in a couple of years,” Lefkovitz said, “or you might have two classes of data that are not sensitive on their own, but if you put them together they suddenly may become sensitive as a unit. That’s why you need a framework for privacy risk management, not just a checklist of tasks: You need an approach that allows you to continually reevaluate and adjust to new risks.”
The Privacy Framework 1.0 has an overarching structure modeled on that of the widely used NIST Cybersecurity Framework, and the two frameworks are designed to be complementary and also updated over time.
Merely adopting a good security posture is not enough
Privacy and security are related but distinct concepts, Lefkovitz said, and merely adopting a good security posture does not necessarily mean that an organization is addressing all its privacy needs.
As with its draft version, the Privacy Framework centers on three sections: the Core, which offers a set of privacy protection activities; the Profiles, which help determine which of the activities in the Core an organization should pursue to reach its goals most effectively; and the Implementation Tiers, which help optimize the resources dedicated to managing privacy risk.
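As a sketch of how a Profile selects from the Core, consider filtering activities by an organization’s chosen privacy goals. The activity and goal names below are invented for illustration and are not taken from the framework:

```python
# Hypothetical Core activities tagged with the privacy goal they serve;
# the real framework defines its own functions, categories and
# subcategories.
core = {
    "inventory-data": {"goal": "governance"},
    "de-identify":    {"goal": "disassociability"},
    "notify-users":   {"goal": "transparency"},
}

def build_profile(core, goals):
    """Select the Core activities that serve the organization's goals."""
    return sorted(name for name, act in core.items() if act["goal"] in goals)

print(build_profile(core, {"transparency", "governance"}))
# ['inventory-data', 'notify-users']
```

The Implementation Tiers would then govern how much resourcing each selected activity receives.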
The NIST authors plan to continue building on their work to benefit the framework’s users. Digital privacy risk management is a comparatively new concept, and Lefkovitz said they received many requests for clarification about the nature of privacy risk, as well as for additional supporting resources.
“People continue to yearn for more guidance on how to do privacy risk management,” she said. “We have released a companion roadmap for the framework to point the way toward more research to address current privacy challenges, and we are building a repository of guidance resources to support implementation of the framework. We hope the community of users will contribute to it to advance privacy for the good of all.”