For better or for worse, the global COVID-19 pandemic has confined most of us to our own countries (our houses and apartments, even), has changed how and from where we do our work, and has restricted our social lives.
The distractions and tools still available to help us battle our growing anxiety and sadness are few, but some of them, such as learning new things, are very powerful. Happily for all of us, many courses and trainings that were previously available only on-site are now virtual, opening new prospects and opportunities.
Among these new offerings is HITBSecTrain, an initiative launched by the organizers of Hack in the Box security conference, which has been offering deep-knowledge technical trainings in numerous cities (including Kuala Lumpur, Singapore, Amsterdam, Dubai, Bahrain, and Beijing) since 2003.
Known for featuring specialized security courses, HITB has worked with nearly 100 trainers across the years to offer cool, atypical trainings for security folks looking to hone their skills.
Now, in response to constant feedback from trainees asking for more specialized topics and more subject matter experts, more often throughout the year, the organizers have set up HITBSecTrain, which will offer HITB trainings on a monthly basis instead of only during HITB conference events.
In October, the courses on offer taught attendees about big data analytics, malware reverse engineering and threat hunting, bug hunting, and cloud security.
In November, to coincide with the virtual edition of HITBCyberWeek 2020, 10 deep-knowledge technical trainings are being offered, covering topics such as: 5G security awareness, practical malware analysis and memory forensics, mobile hacking, secure coding and DevSecOps, applied data science and machine learning for cybersecurity, and more.
For now, while courses remain virtual, classes are delivered via livestream, use virtual lab environments, and are structured through a learning management system. All trainees will receive digital certificates corresponding to their course choice, with additional badges awarded for completing practical tests and quizzes.
With the new virtual format, HITB trainers are incorporating more interactive quizzes, collective exercises and practical assessments into their courses to help trainees engage better with the content and with each other. This will also help trainers gauge whether trainees have effectively gained the skills they sought from their course.
What is confidential computing? Can it strengthen enterprise security? Sam Lugani, Lead Security PMM, Google Workspace & GCP, answers these and other questions in this Help Net Security interview.
How does confidential computing enhance the overall security of a complex enterprise architecture?
We’ve all heard about encryption in transit and at rest, but as organizations prepare to move their workloads to the cloud, one of the biggest challenges they face is how to process sensitive data while still keeping it private. Until now, there has been no easy way to keep data encrypted while it is being processed.
Confidential computing is a breakthrough technology which encrypts data in-use – while it is being processed. It creates a future where private and encrypted services become the cloud standard.
At Google Cloud, we believe this transformational technology will help instill confidence that customer data is not being exposed to cloud providers or susceptible to insider risks.
Confidential computing has moved from research projects into worldwide deployed solutions. What are the prerequisites for delivering confidential computing across both on-prem and cloud environments?
Running workloads confidentially will differ based on what services and tools you use, but one thing is a given – organizations don’t want security to come at the cost of usability and performance.
Those running Google Cloud can seamlessly take advantage of the products in our portfolio, Confidential VMs and Confidential GKE Nodes.
All customer workloads that run in VMs or containers today can run as Confidential VMs without significant performance impact. The best part is that we have worked hard to hide the complexity: enabling it takes a single checkbox—it’s that simple.
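For teams that script their infrastructure rather than click through the console, the same toggle is exposed as a CLI flag. A sketch, assuming the 2020-era gcloud SDK (instance, zone, and image names here are placeholders; flag availability may differ in your environment):

```sh
# Create a Confidential VM on an N2D (2nd Gen AMD EPYC) machine type.
# --confidential-compute is the key flag; everything else is a standard VM create.
gcloud compute instances create demo-confidential-vm \
    --zone=us-central1-a \
    --machine-type=n2d-standard-2 \
    --confidential-compute \
    --image-family=ubuntu-2004-lts \
    --image-project=ubuntu-os-cloud
```

Aside from the flag, the workload itself is unchanged, which is the point Lugani makes about not compromising usability.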
What type of investments does confidential computing require? What technologies and techniques are involved?
To deliver on the promise of confidential computing, customers need to take advantage of security technology offered by modern, high-performance CPUs, which is why Google Cloud’s Confidential VMs run on N2D series VMs powered by 2nd Gen AMD EPYC processors.
To support these environments, we also had to update our own hypervisor and low-level platform stack while also working closely with the open source Linux community and modern operating system distributors to ensure that they can support the technology.
Networking and storage drivers are also critical to the deployment of secure workloads and we had to ensure we were capable of handling confidential computing traffic.
How is confidential computing helping large organizations with a massive work-from-home movement?
As we entered the first few months of dealing with COVID-19, many organizations expected a slowdown in their digital strategy. Instead, we saw the opposite – most customers accelerated their use of cloud-based services. Today, enterprises have to manage a new normal which includes a distributed workforce and new digital strategies.
With workforces dispersed, confidential computing can help organizations collaborate on sensitive workloads in the cloud across geographies and competitors, all while preserving the privacy of confidential datasets. This can lead to the development of transformative technologies – imagine, for example, being able to build vaccines and cure diseases more quickly as a result of this secure collaboration.
How do you see the work of the Confidential Computing Consortium evolving in the near future?
Cloud providers, hardware manufacturers, and software vendors all need to work together to define standards to advance confidential computing. As the technology garners more interest, sustained industry collaboration such as the Consortium will be key to helping realize the true potential of confidential computing.
As the Information Age slowly gives way to the Fourth Industrial Revolution – with the rise of IoT and IIoT, on-demand availability of computing resources, big data and analytics, and cyber attacks aimed at business environments all affecting our everyday lives – there is an increasing need for knowledgeable cybersecurity professionals and, unfortunately, a widening cybersecurity workforce skills gap.
The cybersecurity skills gap is huge
A year ago, (ISC)² estimated that the global cybersecurity workforce numbered 2.8 million professionals, while 4.07 million more were needed to close the skills gap.
According to a recent global study of cybersecurity professionals by the Information Systems Security Association (ISSA) and analyst firm Enterprise Strategy Group (ESG), there has been no significant progress towards a solution to this problem in the last four years.
“What’s needed is a holistic approach of continuous cybersecurity education, where each stakeholder needs to play a role versus operating in silos,” ISSA and ESG stated.
Those starting their career in cybersecurity need many years to develop real cybersecurity proficiency, the respondents agreed. They need cybersecurity certifications and hands-on experience (i.e., jobs) and, ideally, a career plan and guidance.
Continuous cybersecurity training and education are key
Aside from the core cybersecurity talent pool, new recruits come from several sources: recent university graduates, consultants/contractors, employees in other departments within an organization, security/hardware vendors, and career changers.
One thing they all have in common is the need for constant additional training, as technology advances and changes and attackers evolve their tactics, techniques and procedures.
Though most IT and security professionals use their own free time to improve their cyber skills, they should also be able to learn on the job and get effective support from their employers for their continued career development.
Times are tough – there’s no doubt of that – but organizations must continue to invest in their employees’ careers and skills development if they want to retain their current cybersecurity talent, develop it, and attract new, capable employees.
“The pandemic has shown us just how critical cybersecurity is to the successful operation of our respective economies and our individual lifestyles,” noted Deshini Newman, Managing Director EMEA, (ISC)².
Certifications show employers that cybersecurity professionals have the knowledge and skills required for the job, but also indicate that they are invested in keeping pace with a myriad of evolving issues.
“Maintaining a cybersecurity certification, combined with professional membership is evidence that professionals are constantly improving and developing new skills to add value to the profession and taking ownership for their careers. This new knowledge and understanding can be shared throughout an organisation to support security best practice, as well as ensuring cyber safety in our homes and communities,” she pointed out.
Misconfigured or unsecured databases exposed on the open web are a fact of life. We hear about some of them because security researchers tell us how they discovered them, pinpointed their owners and alerted them, but many others are found by attackers first.
It used to take months to scan the Internet looking for open systems, but attackers now have access to free and easy-to-use scanning tools that can find them in less than an hour.
“There’s no way to leave unsecured data online without opening the data up to attack. This is why it’s crucial to always enable security and authentication features when setting up databases, so that your organization avoids this risk altogether,” says Josh Bressers, who leads product security at Elastic.
What do attackers do with exposed databases?
Bressers has been involved in the security of products and projects – especially open-source – for a very long time. In the past two decades, he created the product security division at Progeny Linux Systems and worked as a manager of the Red Hat product security team and headed the security strategy in Red Hat’s Platform Business Unit.
He now manages bug bounties, penetration testing and security vulnerability programs for Elastic’s products, as well as the company’s efforts to improve application security and to add new security features and improve existing ones as needed or requested by customers.
The problem with exposed Elasticsearch (MariaDB, MongoDB, etc.) databases, he says, is that they are often left unsecured by developers by mistake and companies don’t discover the exposure quickly.
“The scanning tools do most of the work, so it’s up to the attacker to decide if the database has any data worth stealing,” he noted, and pointed out that this isn’t hacking, exactly – it’s mining of open services.
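The kind of probing these tools do is not sophisticated. As a rough illustration of the idea – not Elastic’s or any particular scanner’s code – here is a minimal standard-library sketch that checks whether an Elasticsearch-style HTTP endpoint answers without authentication (host and port are placeholders; only probe systems you are authorized to test):

```python
import urllib.request
import urllib.error


def check_open_database(host: str, port: int = 9200, timeout: float = 5.0) -> str:
    """Probe an HTTP database endpoint; return 'open', 'secured', or 'unreachable'."""
    url = f"http://{host}:{port}/"
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            # A 200 with no credentials supplied means anyone can read the node
            return "open" if resp.status == 200 else "secured"
    except urllib.error.HTTPError as e:
        # 401/403 means the node is up but demands authentication
        return "secured" if e.code in (401, 403) else "open"
    except OSError:
        # Connection refused, timeout, DNS failure, etc.
        return "unreachable"
```

Run at Internet scale, a loop over address ranges with this check is essentially what Bressers means by “mining of open services.”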
Attackers can quickly exfiltrate the accessible data, hold it for ransom, sell it to the highest bidder, modify it or simply delete it all.
“Sometimes there’s no clear advantage or motive. For example, this summer saw a string of cyberattacks called the Meow Bot attacks that have affected at least 25,000 databases so far. The attacker replaced the contents of every afflicted database with the word ‘meow’ but has not been identified, nor has the purpose of the attacks been revealed,” he explained.
Advice for organizations that use clustered databases
Open-source database platforms such as Elasticsearch have built-in security to prevent attacks of this nature, but developers often disable those features in haste or due to a lack of understanding that their actions can put customer data at risk, Bressers says.
“The most important thing to keep in mind when trying to secure data is having a clear understanding of what you are securing and what it means to your organization. How sensitive is the data? What level of security needs to be applied? Who should have access?” he explained.
“Sometimes working with a partner who is an expert at running a modern database is a more secure alternative than doing it yourself. Sometimes it’s not. Modern data management is a new problem for many organizations; make sure your people understand the opportunities and challenges. And most importantly, make sure they have the tools and training.”
Secondly, he says, companies should set up external scanning systems that continuously check for exposed databases.
“These may be the same tools used by attackers, but they immediately notify security teams when a developer has mistakenly left sensitive data unlocked. For example, a free scanner is available from Shadowserver.”
Elastic offers information and documentation on how to enable the security features of Elasticsearch databases and prevent exposure, he adds, pointing out that security is enabled by default in their Elasticsearch Service on Elastic Cloud and cannot be disabled.
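For a self-managed cluster, “enabling the security features” boils down to a few settings. A sketch of a minimal `elasticsearch.yml` fragment (setting names from the Elasticsearch 7.x era; consult Elastic’s documentation for your version):

```yaml
# elasticsearch.yml — minimal settings to require authentication
# (7.x-era option names; additional TLS/user setup steps are required)
xpack.security.enabled: true

# TLS on the transport layer is mandatory once security is enabled
# on a multi-node cluster
xpack.security.transport.ssl.enabled: true
```

With these in place, anonymous requests like the scanner probe above receive a 401 instead of data.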
Defense in depth
No organization will ever be 100% safe, but steps can be taken to decrease a company’s attack surface. “Defense in depth” is the name of the game, Bressers says, and in this case, it should include the following security layers:
- Discovery of data exposure (using the previously mentioned external scanning systems)
- Strong authentication (SSO or usernames/passwords)
- Prioritization of data access (e.g., HR may only need access to employee information and the accounting department may only need access to budget and tax data)
- Deployment of monitoring infrastructures and automated solutions that can quickly identify potential problems before they become emergencies, isolate infected databases, and flag to support and IT teams for next steps
He also advises organizations that don’t have the internal expertise to set security configurations and manage a clustered database to hire service providers that can handle data management and have a strong security portfolio, and to always have a mitigation plan in place and rehearse it with their IT and security teams, so that when something does happen, they can execute a swift and intentional response.
After five months in beta, the GitHub Code Scanning security feature has been made generally available to all users: for free for public repositories, as a paid option for private ones.
“So much of the world’s development happens on GitHub that security is not just an opportunity for us, but our responsibility. To secure software at scale, we need to make a base-level impact that can drive the most change; and that starts with the code,” Grey Baker, GitHub’s Senior Director of Product Management, told Help Net Security.
“Everything we’ve built previously was about responding to security incidents (dependency scanning, secret scanning, Dependabot) — reacting in real time, quickly. Our future state is about fundamentally preventing vulnerabilities from ever happening, by moving security core into the developer workflow.”
GitHub Code Scanning
The Code Scanning feature is powered by CodeQL, a powerful static analysis engine built by Semmle, which was acquired by GitHub in September 2019.
“We want developers to be able to use their tools of choice, for any of their projects on GitHub, all within the native GitHub experience they love. We’ve partnered with more than a dozen open source and commercial security vendors to date and we’ll continue to integrate code scanning with other third-party vendors through GitHub Actions and Apps,” Baker noted.
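In practice, the integration Baker describes takes the form of a GitHub Actions workflow that runs the CodeQL actions on pushes and pull requests. A minimal sketch (action versions as of the 2020 launch; the branch name and language list are placeholders for your project):

```yaml
# .github/workflows/codeql.yml — minimal CodeQL code scanning workflow
name: "CodeQL"

on:
  push:
    branches: [main]
  pull_request:

jobs:
  analyze:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2

      # Initialize CodeQL for the languages in this repository
      - uses: github/codeql-action/init@v1
        with:
          languages: python

      # Run the analysis and upload alerts to the Security tab
      - uses: github/codeql-action/analyze@v1
```

Results surface as code scanning alerts on the pull request itself, which is what keeps developers inside their existing workflow.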
“The major value add here is that developers can work, and stay, within the code development ecosystem they’re most accustomed to while using their preferred scanning tools,” explained James Brotsos, Senior Solutions Engineer at Checkmarx.
“GitHub is an immensely popular resource for developers, so having something that ensures the security of code without hindering agility is critical. Our ability to automate SAST and SCA scans directly within GitHub repos simplifies workflows and removes tedious steps for the development cycle that can traditionally stand in the way of achieving DevSecOps.”
Checkmarx’s SCA (software composition analysis) helps developers discover and remediate vulnerabilities in the open source components included in their applications, prioritizing them based on severity. Checkmarx SAST (static application security testing) scans proprietary code bases – even uncompiled ones – to detect new and existing vulnerabilities.
“This is all done in an automated fashion, so as soon as a pull request takes place, a scan is triggered, and results are embedded directly into GitHub. Together, these integrations paint a holistic picture of the entire application’s security posture to ensure all potential gaps are accounted for,” Brotsos added.
Leon Juranic, CTO at DefenseCode, said that they are very excited about this initiative, as it provides access to security analysis for over 50 million GitHub users.
“Having the security analysis results displayed as code scanning alerts in GitHub provides a convenient way to triage and prioritize fixes – a process that is usually cumbersome, requiring scrolling through many pages of exported reports, going back and forth between your code and the reported results, or reviewing them in dashboards provided by the security tool. The ease of use now means you can initiate scans, view, fix, and close alerts for potential vulnerabilities in your project’s code in an environment that is already familiar and where most of your other workflows are done,” he noted.
A week ago, GitHub also announced additional support for container scanning and standards and configuration scanning for infrastructure as code, with integration by 42Crunch, Accurics, Bridgecrew, Snyk, Aqua Security, and Anchore.
The benefits and future plans
“We expect code scanning to prevent thousands of vulnerabilities from ever existing, by catching them at code review time. We envisage a world with fewer software vulnerabilities because security review is an automated part of the developer workflow,” Baker explained.
“During the code scanning beta, developers fixed 72% of the security errors found by CodeQL and reported in the code scanning pull request experience. Achieving such a high fix rate is the result of years of research, as well as an integration that makes it easy to understand each result.”
Over 12,000 repositories tried code scanning during the beta, and another 7,000 have enabled it since it became generally available, he says, and the reception has been really positive, with many highlighting valuable security finds.
“We’ll continue to iterate and focus on feedback from the community, including around access control and permissions, which are of high priority to our users,” he concluded.
Manufacturing medical devices with cybersecurity firmly in mind is an endeavor that, according to Christopher Gates, an increasing number of manufacturers are trying to get right.
Healthcare delivery organizations have started demanding better security from medical device manufacturers (MDMs), he says, and many have implemented secure procurement processes and contract language for MDMs that address the cybersecurity of the device itself, secure installation, cybersecurity support for the life of the product in the field, liability for breaches caused by a device not following current best practice, ongoing support for events in the field, and so on.
“For someone like myself who has been focused on cybersecurity at MDMs for over 12 years, this is excellent progress as it will force MDMs to take security seriously or be pushed out of the market by competitors who do take it seriously. Positive pressure from MDMs is driving cybersecurity forward more than any other activity,” he told Help Net Security.
Gates is a principal security architect at Velentium and one of the authors of the recently released Medical Device Cybersecurity for Engineers and Manufacturers, a comprehensive guide to medical device secure lifecycle management, aimed at engineers, managers, and regulatory specialists.
In this interview, he shares his knowledge regarding the cybersecurity mistakes most often made by manufacturers, on who is targeting medical devices (and why), his view on medical device cybersecurity standards and initiatives, and more.
[Answers have been edited for clarity.]
Are attackers targeting medical devices with a purpose other than to use them as a way into a healthcare organization’s network?
The easy answer to this is “yes,” since many MDMs perform “competitive analysis” on their competitors’ products. It is much easier and cheaper for them to have a security researcher spend a few hours extracting an algorithm from a device for analysis than to spend months or even years of R&D work to pioneer a new algorithm from scratch.
Also, there is a large, hundreds-of-millions-of-dollars industry of companies who “re-enable” consumed medical disposables. This usually requires some fairly sophisticated reverse-engineering to return the device to its factory default condition.
Lastly, the medical device industry, when grouped together with the healthcare delivery organizations, constitutes part of critical national infrastructure. Other industries in that class (such as nuclear power plants) have experienced very directed and sophisticated attacks targeting safety backups in their facilities. These attacks seem to be initial testing of a cyber weapon that may be used later.
While these are clearly nation-state level attacks, you have to wonder if these same actors have been exploring medical devices as a way to inhibit our medical response in an emergency. I’m speculating: we have no evidence that this has happened. But then again, if it has happened there likely wouldn’t be any evidence, as we haven’t been designing medical devices and infrastructure with the ability to detect potential cybersecurity events until very recently.
What are the most often exploited vulnerabilities in medical devices?
It won’t come as a surprise to anyone in security when I say “the easiest vulnerabilities to exploit.” An attacker is going to start with the obvious ones, and then increasingly get more sophisticated. Mistakes made by developers include:
Unsecured firmware updating
I personally always start with software updates in the field, as they are so frequently implemented incorrectly. An attacker’s goal here is to gain access to the firmware with the intent of reverse-engineering it back into easily-readable source code that will yield more widely exploitable vulnerabilities (e.g., one impacting every device in the world). All firmware update methods have at least three very common potential design vulnerabilities. They are:
- Exposure of the binary executable (i.e., it isn’t encrypted)
- Corrupting the binary executable with added code (i.e., there isn’t an integrity check)
- A rollback attack which downgrades the version of firmware to a version with known exploitable vulnerabilities (there isn’t metadata conveying the version information).
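The integrity-check and anti-rollback items can be illustrated with a toy update-verification routine. This is a sketch only – the image layout is invented, HMAC-SHA256 stands in for a full asymmetric signature scheme, and real designs also encrypt the payload to address the first item:

```python
import hmac
import hashlib
import struct

def verify_firmware(blob: bytes, key: bytes, current_version: int) -> bytes:
    """
    Hypothetical image layout (illustration only):
      [4-byte big-endian version][payload][32-byte HMAC-SHA256 tag over version+payload]
    Returns the payload if the image authenticates and is not a rollback.
    """
    if len(blob) < 36:
        raise ValueError("image too short")
    body, tag = blob[:-32], blob[-32:]

    # Integrity/authenticity check: reject images whose code has been modified
    expected = hmac.new(key, body, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("bad signature")

    # Anti-rollback: the version metadata is covered by the tag, so an
    # attacker cannot forge a downgrade to older, vulnerable firmware
    (version,) = struct.unpack(">I", body[:4])
    if version <= current_version:
        raise ValueError("rollback rejected")

    return body[4:]
```

The key point is that the version field sits inside the authenticated region; checking an unauthenticated version number defeats the purpose.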
Overlooking physical attacks
Physical attacks can be mounted:
- Through an unsecured JTAG/SWD debugging port
- Via side-channel (power monitoring, timing, etc.) exploits to expose the values of cryptographic keys
- By sniffing internal busses, such as SPI and I2C
- By exploiting flash memory external to the microcontroller (a $20 cable can get it to dump all of its contents)
Manufacturing support left enabled
Almost every medical device needs certain functions to be available during manufacturing. These are usually for testing and calibration, and none of them should be functional once the device is fully deployed. Manufacturing commands are frequently documented in PDF files used for maintenance, and often only have minor changes across product/model lines inside the same manufacturer, so a little experimentation goes a long way in letting an attacker get access to all kinds of unintended functionality.
No communication authentication
Just because a communications medium connects two devices doesn’t mean that the device being connected to is the device that the manufacturer or end-user expects it to be. No communications medium is inherently secure; it’s what you do at the application level that makes it secure.
Bluetooth Low Energy (BLE) is an excellent example of this. Immediately following a pairing (or re-pairing), a device should always, always perform a challenge-response process (which utilizes cryptographic primitives) to confirm it has paired with the correct device.
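A toy version of such a post-pairing challenge-response, using HMAC-SHA256 over a pre-shared key (all names here are invented for illustration; a real design must also address key provisioning, replay protection, and transport details):

```python
import hmac
import hashlib
import os

def device_response(key: bytes, challenge: bytes) -> bytes:
    """What the peripheral computes and returns over the BLE link."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def central_verifies(expected_key: bytes, device_key: bytes) -> bool:
    """Post-pairing check run by the central: challenge the peer, verify its response.

    Only a device holding the expected key can answer a fresh challenge correctly.
    """
    challenge = os.urandom(16)                         # fresh nonce each time
    response = device_response(device_key, challenge)  # travels over BLE in practice
    expected = hmac.new(expected_key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(response, expected)
```

Because the challenge is a fresh random nonce, merely having paired (or recording an earlier exchange) is not enough to pass the check.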
I remember attending an on-stage presentation of a new class II medical device with a BLE interface. From the audience, I immediately started to explore the device with my smartphone. This device had no authentication (or authorization), so I was able to perform all operations exposed on the BLE connection. I was engrossed in this interface when I suddenly realized there was some commotion on stage as they couldn’t get their demonstration to work: I had accidentally taken over the only connection the device supported. (I then quickly terminated the connection to let them continue with the presentation.)
What things must medical device manufacturers keep in mind if they want to produce secure products?
There are many aspects to incorporating security into your development culture. These can be broadly lumped into activities that promote security in your products, versus activities that convey a false sense of security and are actually a waste of time.
Probably the most important thing that a majority of MDMs need to understand and accept is that their developers have probably never been trained in cybersecurity. Most developers have limited knowledge of how to incorporate cybersecurity into the development lifecycle, where to invest time and effort into securing a device, what artifacts are needed for premarket submission, and how to properly utilize cryptography. Without knowing the details, many managers assume that security is being adequately included somewhere in their company’s development lifecycle; most are wrong.
To produce secure products, MDMs must follow a secure “total product life cycle,” which starts on the first day of development and ends years after the product’s end of life or end of support.
They need to:
- Know the three areas where vulnerabilities are frequently introduced during development (design, implementation, and through third-party software components), and how to identify, prevent, or mitigate them
- Know how to securely transfer a device to production and securely manage it once in production
- Recognize an MDM’s place in the device’s supply chain: not at the end, but in the middle. An MDM’s cybersecurity responsibilities extend up and down the chain. They have to contractually enforce cybersecurity controls on their suppliers, and they have to provide postmarket support for their devices in the field, up through and after end-of-life
- Create and maintain Software Bills of Materials (SBOMs) for all products, including legacy products. Doing this work now will help them stay ahead of regulation and save them money in the long run.
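An SBOM entry itself is simple. A minimal sketch in the CycloneDX JSON format, one of the common SBOM formats (component names and versions here are made up; see the CycloneDX specification for the required fields):

```json
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.2",
  "version": 1,
  "components": [
    { "type": "library", "name": "freertos-kernel", "version": "10.3.1" },
    { "type": "library", "name": "mbedtls", "version": "2.16.6" }
  ]
}
```

When a new vulnerability is disclosed in, say, a TLS library, a record like this lets a device owner answer “are we affected?” in minutes rather than weeks.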
They must avoid mistakes like:
- Not thinking that a medical device needs to be secured
- Assuming their development team ‘can’ and ‘is’ securing their product
- Not designing-in the ability to update the device in the field
- Assuming that all vulnerabilities can be mitigated by a field update
- Only considering the security of one aspect of your design (e.g., its wireless communication protocol). Security is a chain: for the device to be secure, all the links of the chain need to be secure. Attackers are not going to consider certain parts of the target device ‘out of bounds’ for exploiting.
Ultimately, security is about protecting the business model of an MDM. This includes the device’s safety and efficacy for the patient, which is what the regulations address, but it also includes public opinion, loss of business, counterfeit accessories, theft of intellectual property, and so forth. One mistake I see companies frequently make is doing the minimum on security to gain regulatory approval, but neglecting to protect their other business interests along the way – and those can be very expensive to overlook.
What about the developers? Any advice on skills they should acquire or brush up on?
First, I’d like to take some pressure off developers by saying that it’s unreasonable to expect that they have some intrinsic knowledge of how to implement cybersecurity in a product. Until very recently, cybersecurity was not part of traditional engineering or software development curriculum. Most developers need additional training in cybersecurity.
And it’s not only the developers. More than likely, project management has done them a huge disservice by creating a system-level security requirement that says something like, “Prevent ransomware attacks.” What is the development team supposed to do with that requirement? How is it actionable?
At the same time, involving the company’s network or IT cybersecurity team is not going to be an automatic fix either. IT Cybersecurity diverges from Embedded Cybersecurity in many respects, from detection to implementation of mitigations. No MDM is going to be putting a firewall on a device that is powered by a CR2032 battery anytime soon; yet there are ways to secure such a low-resource device.
In addition to the how-to book we wrote, Velentium will soon offer training available specifically for the embedded device domain, geared toward creating a culture of cybersecurity in development teams. My audacious goal is that within 5 years every medical device developer I talk to will be able to converse intelligently on all aspects of securing a medical device.
What cybersecurity legislation/regulation must companies manufacturing medical devices abide by?
It depends on the markets you intend to sell into. While the US has had the Food and Drug Administration (FDA) refining its medical device cybersecurity position since 2005, others are more recent entrants into this type of regulation, including Japan, China, Germany, Singapore, South Korea, Australia, Canada, France, Saudi Arabia, and the greater EU.
While all of these regulations have the same goal of securing medical devices, how they get there is anything but harmonized among them. Even the level of abstraction varies, with some focused on processes while others on technical activities.
But there are some common concepts represented in all these regulations, such as:
- Risk management
- Software bill of materials (SBOM)
- “Total Product Lifecycle”
But if you plan on marketing in the US, the two most important documents are the FDA’s:
- 2018 – Draft Guidance: Content of Premarket Submissions for Management of Cybersecurity in Medical Devices
- 2016 – Final Guidance: Postmarket Management of Cybersecurity in Medical Devices (The 2014 version of the guidance on premarket submissions can be largely ignored, as it no longer represents the FDA’s current expectations for cybersecurity in new medical devices).
What are some good standards for manufacturers to follow if they want to get cybersecurity right?
The Association for the Advancement of Medical Instrumentation’s standards are excellent. I recommend AAMI TIR57: 2016 and AAMI TIR97: 2019.
Also very good is the Healthcare & Public Health Sector Coordinating Council’s (HPH SCC) Joint Security Plan. And, to a lesser extent, the NIST Cyber Security Framework.
The work being done at the US Department of Commerce / NTIA on SBOM definition for vulnerability management and postmarket surveillance is very good as well, and worth following.
What initiatives exist to promote medical device cybersecurity?
Notable initiatives I’m familiar with include, first, the aforementioned NTIA work on SBOMs, now in its second year. There are also several excellent working groups at HSCC, including the Legacy Medical Device group and the Security Contract Language for Healthcare Delivery Organizations group. I’d also point to numerous working groups in the H-ISAC Information Sharing and Analysis Organization (ISAO), including the Securing the Medical Device Lifecycle group.
And I have to include the FDA itself here, which is in the process of revising its 2018 premarket draft guidance; we hope to see the results of that effort in early 2021.
What changes do you expect to see in the medical devices cybersecurity field in the next 3-5 years?
So much is happening at high and low levels. For instance, I hope to see the FDA get more of a direct mandate from Congress to enforce security in medical devices.
Also, many working groups of highly talented people are working on ways to improve the security posture of devices, such as the NTIA SBOM effort to improve the transparency of software “ingredients” in a medical device, allowing end-users to quickly assess their risk level when new vulnerabilities are discovered.
Semiconductor manufacturers continue to give us great mitigation tools in hardware, such as side-channel protections, cryptographic accelerators, and virtualized security cores. Arm TrustZone is a great example.
And at the application level, we’ll continue to see more and better packaged tools, such as cryptographic libraries and processes, to help developers avoid cryptography mistakes. Also, we’ll see more and better process tools to automate the application of security controls to a design.
HDOs and other medical device purchasers are better informed than ever before about embedded cybersecurity features and best practices. That trend will continue, and will further accelerate demand for better-secured products.
I hope to see some effort at harmonization between all the federal, state, and foreign regulations that have been recently released with those currently under consideration.
One thing is certain: legacy medical devices that can’t be secured will only go away when we can replace them with new medical devices that are secure by design. Bringing new devices to market takes a long time. There’s lots of great innovation underway, but really, we’re just getting started!
Google aims to improve security of browser engines, third-party Android devices and apps on Google Play
Researchers must also bear the costs of fuzzing in advance, even though their approach may not discover any bugs – and, even if it does, they may not receive a reward for finding them. This can deter many of them and, consequently, bugs stay unfixed and exploitable for longer.
That’s why Google is offering $5,000 research grants in the form of Google Compute Engine credits.
Helping third parties in the Android ecosystem
The company is also set on improving the security of the Android ecosystem, and to that point it’s launching the Android Partner Vulnerability Initiative (APVI).
“Until recently, we didn’t have a clear way to process Google-discovered security issues outside of AOSP (Android Open Source Project) code that are unique to a much smaller set of specific Android OEMs,” the company explained.
“The APVI […] covers a wide range of issues impacting device code that is not serviced or maintained by Google (these are handled by the Android Security Bulletins).”
Already discovered issues and those yet to be unearthed have been/will be shared through this bug tracker.
Simultaneously, the company is looking for a Security Engineering Manager in Android Security who will, among other things, lead a team that “will perform application security assessments against highly sensitive, third party Android apps on Google Play, working to identify vulnerabilities and provide remediation guidance to impacted application developers.”
Maggie Jauregui’s introduction to hardware security is a fun story: she figured out how to spark, smoke, and permanently disable GFCI (Ground Fault Circuit Interrupter – the two button protections on plugs/sockets that prevent you from electrocuting yourself by accident with your hair dryer) wirelessly with a walkie talkie.
“I could also do this across walls with a directional antenna, and this also worked on AFCIs (Arc Fault Circuit Interrupters – part of the circuit breaker box in your garage), which meant you could drive by someone’s home and potentially turn off their lights,” she told Help Net Security.
Jauregui says she’s always been interested in hardware. She started out as an electrical engineering major but switched to computer science halfway through university, and ultimately applied to be an Intel intern in Mexico.
“After attending my first hackathon — where I actually met my husband — I’ve continued to explore my love for all things hardware, firmware, and security to this day, and have been a part of various research teams at Intel ever since,” she added. (She’s currently a member of the corporation’s Platform Armoring and Resilience team.)
What do we talk about when we talk about hardware security?
Computer systems – a category that these days includes everything from phones and laptops to wireless thermostats and other “smart” home appliances – are a combination of many hardware components (a processor, memory, i/o peripherals, etc.) that together with firmware and software are capable of delivering services and enabling the connected data centric world we live in.
Hardware-based security typically refers to the defenses that help protect against vulnerabilities targeting these devices, and its main focus is to make sure that the different hardware components working together are architected, implemented, and configured correctly.
“Hardware can sometimes be considered its own level of security because it often requires physical presence in order to access or modify specific fuses, jumpers, locks, etc,” Jauregui explained. This is why hardware is also used as a root of trust.
Hardware security challenges
But every hardware device has firmware – a tempting attack vector for many hackers. And though the industry has been making advancements in firmware security solutions, many organizations are still challenged by it and don’t know how to adequately protect their systems and data, she says.
She advises IT security specialists to be aware of firmware’s importance as an asset to their organization’s threat model, to make sure that the firmware on company devices is consistently updated, and to set up automated security validation tools that can scan for configuration anomalies within their platform and evaluate security-sensitive bits within their firmware.
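As a purely illustrative sketch of what such automated configuration validation might look like, the snippet below compares a platform's security-sensitive settings against an expected baseline. The setting names and values are invented; a real tool (the open-source CHIPSEC framework is one example in this space) reads such state from the platform itself:

```python
# Hypothetical baseline of security-sensitive firmware settings and the
# values we expect them to hold. Names are invented for illustration.
EXPECTED = {
    "flash_write_protect": 1,    # SPI flash should be write-protected
    "secure_boot_enabled": 1,
    "debug_interface_locked": 1,
}

def audit(actual: dict) -> list:
    """Return (setting, expected, actual) tuples for every anomaly found."""
    return [(name, want, actual.get(name))
            for name, want in EXPECTED.items()
            if actual.get(name) != want]

# Example: a platform where the debug interface was left unlocked.
print(audit({"flash_write_protect": 1,
             "secure_boot_enabled": 1,
             "debug_interface_locked": 0}))
```

Run on a schedule across a fleet, even a simple diff like this surfaces configuration drift long before an attacker finds it.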
“Additionally, Confidential Computing has emerged as a key strategy for helping to secure data in use,” she noted. “It uses hardware memory protections to better isolate sensitive data payloads. This represents a fundamental shift in how computation is done at the hardware level and will change how vendors can structure their application programs.”
Finally, the COVID-19 pandemic has somewhat disrupted the hardware supply chain and has brought to the fore another challenge.
“Because a computing system is typically composed of multiple components from different manufacturers, each with its own level of scrutiny in relation to potential supply chain attacks, it’s challenging to verify the integrity across all stages of its lifecycle,” Jauregui explained.
“This is why it is critical for companies to work together on a validation and attestation solution for hardware and firmware that can be conducted prior to integration into a larger system. If the industry as a whole comes together, we can create more measures to help protect a product through its entire lifecycle.”
Achieving security in low-end systems on chips
The proliferation of Internet of Things devices and embedded systems and our reliance on them should make the security of these systems extremely important.
As they commonly rely on systems on chips (SoCs) – integrated circuits that consolidate the components of a computer or other electronic system on a single microchip – securing these devices is a different proposition than securing “classic” computer systems, especially if they rely on low-end SoCs.
Jauregui says that there is no single blanket solution approach to implement security of embedded systems, and that while some of the general hardware security recommendations apply, many do not.
“I highly recommend that readers check out the book Demystifying Internet of Things Security, written by Intel scientists and Principal Engineers. It’s an in-depth look at the threat model, secure boot, chain of trust, and the SW stack leading up to defense-in-depth for embedded systems. It also examines the different security building blocks available in Intel Architecture (IA) based IoT platforms and breaks down some of the misconceptions of the Internet of Things,” she added.
“This book explores the challenges to secure these devices and provides suggestions to make them more immune to different threats originating from within and outside the network.”
For those security professionals who are interested in specializing in hardware security, she advises being curious about how things work and doing research, following folks doing interesting things on Twitter and asking them things, and watching hardware security conference talks and trying to reproduce the issues.
“Learn by doing. And if you want someone to lead you through it, go take a class! I recommend hardware security classes by Joe FitzPatrick and Joe Grand, as they are brilliant hardware researchers and excellent teachers,” she concluded.
Sitting in the midst of an unstable economy, a continued public health emergency, and facing an uptick in successful cyber attacks, CISOs find themselves needing to enhance their cybersecurity posture while remaining within increasingly scrutinized budgets.
Senior leadership recognizes the value of cybersecurity but understanding how to best allocate financial resources poses an issue for IT professionals and executive teams. As part of justifying a 2021 cybersecurity budget, CISOs need to focus on quick wins, cost-effective SaaS solutions, and effective ROI predictions.
Finding the “quick wins” for your 2021 cybersecurity budget
Cybersecurity, particularly with organizations suffering from technology debt, can be time-consuming. Legacy technologies, including internally designed tools, create security challenges for organizations of all sizes.
The first step to determining the “quick wins” for 2021 lies in reviewing the current IT stack for areas that have become too costly to support. For example, as workforce members moved off-premises during the current public health crisis, many organizations found that their technology debt made this shift difficult. With workers no longer accessing resources from inside the organization’s network, organizations with rigid technology stacks struggled to pivot their work models.
Going forward, remote work appears to be one way through the current health and economic crises. Even major technology leaders who traditionally relied on in-person workforces have moved to remote models through mid-2021, with Salesforce the most recent to announce this decision.
Looking for gaps in security, therefore, should be the first step in any budget analysis. As part of this gap analysis, CISOs can look in the following areas:
- VPN and data encryption
- Data and user access
- Cloud infrastructure security
Each of these areas can provide quick wins if addressed correctly: as organizations accelerate their digital transformation strategies to match these new workplace situations, they can now leverage cloud-native security solutions.
Adopting SaaS security solutions for accelerating security and year-over-year value
The SaaS-delivered security solution market exploded over the last five to ten years. As organizations moved their mission-critical business operations to the cloud, cybercriminals focused their activities on these resources.
Interestingly, a CNBC article from July 14, 2020 noted that for the first half of 2020, the number of reported data breaches dropped by 33%. Meanwhile, another CNBC article from July 29, 2020 notes that during the first quarter, large-scale data breaches increased by 273% compared to the same period in 2019. Although the data appears conflicting, the Identity Theft Resource Center research that informed the July 14th article specifically notes, “This is not expected to be a long-term trend as threat actors are likely to return to more traditional attack patterns to replace and update identity information needed to commit future identity and financial crimes.” In short, rapidly closing security gaps as part of a 2021 cybersecurity budget plan needs to include the fast wins that SaaS-delivered solutions provide.
SaaS security solutions offer two distinct budget wins for CISOs. First, they offer rapid integration into the organization’s IT stack. In some cases, CISOs can get a SaaS tool deployed within a few weeks, in other cases within a few months. Deployment time depends on the complexity of the problem being solved, the type of integrations necessary, and the enterprise’s size. However, in the same way that agile organizations leverage cloud-based business applications, security teams can leverage rapid deployment of cloud-based security solutions.
The second value that SaaS security solutions offer is YoY savings. Subscription models offer budget conscious organizations several distinct value propositions. First, the organization can reduce hardware maintenance costs, including operational costs, upgrade costs, software costs, and servicing costs. Second, SaaS solutions often enable companies to focus on their highest risk assets and then increase their usage in the future. Third, they allow organizations to pivot more effectively because the reduced up-front capital outlay reduces the commitment to the project.
Applying a dollar value to these during the budget justification process might feel difficult, but the right key performance indicators (KPIs) can help establish baseline cost savings estimates.
Choosing the KPIs for effective ROI predictions
During an economic downturn, justifying cybersecurity budget requests might be increasingly difficult. Most cybersecurity ROI predictions rely on risk evaluations, applying the probability of a data breach to its projected cost. As organizations look to reduce costs to remain financially viable, a “what if” approach may not be as appealing.
However, as part of budgeting, CISOs can look to several value propositions to bolster their spending. Cybersecurity initiatives focus on leveraging resources effectively so that they can ensure the most streamlined process possible while maintaining a robust security program. Aligning purchase KPIs with specific reduced operational costs can help gain buy-in for the solution.
A quick hypothetical can walk through the overarching value of SaaS-based security spending. Continuous monitoring for external-facing vulnerabilities is time-consuming and often inefficient. Hypothetical numbers based on research indicate:
- In a poll of C-level security executives, 37% said they received more than 10,000 alerts each month, with 52% of those alerts identified as false positives.
- The average security analyst spends ten minutes responding to a single alert.
- The average security analyst makes approximately $91,000 per year.
Bringing this data together shows the value of SaaS-based solutions that reduce the number of false positives:
- Every month, enterprise security analysts spend 10 minutes on each of the 5,200 false positives.
- This equates to approximately 866 hours.
- 866 hours, assuming a 40-hour week, is 21.65 weeks.
- Assuming 4 weeks per month, the enterprise needs at least 5 security analysts to manage false positive responses.
- These 5 security analysts cost a total of $455,000 per year in salary, not including bonuses and other benefits.
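The back-of-the-envelope arithmetic above can be checked in a few lines; all figures are the article's hypothetical estimates, not measured data:

```python
# Reproducing the hypothetical figures from the example above.
alerts_per_month = 10_000
false_positive_rate = 0.52     # 52% of alerts are false positives
minutes_per_alert = 10         # average response time per alert
analyst_salary = 91_000        # average annual salary, USD

false_positives = int(alerts_per_month * false_positive_rate)  # 5,200
hours_per_month = false_positives * minutes_per_alert / 60     # ~866 hours
weeks_per_month = hours_per_month / 40                         # ~21.65 forty-hour weeks
analysts_needed = int(weeks_per_month / 4)                     # at least 5 analysts
salary_cost = analysts_needed * analyst_salary                 # $455,000 per year

print(f"{analysts_needed} analysts, ${salary_cost:,} per year in salary")
```

Swapping in an organization's own alert volume and false-positive rate turns this into a concrete baseline for the KPI discussion that follows.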
Although CISOs may not want to reduce their number of team members, they may not want to add additional ones, or they may be seeking to optimize the team they have. Tracking KPIs such as the reduction in false positives per month can provide the type of long-term cost value necessary for other senior executives and the board of directors.
Securing a 2021 cybersecurity budget
While the number of attacks may have stalled during 2020, cybercriminals have not stopped targeting enterprise data. Phishing attacks and malware attacks have moved away from the enterprise network level and now look to infiltrate end-user devices. As organizations continue to pivot their operating models, they need to look for cost-effective ways to secure their sensitive resources and data. However, budget constrictions arising from 2020’s economic instability may make it difficult for CISOs to gain the requisite dollars to continue to apply best security practices.
As organizations start looking toward their 2021 roadmap, CISOs will increasingly need to be specific about not only the costs associated with purchases but also the cost savings that those purchases provide from both data incident risk and operational cost perspective.
The COVID-19 pandemic took most of us by surprise. Widespread shelter-in-place mandates changed how we work (and whether we can work), play, rest, shop, communicate and learn.
It changed things for businesses as well. Some were not ready to meet the challenge and closed up shop, many others were forced to hastily start or speed up their company’s existing digital transformation efforts and prepare for the majority of their workforce to be working from home – something that seemed impossible (or simply very, very unlikely) just months before.
Time for change
In times of upheaval, it becomes easier to imagine and enact change. Unfortunately, the speed at which all these changes happened has meant that cybersecurity has become less important than productivity (meaning: even less important than it was before).
But this downgrade won’t and can’t last long. With cyber attackers increasingly taking advantage of the many new attack surfaces – unsecured devices, databases, cloud assets, remote access and other accounts – organizations are now furiously trying to close as many security holes as possible, as quickly as possible.
Employed cybersecurity professionals have been having a tough time during the last few months, trying to keep company assets and networks out of the hands of attackers while suddenly having to support more remote workers than ever before.
The required security measures are known and advice for achieving remote work security is easy to get, but implementing it all takes time and effort. Even before the advent of COVID-19, organizations had trouble filling all the cybersecurity positions they opened – and their needs have surely intensified in the last few months.
Gunning for a career in cybersecurity
Cybersecurity professionals and other technology professionals are using eLearning and online trainings to pick up new skills, but as the demand for cybersecurity personnel increases and the availability of paid positions widens (while in many other economic sectors it is dwindling), many tech-savvy individuals are wondering: “Do I have what it takes to enter and thrive in the cybersecurity arena?”
A recent Skillsoft report says that networking and operating systems, security and programming training are in the highest demand among technology and developer professionals, and that security certification prep courses are up by 58 percent YoY.
While people already working in IT definitely have a leg up on other aspiring candidates since every role within IT has a cybersecurity aspect, certifications such as the (ISC)² Systems Security Certified Practitioner (SSCP) can help with cybersecurity knowledge acquisition and demonstrate the person’s suitability for entering the cybersecurity field.
But even recent college graduates without a deep technical background and military veterans can have a bright future in cybersecurity – if they know how to go about breaking into the field. The tools are there for those who want to use them.
COVID-19 has upended the way we do all things. In this interview, Mike Bursell, Chief Security Architect at Red Hat, shares his view of which IT security changes are ongoing and which changes enterprises should prepare for in the coming months and years.
How has the pandemic affected enterprise edge computing strategies? Has the massive shift to remote work created problems when it comes to scaling hybrid cloud environments?
The pandemic has caused major shifts in the ways we live and work, from video calls to increased use of streaming services, forcing businesses to embrace new ways to be flexible, scalable, efficient and cost-saving. It has also exposed weaknesses in the network architectures that underpin many companies, as they struggle to cope with remote working and increased traffic. We’re therefore seeing both an accelerated shift to edge computing, which takes place at or near the physical location of either the end-user or the data source, and further interest in hybrid cloud strategies which don’t require as much on-site staff time.
Changing your processes to make the most of this without damaging your security posture requires thought and, frankly, new policies and procedures. Get your legal and risk teams involved – but don’t forget your HR department. HR has a definite role to play in allowing your key employees to continue to do the job you need them to do, but in ways that are consonant with the new world we’re living in.
However, don’t assume that these will be – or should be! – short-term changes. If you can find more efficient or effective ways of managing your infrastructure, without compromising your risk profile while also satisfying new staff expectations, then everyone wins.
What would you say are the most significant challenges for enterprises that want to build secure and future-proof application infrastructures?
One challenge is that although some of the technology is now quite mature, the processes for managing it aren’t, yet. And by that I don’t just mean technical processes, but how you arrange your teams and culture to suit new ways of managing, deploying, and (critically) automating your infrastructure. Add to this new technologies such as confidential computing (using Trusted Execution Environments to protect data in use), and there is still a lot of change.
The best advice is to plan for change – technical, process and culture – but do not, whatever you do, leave security till last. It has to be front and centre of any plans you make. One concrete change that you can make immediately is taking your security people off just “fire-fighting duty”, where they have to react to crises as they come in: businesses can consider how to use them in a more proactive way.
People don’t scale, and there’s a global shortage of security experts. So, you need to use the ones that you have as effectively as you can, and, crucially, give them interesting work to do, if you plan to retain them. It’s almost guaranteed that there are ways to extend their security expertise into processes and automation which will benefit your broader teams. At the same time, you can allow those experts to start preparing for new issues that will arise, and investigating new technologies and methodologies which they can then reapply to business processes as they mature.
How has cloud-native management evolved in the last few years and what are the current security stumbling blocks?
One of the areas of both maturity and immaturity is in terms of workload isolation. We can think of three types: workload from workload isolation (preventing workloads from interfering with each other – type 1); host from workload isolation (preventing workloads from interfering with the host – type 2); workload from host isolation (preventing hosts from interfering with workloads – type 3).
The technologies for types 1 and 2 are really quite mature now, with containers and virtual machines combining a variety of hardware and software techniques such as virtualization, cgroups, and SELinux. On the other hand, protecting workloads from malicious or compromised hosts is much more difficult, meaning that regulators – and sensible enterprises! – are unwilling to have some workloads execute in the public cloud.
Technologies like secure and measured boot, combined with TPM capabilities by projects such as Keylime (which is fully open source), are beginning to address this, and we can expect major improvement as confidential computing (and open source projects like Enarx, which uses TEEs) matures.
In the past few years, we’ve seen a huge interest in Kubernetes deployments. What common mistakes are organizations making along the way? How can they be addressed?
One of the main mistakes we see businesses make is attempting to deploy Kubernetes without the appropriate level of in-house expertise. Kubernetes is an ecosystem, rather than a one-off executable, that relies on other services provided by open source projects. It requires IT teams to fully understand the architecture, which is made up of application and network layers.
Once implemented, businesses must also maintain the ecosystem in parallel to any software running on top. When it comes to implementation, businesses are advised to follow open standards – those decided upon by the open source Kubernetes community as a whole, rather than a specific vendor. This will prevent teams from running into unexpected roadblocks, and helps to ensure a smooth learning curve for new team members.
Another mistake organizations can make is ignoring small but important details, such as Kubernetes’ backwards compatibility with older versions. It’s easy to overlook the fact that older versions may lack important security updates, so IT teams must be mindful when merging code across versions, and check regularly for available updates.
Open source remains one of the building blocks of enterprise IT. What’s your take on the future of open source code in large business networks?
Open source is here to stay, and that’s a good thing, not least for security. The more security experts there are to look at code, the more likely that bugs will be found and fixed. Of course, security experts are thin on the ground, and busy, so it’s important that large enterprises make a commitment to getting involved with open source and committing resources to it.
Another issue is that people get confused, thinking that just because a project is open source, it’s ready to use. There’s a difference between an open source project and an enterprise product that is based on that project. In the latter case, you get all the benefits of testing, patching, upgrading, vulnerability processes, version management and support. In the former case, you need to manage everything yourself – including ensuring that you have sufficient expertise in house to cope with any issues that come up.
Recent research shows almost three quarters of large businesses believe remote working policies introduced to help stop the spread of COVID-19 are making their companies more vulnerable to cyberattacks. New attack vectors for opportunistic cyber attackers – and new challenges for network administrators – have been introduced.
To select a suitable remote workforce protection solution for your business, you need to think about a variety of factors. We’ve talked to several cybersecurity professionals to get their insight on the topic.
Vince Berk, VP, Chief Architect Security, Riverbed
A business needs to meet three main realizations or criteria for a remote workforce protection solution to be effective:
- Use of SaaS, where access to the traffic in traditional ways becomes challenging: understanding where data lives and who accesses it, and controlling this access, is the minimum bar to pass in an environment where packets are not available or the connection cannot be intercepted.
- Recognition that users use a multitude of devices – laptops, iPads, phones – many of which are not owned or controlled by the enterprise: can identity be established definitively, and can data access be controlled effectively and forensically accurately monitored for compromise at the cloud/datacenter end?
- When security becomes “too invasive,” workers create out-of-band business processes and “shadow IT,” which are a major blind spot as well as a potential risk surface as company private information ends up outside of the control of the organization: does the solution provide a way to discover and potentially control the use of this modern shadow IT?
A comprehensive security solution for remote work must acknowledge the novel problems these new trends bring and succeed in resolving them across all three criteria.
Kate Bolseth, CEO, HelpSystems
One thing must be clear: your entire management team needs to assist in establishing the right infrastructure in order to facilitate a successful remote workforce environment.
Before looking at any solutions, answer the following questions:
- How are my employees accessing data?
- How are they working?
- How can we minimize the risk of data breaches or inadvertent exposure of sensitive data?
- How do we discern what data is sensitive and needs to be protected?
The answers will inform organizational planning and facilitate employee engagement while removing potential security roadblocks that might thwart workforce productivity. These guidelines must be as fluid as the extraordinary circumstances we are facing without creating unforeseen exposure to risk.
When examining solutions, any option worth considering must be able to identify and classify sensitive personal data and critical corporate information assets. The deployment of enterprise-grade security is essential to protecting the virtual workforce from security breaches via personal computers as well as at-home Wi-Fi networks and routers.
Ultimately, it’s the flow of email that remains the biggest vulnerability for most organizations, so make sure your solution examines emails and files at the point of creation to identify personal data and apply proper protection while providing the link to broader data classification.
Carolyn Crandall, Chief Deception Officer, Attivo Networks
When selecting a remote workforce protection solution, CISOs need to consider three key areas: exposed endpoints, security for Active Directory (AD) and preventing malware from spreading.
Exposed endpoints: standard anti-virus software and VPNs are no match for advanced signature-less or file-less attack techniques. EDR tools enhance detection but still leave gaps. Therefore, pick an endpoint solution capable of quickly detecting endpoint lateral movement, discovery, and privilege escalation.
Security for Active Directory (AD): cloud services and identity access management need protection against credential theft, privilege escalation and AD takeover. In a remote workforce context AD is often over provisioned or misconfigured. A good answer is denial technology which detects discovery behaviors and attempts at privilege escalation.
Preventing spread of malware: it is almost impossible to prevent malware passing from workforce machines reconnecting to the network. It is therefore vital to choose a solution that uncovers lateral movement, APTs, ransomware, and insider threats. Popular options include EPP/EDR, Intrusion Detection/Prevention Systems (IDS/IPS) and deception technology. When selecting, take account of native integrations and automation, as well as how well the tools combine to share data and automate incident response.
In short, the answer to remote workforce protection lies in a robust, layered defence. If attackers get through one, there must be additional controls to stop them from progressing.
Daniel Döring, Technical Director Security and Strategic Alliances, Matrix42
Endpoint security requires a bundle of measures, and only companies that take all aspects into account can ensure a high level of security.
Automated malware protection: automated detection in case of anomalies and deviations is a fundamental driver for IT to be able to react quickly in case of an incident. In this way, it is often possible to fend off attacks before they even cause damage.
Device control: all devices that have access to corporate IT must be registered and secured in advance. This includes both corporate devices and private employee devices such as smartphones, tablets, or laptops. If, for example, a smartphone is lost, access to the system can be withdrawn at the click of a mouse.
App control: if, in addition to devices, all applications are centrally controlled by IT, IT risks can be further minimized. The IT department can thus control access at any time.
Encryption: the encryption of all existing data protects against the consequences of data loss.
Data protection at the technological and manual levels: automated and manual measures are combined for greater data protection. Employees must continue to be trained so that they are aware of risks. However, the secure management of data stocks can be simplified with the help of technology in such a way that error tolerance is significantly increased.
Greg Foss, Senior Cybersecurity Strategist, VMware Carbon Black
The most important aspect for any security solution is how this product is going to complement your current environment and compensate for gaps within your existing controls.
Whether you’re looking to upgrade your endpoint protections or add always-on VPN capability for the now predominately remote workforce, there are a few key considerations when it comes to deploying security software for protecting distributed assets:
- Will the solution require infrastructure to deploy, or will this be a remote cloud hosted solution? Both options come with their unique benefits and drawbacks, with cloud being optimal for disparate systems and offloading the burden of securing internet-facing services to the vendor.
- What is the footprint of the agent, and are multiple agents required for the solution to be effective? Compute is expensive; agents should have as little impact on the system as possible.
- How will this solution improve your security team’s visibility and ability to either prevent or respond to a breach? What key gaps in coverage will this tool help rectify as cost-effectively as possible?
- Will this meet the organization’s future needs, as things begin to shift back to the office?
- Lastly, ensure that you give the team room to operationalize and integrate the platform. This takes time. Don’t bring on too many tools at once.
Matt Lock, Technical Director, Varonis
With more remote working comes more cyberattacks. When selecting a remote workforce solution, CISOs must ask the following questions:
Am I able to provide comprehensive visibility of cloud apps? Microsoft Teams usage exploded by 500% during the pandemic; however, given the speed of its adoption, deployments were rushed, with misconfigured permissions. It’s paramount to pick a solution that shows security teams where sensitive data is overexposed and provides visibility into how each user can access Office 365 data.
Can I confidently monitor insider threat activity? The shift to remote working has seen a spike in insider threat activity and highlighted the importance of understanding where sensitive data is, who has access to it, who’s leveraging that access, and any unusual access patterns. Best practices such as implementing the principle of least privilege to confine user access to the data they need should also be considered.
Do I have real-time insight into anomalous behavior? Having real-time awareness of unusual VPN, DNS and web activity mustn’t be overlooked. Gaining visibility of this web activity helps security teams track and trend progress as they mitigate critical security gaps.
Selecting the right workforce protection solution will vary for different organizations depending on their priorities but the top priority of any solution must be to provide clear visibility of data across all cloud and remote environments.
Druce MacFarlane, Head of Products – Security, Threat Intelligence and Analytics, Infoblox
Enterprises investing in remote workforce security tools should consider shoring up their foundational security in a way that:
Secures corporate assets wherever they are located: backhauling traffic to a data center—for example with a VPN—can introduce latency and connectivity issues, especially when accessing cloud-based applications and services that are now essential for business operations. Look for solutions that extend the reach of your existing security stack, and leverage infrastructure you already rely on for connectivity to extend security, visibility, and control to the edge.
Optimizes your existing security stack: find a solution that works with your entire security ecosystem to cross-share threat intelligence, spot and flag suspicious activities, and automate threat response.
Offers flexible deployment: to get the most value for your spend, make sure the solution you choose can be deployed on-premises and in the cloud to offer security that cuts across your hybrid infrastructure, protecting your on-premises assets as well as your remote workforce, while allowing IT to manage the solution from anywhere.
The right solution to secure remote work should ideally enable you to scale quickly to optimize remote connections and secure corporate assets wherever they are located.
Faiz Shuja, CEO, SIRP Labs
In all the discussion around making remote working safer for employees, relatively little has been said about mechanisms governing distributed security monitoring and incident response teams working from home.
Normally, security analysts work within a SOC complete with advanced defences and tools. New special measures are needed to protect them while monitoring threats and responding to attacks from home.
Such measures include hardened machines with secure connectivity through VPNs, 2FA and jump machines. SOC teams also need to update security monitoring plans remotely.
Our advice to CISOs is to optimize security operations and monitoring platforms so that all essential cybersecurity information needed for accurate decision-making is contextualized and visible at-a-glance to a remote security analyst.
Practical measures include:
- Unify the view for distributed security analysts to monitor and respond to threats
- Ensure proper communication and escalation between security teams and across the organization through defined workflows
- Use security orchestration and automation playbooks for repetitive investigation and incident response tasks for consistency across all distributed security analysts
- Align risk matrix with evolving threat landscape
- Enhance security monitoring use cases for remote access services and remotely connected devices
One notable essential is the capacity to constantly tweak risk levels to quickly realign priorities and optimise the detection and response effectiveness of individual security team members.
Todd Weber, CTO, Americas, Optiv Security
Selecting a remote workforce protection solution is more about scale these days than technology. Companies have been providing work-from-home solutions for several years, but not necessarily for all applications.
How granular can you get on access to applications based on certain conditions?
Credentials alone (even with multi-factor authentication) are no longer enough to judge trusted access to critical applications. Factors such as which device the user is on, how trusted that device is, and where in the world it is located all play a role, and remote access solutions need to accommodate granular access to applications based on these criteria.
Can I provide enhanced transport and access to applications with the solution?
The concept of SD-WAN is not new, but it has become more important as SaaS applications and distributed workforces have become more prevalent. Providing optimal network transport, as well as a visibility point for user and data controls, has become vitally important.
Does the solution provide protections for cloud SaaS applications?
Many applications are no longer hosted by companies themselves and aren’t in the direct path of many controls. Can you deploy granular controls within the solution that provide both visibility and access restrictions for IaaS and SaaS applications?
Traditional password-based security might be headed for extinction, but that moment is still far off.
In the meantime, most of us need something to prevent our worst instincts when it comes to choosing passwords: using personal information, predictable (e.g., sequential) keystroke patterns, password variations, well-known substitutions, single words from a dictionary and – above all – reusing the same password for many different private and enterprise accounts.
What does a modern password policy look like?
While using unique passwords for every account is a piece of advice that has withstood the test of time (though not the test of widespread compliance), people also used to be told that they should use a mix of letters, numbers and symbols and change their passwords every 90 days – recommendations that the evolving threat landscape has made obsolete and even somewhat harmful.
In the past decade, academic research on the topic of password practices and insights gleaned from passwords compromised in breaches have revealed what people were actually doing when they were creating passwords. This helped unseat some of the prevailing password policies that were in place for so long, Josh Horwitz, Chief Operations Officer of Enzoic, told Help Net Security.
The latest NIST-sanctioned advice regarding enterprise password policies (as delineated in NIST Special Publication 800-63B) includes, among other things, the removal of the requirement for character composition rules and for mandatory periodic password changes. Those are recommendations that are also being promulgated by Microsoft.
As data breaches now happen every single day and attackers are trying out the revealed passwords on different accounts in the hope that the user has reused them, NIST also advises companies to verify that passwords are not compromised before they are activated and check their status on an ongoing basis, against a dynamic database comprised of known compromised credentials.
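One widely used way to run such a check without ever sending the password anywhere is the k-anonymity scheme popularized by the Have I Been Pwned "Pwned Passwords" range API: hash the password with SHA-1, transmit only the first five hex characters, and compare the returned hash suffixes locally. A minimal sketch follows; the canned response string stands in for the live API call:

```python
# k-anonymity compromised-password check: only the first five hex
# characters of the SHA-1 hash would leave your network.

import hashlib

def hash_prefix_suffix(password: str) -> tuple[str, str]:
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def breach_count(suffix: str, range_response: str) -> int:
    """Parse a 'SUFFIX:COUNT' range response for our hash suffix."""
    for line in range_response.splitlines():
        candidate, _, count = line.partition(":")
        if candidate.strip() == suffix:
            return int(count)
    return 0

prefix, suffix = hash_prefix_suffix("password")
# In production you would fetch https://api.pwnedpasswords.com/range/<prefix>
# and feed the response body in; here a canned response illustrates the format.
canned = f"{suffix}:3730471\n0123456789ABCDEF0123456789ABCDEF012:2"
print(prefix, breach_count(suffix, canned))  # 5BAA6 3730471
```

Because the server only ever sees a five-character prefix shared by hundreds of hashes, it cannot tell which password was actually checked.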
The need for modern tools
The trouble is that most older password policy tools don’t provide a way to check whether a password is strong and uncompromised once it has been chosen or set.
There’s really only one that both checks the passwords at creation and continuously monitors their resilience to credential stuffing attacks, by checking them against a massive (7+ billion) database of compromised credentials that is updated every single day.
“Some organizations will gather this information from the dark web and other places where you can get lists of compromised passwords, but most tools aren’t designed to incorporate it and it’s still a very manual process to try to keep that information up to date. It’s effectively really hard to maintain the breadth and frequency of data updates that are required for this approach to work as it should,” Horwitz noted.
But for Enzoic, this is practically one of its core missions.
“We have people whose full-time job is to go out and gather threat intelligence, databases of compromised passwords, and cracking dictionaries. We’ve also invested substantially in proprietary technology to automate that process of collection, cleansing and indexing of that information,” he explained.
“Our database is updated multiple times each day, and we’re really getting the breadth of data out there, by integrating both large and small compromised databases in our list – because hackers will use any database they can get their hands on, not just those stolen in well-publicized data breaches.”
Enzoic for Active Directory
This constantly updated list/database is what powers Enzoic for Active Directory, a tool (plug-in) that integrates into Active Directory and enforces additional password rules to prevent users from using compromised credentials.
The solution checks the password both when it’s created and when it’s reset and checks it daily against this real-time compromised password database. Furthermore, it does so automatically, without the IT team having to do anything except set it up once.
Enzoic for AD is able to detect and prevent the use of:
- Fuzzy variations of compromised passwords
- Unsafe passwords consisting of an often-used root word and a few trailing symbols and numbers
- New passwords that are too similar to the one the user previously used
- Passwords that employees at specific organizations are expected to choose (this is accomplished by using a custom dictionary that can be tailored to each organization)
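As a rough illustration of how the fuzzy-variation and similarity checks listed above can work (this is not Enzoic's actual algorithm), a simple edit-distance ratio already catches the classic "increment the year" rotation:

```python
# Hypothetical sketch of fuzzy-variation and similarity checks: reject a
# new password that closely resembles a compromised one or the user's
# previous password. The 0.8 threshold is an arbitrary example value.

from difflib import SequenceMatcher

def too_similar(a: str, b: str, threshold: float = 0.8) -> bool:
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

def rejects(new_password: str, old_password: str, compromised: set[str]) -> bool:
    # Fuzzy match against known-compromised passwords...
    if any(too_similar(new_password, bad) for bad in compromised):
        return True
    # ...and against the password the user had before.
    return too_similar(new_password, old_password)

print(rejects("Summer2020!", "Summer2019!", set()))           # True: trivial rotation
print(rejects("xK9#mQ2$vL8p", "Summer2019!", {"password1"}))  # False
```

Production tools apply far more sophisticated normalization (substitutions, root-word extraction, custom dictionaries), but even this crude ratio shows why "change one character" is not a real password change.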
The tool uses a standard password filter object to create a new password policy that works anywhere that defers to Active Directory, including Azure AD and third-party password reset tools.
Can multi-factor authentication save us?
Many will wonder whether such a tool is really crucial for keeping AD accounts safe. “What if we also use multi-factor authentication? Doesn’t that solve our authentication problems and keep us safe from attacks?”
In reality, passwords remain part of every environment, and not every authentication event includes multi-factor authentication (MFA).
“You can offer MFA, but until you actually require its use and get rid of the password, there are always going to be doors in that attackers can use,” Horwitz pointed out.
“NIST also makes it very clear that authentication security should include multiple layers, and that each of these layers – including the password layer – need to be hardened.”
Do you really need Enzoic for Active Directory?
Enzoic has made it easy for enterprises to check whether some of the AD passwords used by their employees are weak or have been compromised: they can deploy a free password auditing tool (Enzoic for Active Directory Lite) to take a quick snapshot of their domain’s password security state.
“Some password auditing tools take a long time to try to brute-force passwords, but attackers are much more likely to start their efforts with compromised passwords,” Horwitz added.
“Our tool takes just minutes to perform the audit, it’s simple to run, and allows IT and IT security leaders and professionals to realize the extent of the problem and to easily communicate the issue to the business side.”
Enzoic for Active Directory is likewise simple to install and use, and is built for easy implementation and automatic maintenance of the modern password policy.
“It’s a low complexity tool, but this is where it really shines: it allows you to screen passwords against a massive database of compromised passwords that gets updated every day – and allows you to do this at lightning speed, so that it can be done at the time that the password is being created without any friction or interruption to the user – and it rechecks that password each day, to detect when a password is no longer secure and trigger/mandate a password change.”
Aside from checking the passwords against this constantly updated list, it also prevents users from using:
- Common dictionary words or words that are often used for passwords (e.g., names of sports teams)
- Expected passwords and those that are too similar to users’ old password
- Context-specific passwords and variations (e.g., words that are specific to the business the enterprise is in, or words that employees living in a specific town or region might use)
- User-specific passwords and variations (e.g., their first name, last name, username, email address – based on those field values in Active Directory)
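The last bullet can be sketched as a per-user blocklist derived from directory fields. The attribute names below mirror common Active Directory attributes (givenName, sn, sAMAccountName, mail), but the lookup itself is a hypothetical illustration, not the product's implementation:

```python
# Illustrative per-user banned-term check built from directory field
# values (names, username, email local part). Example attributes only.

def user_banned_terms(user: dict[str, str]) -> set[str]:
    terms = {user.get("givenName", ""), user.get("sn", ""),
             user.get("sAMAccountName", "")}
    email = user.get("mail", "")
    if "@" in email:
        terms.add(email.split("@", 1)[0])  # local part of the address
    return {t.lower() for t in terms if t}

def contains_user_term(password: str, user: dict[str, str]) -> bool:
    lowered = password.lower()
    return any(term in lowered for term in user_banned_terms(user))

alice = {"givenName": "Alice", "sn": "Nguyen",
         "sAMAccountName": "anguyen", "mail": "alice.nguyen@example.com"}
print(contains_user_term("Anguyen2024!", alice))  # True: username embedded
print(contains_user_term("T7#rockets!", alice))   # False
```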
Time and time again, it has been proven that if left to their own devices, users will employ predictable patterns when choosing a password and will reuse one password over multiple accounts.
When the compromised account doesn’t hold sensitive information or allows access to sensitive assets, these practices might not lead to catastrophic results for the user. But the stakes are much higher when it comes to enterprise accounts, and especially Active Directory accounts, as AD is most companies’ primary solution for access to network resources.
Traditional endpoint detection and response (EDR) solutions focus only on endpoint activity to detect attacks. As a result, they lack the context to analyze attacks accurately.
In this interview, Sumedh Thakar, President and Chief Product Officer, illustrates how Qualys fills the gaps by introducing a new multi-vector approach and the unifying power of its Cloud Platform to EDR, providing essential context and visibility to the entire attack chain.
How does Qualys Multi-Vector EDR differ from traditional EDR solutions?
Traditional EDR solutions focus only on endpoint activity, which lacks the context necessary to accurately analyze attacks and leads to a high rate of false positives. This can put an unnecessary burden on incident response teams and requires the use of multiple point solutions to make sense of it all.
Qualys Multi-Vector EDR leverages the strength of EDR while also extending the visibility and capabilities beyond the endpoint to provide a more comprehensive approach to protection. Multi-Vector EDR integrates with the Qualys Cloud Platform to deliver vital context and visibility into the entire attack chain while dramatically reducing the number of false positives and negatives as compared with traditional EDR.
This integration unifies multiple context vectors like asset discovery, rich normalized software inventory, end-of-life visibility, vulnerabilities and exploits, misconfigurations, in-depth endpoint telemetry and network reachability all correlated for assessment, detection and response in a single app. It provides threat hunters and incident response teams with crucial, real-time insight into what is happening on the endpoint.
Vectors and attack surfaces have multiplied. How do we protect these systems?
Many attacks today are multi-faceted. The suspicious or malicious activity detected at the endpoint is often only one small part of a larger, more complex attack. Companies need visibility across the environment to fully understand the attack and its impact on the endpoint—as well as the potential consequences elsewhere on their network. This is where Qualys’ ability to gather and assess the contextual data on any asset via Qualys Global IT Asset Inventory becomes so important.
The goal of EDR is detection and response, but you need a holistic view to do it effectively. When a threat or suspicious activity is detected, you need to act quickly to understand what the information or indicator means, and how you can pivot to take action to prevent any further compromise.
How can security teams take advantage of Qualys Multi-Vector EDR?
Attack prevention and detection are two sides of the same coin for security teams. With current endpoint tools focusing solely on endpoint telemetry, security teams end up bringing in multiple point solutions and threat intelligence feeds to figure out what is happening in their environment.
On top of that, they need to invest their budget and time in integrating these solutions and correlating data for actionable insights. With Qualys EDR, security teams can continuously collate asset telemetry such as processes, files and hashes to detect malicious activities and correlate it with natively integrated threat intel for prioritized, score-based response actions.
Instead of reactively taking care of malicious events one endpoint at a time, security teams can easily pivot to inspect other endpoints across the hybrid infrastructure for exploitable vulnerabilities, MITRE-based misconfigurations, end-of-life or unapproved software and systems that lack critical patches.
Additionally, through native workflows that provide exact recommendations, security and IT teams can patch or remediate the endpoints for the security findings. This is an improvement over previous methods which require handshaking of data from one tool to another via complex integrations and manual workflows.
For example, Qualys EDR can help security teams not only detect MITRE-based attacks and malicious connections due to RDP (remote desktop) exploitation but can also provide visibility across the infrastructure. This highlights endpoints that can connect to the exploited endpoint and have RDP vulnerabilities or a MITRE-mapped configuration failure such as LSASS. Multi-Vector EDR then lets the user patch vulnerabilities and automatically remediate misconfigurations.
Thus, Qualys’ EDR solution is designed to equip security teams with advanced detections based on multiple vectors and rapid response and prevention capabilities, minimizing human intervention and simplifying the entire security investigation and analysis process for organizations of all sizes. Security practitioners can sign up for a free trial.
What response strategies does Qualys Multi-Vector EDR use?
Qualys EDR, with its multi-layered, highly scalable cloud platform, retains telemetry data for active and historical views and natively correlates it with multiple external threat intelligence feeds. This eliminates the need to rely on a single malware database and provides a prioritized, risk-based threat view. It helps security teams hunt for threats proactively and reactively with unified context across all security vectors, reducing alert fatigue and letting them concentrate on what is critical.
Qualys EDR provides comprehensive response capabilities that go beyond traditional EDR options, like killing processes and network connections, quarantining files, and much more. In addition, it uniquely orchestrates responses such as preventing future attacks by correlating exploitable-to-malware vulnerabilities automatically, patching endpoints and software directly from the cloud, and downloading patches from the vendor’s website without going through the VPN bandwidth.
91 percent of people know that using the same password on multiple accounts is a security risk, yet 66 percent continue to use the same password anyway. IT security practitioners are aware of good habits when it comes to strong authentication and password management, yet often fail to implement them due to poor usability or inconvenience.
To select a suitable password management solution for your business, you need to think about a variety of factors. We’ve talked to several cybersecurity professionals to get their insight on the topic.
Simran Anand, Head of B2B Growth, Dashlane
An organization’s security chain is only as strong as its weakest link – so selecting a password manager should be a top priority among IT leaders. While most look to the obvious: security (high grade encryption, 2FA, etc.), support, and price, it’s critical to also consider the end-user experience. Why? Because user adoption remains by far IT’s biggest challenge. Only 17 percent of IT leaders incorporate the end-user experience when evaluating password management tools.
It’s not surprising, then, that those who have deployed a password manager in their company report only 23 percent adoption by employees. The end-UX has to be a priority for IT leaders who aim to guarantee secure processes for their companies.
Password management is too important a link in the security chain to be compromised by a lack of adoption (and simply telling employees to follow good password practices isn’t enough to ensure it actually happens). For organizations to leverage the benefits of next-generation password security, they need to ensure their password management solution is easy to use – and subsequently adopted by all employees.
Gerald Beuchelt, CISO, LogMeIn
As the world continues to navigate a long-term future of remote work, cybercriminals will continue to target users with poor security behaviors, given the increased time spent online due to COVID-19. Although organizations and people understand that passwords play a huge role in one’s overall security, many continue to neglect best password practices. For this reason, businesses should implement a password management solution.
It is essential to look for a password management solution that:
- Monitors poor password hygiene and provides visibility to the improvements that could be made to encourage better password management.
- Standardizes and enforces policies across the organization to support proper password protection.
- Provides a secure password management portal for employees to access all account passwords conveniently.
- Reports IT insights to provide a detailed security report of potential threats.
- Equips IT to audit the access controls users have with the ability to change permissions and encourage the use of new passwords.
- Integrates with previous and existing infrastructure to automate and accelerate workflows.
- Oversees when users share accounts to maintain a sense of security and accountability.
Using an effective password management solution is crucial to protecting business information. Finding the right solution will not only help to improve employee password behaviors but also increase your organization’s overall online security.
Michael Crandell, CEO, Bitwarden
Employees, like many others, face the daily challenge of remembering passwords to securely work online. A password manager simplifies generating, storing, and sharing unique and complex passwords – a must-have for security.
There are a number of reputable password managers out there. Businesses should prioritize those that work cross-platform and offer affordable plans. They should consider whether the solution can be deployed in the cloud or on-premises. A self-hosting option is preferred by some organizations for security and internal compliance reasons.
Password managers need to be easy-to-use for every level of user – from beginner to advanced. Any employee should be able to get up and running in minutes on the devices they use.
As of late, many businesses have shifted to a remote work model, which has highlighted the importance of online collaboration and the need to share work resources online. With this in mind, businesses should prioritize options that provide a secure way to share passwords across teams. Doing so keeps everyone’s access secure even when they’re spread out across many locations.
Finally, look for password managers built around an open source approach. Being open source means the source code can be vetted by experienced developers and security researchers who can identify potential security issues, and even contribute to resolving them.
Matt Davey, COO, 1Password
65% of people reuse passwords for some or all of their accounts. Often, this is because they don’t have the right tools to easily create and use strong passwords, which is why you need a password manager.
Opt for a password manager that gives you oversight over the things that matter most to your business: who’s signed in from where, who last accessed certain items, and which email addresses on your domain have been included in a breach.
To keep the admin burden low, look for a password manager that allows you to manage access by groups, delegate admin powers, and manage users at scale. Depending on the structure of your business, it can be useful to grant access to information by project, location, or team.
You’ll also want to think about how a password manager will fit with your existing IAM/security stack. Some password managers integrate with identity providers, streamlining provisioning and administration.
Above all, if you want your employees to adopt your password manager of choice, make sure it’s easy to use: a password manager will only keep you secure if your employees actually use it.
Enterprise resource planning (ERP) systems are an indispensable tool for most businesses. They allow them to track business resources and commitments in real time and to manage day-to-day business processes (e.g., procurement, project management, manufacturing, supply chain, human resources, sales, accounting, etc.).
The various applications integrated in ERP systems collect, store, manage, and interpret sensitive data from the many business activities, which allows organizations to improve their efficiency in the long run.
Needless to say, the security of such a crucial system and all the data it stores should be paramount for every organization.
Common misconceptions about ERP security
“Since ERP systems have a lot of moving parts, one of the biggest misconceptions is that the built-in security is enough. In reality, while you may not have given access to your company’s HR data to a technologist on your team, they may still be able to access the underlying database that stores this data,” Mike Rulf, CTO of Americas Region, Syntax, told Help Net Security.
“Another misconception is that your ERP system’s access security is robust enough that you can allow people to access their ERP from the internet.”
In actual fact, the technical complexity of ERP systems means that security researchers are constantly finding vulnerabilities in them, and businesses that make them internet-facing and don’t think through or prioritize protecting them create risks that they may not be aware of.
When securing your ERP systems you must think through all the different ways someone could potentially access sensitive data and deploy business policies and controls that address these potential vulnerabilities, Rulf says. Patching security flaws is extremely important, as it ensures a safe environment for company data.
Advice for CISOs
While patching is necessary, it’s true that business leaders can’t disrupt day-to-day business activity for every new patch.
“Businesses need some way to mitigate any threats between when patches are released and when they can be fully tested and deployed. An application firewall can act as a buffer to allow a secure way to access your proprietary technology and information during this gap. Additionally, an application firewall allows you to separate security and compliance management from ERP system management enabling the checks and balances required by most audit standards,” he advises.
He also urges CISOs to integrate the login process with their corporate directory service, such as Active Directory, so they don’t have to remember to turn off an employee’s credentials in multiple systems when the employee leaves the company.
To make mobile access to ERP systems safer for a remote workforce, CISOs should definitely leverage multi-factor authentication that forces employees to prove their identity before accessing sensitive company information.
“For example, Duo sends a text to an employee’s phone when logging in outside the office. This form of security ensures that only the people granted access can utilize those credentials,” he explained.
VPN technology should also be used to protect ERP data when employees access it from new devices and unfamiliar Wi-Fi networks.
“VPNs today can enable organizations to validate that these new/unfamiliar devices adhere to a minimum security posture: for example, allowing only devices with a firewall configured and appropriate malware detection tools installed to access the network. In general, businesses can’t really ever know where their employees are working and what network they’re on. So, using VPNs to encrypt that data being sent back and forth is crucial.”
On-premise vs. cloud ERP security?
The various SaaS applications in your ERP, such as Salesforce and Oracle Cloud Apps, leave you beholden to those service providers to manage your applications’ security.
“You need to ask your service providers about their audit compliance and documentation. Because they are providing services critical to your business, you will be asked about these third parties by auditors during a SOC audit. You’ll thus need to expand your audit and compliance process (and the time it takes) to include an audit of your external partners,” Rulf pointed out.
“Also, when you move to AWS or Azure, you’re essentially building a new virtual data center, which requires you to build and invest in new security and management tools. So, while the cloud has a lot of great savings, you need to think about the added and unexpected costs of things like expanded audit and compliance.”
One of the cornerstones of a security leader’s job is to successfully evaluate risk. A risk assessment is a thorough look at everything that can impact the security of an organization. When a CISO determines the potential issues and their severity, measures can be put in place to prevent harm from happening.
To select a suitable risk assessment solution for your business, you need to think about a variety of factors. We’ve talked to several cybersecurity professionals to get their insight on the topic.
Jaymin Desai, Offering Manager, OneTrust
First, consider what type of assessments or control content, such as frameworks, laws, and standards, is readily available for your business (e.g., NIST, ISO, CSA CAIQ, SIG, HIPAA, PCI DSS, NYDFS, GDPR, EBA, CCPA). This is an area where you can leverage templates to bypass building and updating your own custom records.
Second, consider the assessment formats. Look for a technology that can automate workflows to support consistency and streamline completion. This level of standardization helps businesses scale risk assessments to the line of business users. A by-product of workflow-based structured evaluations is the ability to improve your reporting with reliable and timely insights.
One other key consideration is how the risk assessment solution can scale with your business. This is important in evaluating your efficiency over time. Are the assessments static exports to Excel, or can they be integrated into a live risk register? Can you map insights gathered from responses to adjust risk across your assets, processes, vendors, and more? Consider the core data structure and how you can model and adjust it as your business changes and your risk management program matures.
The solution should enable you to discover, remediate, and monitor granular risks in a single, easy-to-use dashboard while engaging with the first line of your business to keep risk data current and context-rich with today’s information.
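The "live risk register" contrasted with static Excel exports above can be sketched as a small data structure whose scores update as assessment responses come in. The likelihood-times-impact scoring model is an assumption for illustration, not a claim about any particular product.

```python
from dataclasses import dataclass

# Illustrative sketch of a "live" risk register: each entry carries a
# likelihood and an impact rating (1-5), the score is recomputed whenever
# an assessment response updates an entry, and the register can report the
# highest-scoring risks. The scoring model (likelihood x impact) is assumed.

@dataclass
class Risk:
    name: str
    likelihood: int  # 1 (rare) .. 5 (almost certain)
    impact: int      # 1 (negligible) .. 5 (severe)

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

class RiskRegister:
    def __init__(self):
        self._risks = {}

    def upsert(self, risk: Risk) -> None:
        """Add a risk, or overwrite it when a new assessment response arrives."""
        self._risks[risk.name] = risk

    def top(self, n: int = 3) -> list:
        """Highest-scoring risks first, for dashboard-style reporting."""
        return sorted(self._risks.values(), key=lambda r: r.score, reverse=True)[:n]
```

Because entries are overwritten in place, a response that raises a vendor's likelihood immediately changes its position in the reporting view, which is exactly what a static export cannot do.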
Brenda Ferraro, VP of Third Party Risk, Prevalent
The right risk assessment solution will drive program maturity from compliance, to data breach avoidance, to third-party risk management.
There are seven key fundamentals that must be considered:
- Network repository: Uses the “fill out once, use with many” approach to rapidly obtain risk information awareness.
- Vendor risk visibility: Harmonizes inside-out and outside-in vendor risk and proactively shares actionable insights to enhance decision-making on prioritization, remediation, and compliance.
- Flexible automation: Helps the enterprise to place focus quickly and accurately on risk management, not administrative tasks, to reduce third-party risk management process costs.
- Scalability: Adapts to changing processes, risks, and business needs.
- Tangible ROI: Reduces time and costs associated with the vendor management lifecycle to justify cost.
- Advisory and managed services: Has subject matter experts to assist with improving your program by leveraging the solution.
- Reporting and dashboards: Provides real-time intelligence to drive more informed, risk-based decisions internally and externally at every business level.
The right risk assessment solution selection will enable dynamic evolution for you and your vendors by using real-time visibility into vendor risks, more automation and integration to speed your vendor assessments, and by applying an agile, process-driven approach to successfully adapt and scale your program to meet future demands.
Fred Kneip, CEO, CyberGRX
Organizations should look for a scalable risk assessment solution that can deliver informed, risk-reducing decision making. To be truly valuable, risk assessments need to go beyond lengthy questionnaires that serve as check-the-box exercises and provide no insight, and beyond a simple outside-in rating that, alone, can be misleading.
Rather, risk assessments should help you collect accurate and validated risk data that enables decision making and, ultimately, allows you to identify and reduce risk across your ecosystem at the individual level as well as the portfolio level.
Optimal solutions will help you identify which vendors pose the greatest risk and require immediate attention as well as the tools and data that you need to tell a complete story about an organization’s third-party cyber risk efforts. They should also help leadership understand whether risk management efforts are improving the organization’s risk posture and if the organization is more or less vulnerable to an adverse cyber incident than it was last month.
Jake Olcott, VP of Government Affairs, BitSight
Organizations are now being held accountable for the performance of their cybersecurity programs, and ensuring businesses have a strong risk assessment strategy in place can have a major impact. The best risk assessment solutions meet four specific criteria: they are automated, continuous, comprehensive, and cost-effective.
Leveraging automation for risk assessments means that the technology is taking the brunt of the workload, giving security teams more time back to focus on other important tasks to the business. Risk assessments should be continuous as well. Taking a point-in-time approach is inadequate, and does not provide the full picture, so it’s important that assessments are delivered on an ongoing basis.
Risk assessments also need to be comprehensive and cover the full breadth of the business including third and fourth party risks, and address the expanding attack surface that comes with working from home.
Lastly, risk assessments need to be cost-effective. As budgets are being heavily scrutinized across the board, ensuring that a risk assessment solution does not require significant resources can make a major impact for the business and allow organizations to maximize their budgets to address other areas of security.
Mads Pærregaard, CEO, Human Risks
When you pick a risk assessment tool, you should look for three key elements to ensure a value-adding and effective risk management program:
1. Reduce reliance on manual processes
2. Reduce complexity for stakeholders
3. Improve communication
Tools that rely on constant manual data entry, depend on someone remembering to make updates, or use a complicated risk methodology will likely lead to outdated information and errors, meaning valuable time is lost and decisions are made too late or on the wrong basis.
Tools that automate processes and data gathering give you awareness of critical incidents faster, reducing response times. They also reduce dependency on a few key individuals that might otherwise have responsibility for updating information, which can be a major point of vulnerability.
Often, non-risk management professionals are involved with or responsible for implementation of mitigating measures. Look for tools that are user-friendly and intuitive, so it takes little training time and teams can hit the ground running.
Critically, you must be able to communicate the value that risk management provides to the organization. The right tool will help you keep it simple, and communicate key information using up-to-date data.
Steve Schlarman, Portfolio Strategist, RSA Security
Given the complexity of risk, risk management programs must rely on a solid technology infrastructure, and a centralized platform is a key ingredient of success. Risk assessment processes need to share data and establish workflows that promote a strong governance culture.
Choosing a risk management platform that can not only solve today’s tactical issues but also lay a foundation for long-term success is critical.
Business growth is interwoven with technology strategies and therefore risk assessments should connect both business and IT risk management processes. The technology solution should accelerate your strategy by providing elements such as data taxonomies, workflows and reports. Even with best practices within the technology, you will find areas where you need to modify the platform based on your unique needs.
The technology should make that easy. As you engage more front-line employees and cross-functional groups, you will need the flexibility to make adjustments. There are some common entry points to implement risk assessment strategies but you need the ability to pivot the technical infrastructure towards the direction your business needs.
You need a flexible platform to manage multiple dimensions of risk and choosing a solution provider with the right pedigree is a significant consideration. Today’s risks are too complex to be managed with a solution that’s just “good enough.”
Yair Solow, CEO, CyGov
The starting point for any business should be clarity on the frameworks they are looking to cover both from a risk and compliance perspective. You will want to be clear on what relevant use cases the platform can effectively address (internal risk, vendor risk, executive reporting and others).
Once this has been clarified, it is a question of weighing up a number of parameters. For a start, how quickly can you expect to see results? Will it take days, weeks, months or perhaps more? Businesses should also weigh up the quality of user experience, including how difficult the solution is to customize and deploy. In addition, it is worth considering the platform’s project management capabilities, such as efficient ticketing and workflow assignments.
Usability aside, there are of course several important factors when it comes to the output itself. Is the data produced by the solution in question automatically analyzed and visualized? Are the automatic workflows replacing manual processes? Ultimately, in order to assess the platform’s usefulness, businesses should also be asking to what extent the data is actionable, as that is the most important output.
This is not an exhaustive list, but these are certainly some of the fundamental questions any business should be asking when selecting a risk assessment solution.
As time passes, state-backed hacking is becoming an increasingly serious problem, with the attackers stealing money, information, credit card data, intellectual property, and state secrets, and probing critical infrastructure.
While Chinese, Russian, North Korean and Iranian state-backed APT groups get most of the spotlight (at least in the Western world), other nations are beginning to join in the “fun.”
It’s a free for all, it seems, as the world has yet to decide on laws and norms regulating cyber attacks and cyber espionage in peacetime, and find a way to make nation-states abide by them.
There is so far one international treaty on cybercrime (the Council of Europe Convention on Cybercrime), accepted by the nations of the European Union, the United States, and other like-minded allies, notes Dr. Panayotis Yannakogeorgos. Because it is contested by Russia and China, it is not global and applies only to the signatories.
Dr. Yannakogeorgos, who’s a professor and faculty lead for a graduate degree program in Global Security, Conflict, and Cybercrime at the NYU School of Professional Studies Center for Global Affairs, believes this treaty could be both a good model text on which nations around the world can harmonize their own domestic criminal codes, as well as the means to begin the lengthy diplomatic negotiations with Russia and China to develop an international criminal law for cyber.
Cyber deterrence strategies
In the meantime, states are left to their own devices when it comes to devising a cyber deterrence strategy.
The US has been publicly attributing cyber espionage campaigns to state-backed APTs and regularly releasing technical information related to those campaigns, its legislators have been introducing legislation that would lead to sanctions for foreign individuals engaging in hacking activity that compromises economic and national security or public health, and its Department of Justice has been steadily pushing out indictments against state-backed cyber attackers and spies.
But while, for example, indictments by the US Department of Justice cannot reasonably be expected to result in the extradition of a hacker who has been accused of stealing corporate or national security secrets, the indictments and other forms of public attribution of cyber enabled malicious activities serve several purposes beyond public optics, Dr. Yannakogeorgos told Help Net Security.
“First, they send a clear signal to China and the world on where the United States stands in terms of how governmental resources in cyberspace should be used by responsible state actors. That is, in order to maintain fair and free trade in a global competitive environment, a nation’s intelligence services should not be engaged in stealing corporate secrets and then handing those secrets over to companies for their competitive advantage in global trade,” he explained.
“Second, making clear attribution statements helps build a framework within which the United States can work with our partners and allies on countering threats. This includes joint declarations with allies or multilateral declarations where the sources of threats and the technical nature of the infrastructure used in cyber espionage are declared.”
Finally, when public attribution is made, technical indicators of compromise, toolsets used, and other aspects are typically released as well.
“These technical releases have a very practical impact in that they ‘burn’ the infrastructure that a threat actor took time, money, and talent to develop, and require them to rebuild or retool. Certainly, the malware and other infrastructure can still be used against targets that have not calibrated their cyber defenses to block known pathways for attack. Defense is hard, and there is a complex temporal dimension to going from public indicators of compromise in attribution reports to hardened defenses; however, once the world knows, it also begins to increase the cost on the attacker to successfully hack a target,” he added.
“In general, a strategy that is focused on shaping the behavior of a threat needs to include actively dismantling infrastructure where it is known. Within the US context, this has been articulated as persistently engaging adversaries through a strategy of ‘defending forward.’”
The problem of attack attribution
The issue of how cyber attack attribution should be handled and confirmed also deserves to be addressed.
Dr. Yannakogeorgos says that, while attribution of cyber attacks is definitely not as clear-cut as seeing smoke coming out of a gun in the real world, with the robust law enforcement, public private partnerships, cyber threat intelligence firms, and information sharing via ISACs, the US has come a long way in terms of not only figuring out who conducted criminal activity in cyberspace, but arresting global networks of cyber criminals as well.
Granted, things get trickier when these actors are working for or on behalf of a nation-state.
“If these activities are part of a covert operation, then by definition the government will have done all it can for its actions to be ‘plausibly deniable.’ This is true for activities outside of cyberspace as well. Nations can point fingers at each other, and present evidence. The accused can deny and say the accusations are based on fabrications,” he explained.
“However, at least within the United States, we’ve developed a very robust analytic framework for attribution that can eliminate reasonable doubt amongst friends and allies, and can send a clear signal to planners on the opposing side. Such analytic frameworks could become norms themselves to help raise the evidentiary standard for attribution of cyber activities to specific nation states.”
A few years ago, Paul Nicholas (at the time the director of Microsoft’s Global Security Strategy) and various researchers proposed the creation of an independent, global organization that would investigate and publicly attribute major cyber attacks – though they admitted that, in some cases, decisive attribution may be impossible.
More recently, Kristen Eichensehr, a Professor of Law at the University of Virginia School of Law with expertise in cybersecurity issues and cyber law, argued that “states should establish an international law requirement that public attributions must include sufficient evidence to enable crosschecking or corroboration of the accusations” – and not just by allies.
“In the realm of nation-state use of cyber, there have been dialogues within the United Nations for nearly two decades. The most recent manifestation is the UN Group of Governmental Experts that have discussed norms of responsible state behavior and issued non-binding statements to guide nations as they develop cyber capabilities,” Dr. Yannakogeorgos pointed out.
“Additionally, private sector actors, such as the coalition declaring the need for a Geneva Convention for cyberspace, also have a voice in the articulation of norms. Academic groups such as the group of individuals involved in the research, debating, and writing of the Tallinn Manuals 1.0 and 2.0 are also examples of scholars who are articulating norms.”
And while articulating and agreeing to specific norms will no doubt be a difficult task, he says that their implementation by signatories will be even harder.
“It’s one thing to say that ‘states will not target each other’s critical infrastructure in cyberspace during peacetime’ and another to not have a public reaction to states that are alleged to have not only targeted critical infrastructure but actually caused digital damage as a result of that targeting,” he concluded.
As COVID-19 forced organizations to re-imagine how the workplace operates just to maintain basic operations, HR departments and their processes became key players in the game of keeping our economy afloat while keeping people alive.
Without a doubt, people form the core of any organization. The HR department must strike an increasingly delicate balance while fulfilling the myriad of needs of workers in this “new normal” and supporting organizational efficiency. As the tentative first steps of re-opening are being taken, many organizations remain remote, while others are transitioning back into the office environment.
Navigating the untested waters of managing HR through this shift to remote and back again is complex enough without taking cybercrime and data security into account, yet it is crucial that HR do exactly that. The data stored by HR is the easy payday cybercriminals are looking for and a nightmare keeping CISOs awake at night.
Why securing HR data is essential
If compromised, the data stored by HR can do a devastating amount of damage to both the company and the personal lives of its employees. HR data is one of the highest risk types of information stored by an organization given that it contains everything from basic contractor details and employee demographics to social security numbers and medical information.
Many state and federal laws and regulations govern the storage, transmission and use of this high-value data. The sudden shift to a more distributed workforce due to COVID-19 increased the risk: with a large portion of the HR workforce remote, there are more access points and higher access levels across cloud, VPN, and personal networks.
Steps to security
Any decent security practitioner will tell you that no security setup is foolproof, but there are steps that can be taken to significantly reduce risk in an ever-evolving environment. A multi-layer approach to security offers better protection than any single solution. Multiple layers of protection might seem redundant, but if one layer fails, the other layers fill in the gaps.
Securing HR-related data needs to be approached from both a technical and an end-user perspective. This includes controls designed to protect end users or force them into making appropriate choices, while at the same time providing education and awareness so they understand how to be good stewards of their data.
Secure the identity
The first step to securing HR data is making sure that the ways in which users access data are both secure and easy to use. Each system housing HR data should be protected by a federated login of some variety. Federated logins use a primary source of identity for managing usernames and passwords such as Active Directory.
When a user signs in with a federated login, the software uses a protocol like LDAP, SAML, or OAuth to query the primary source of identity to validate the username and password and to confirm that the user has appropriate access rights. This means users only have to learn one username and password, and the organization can ensure that the password complies with its mandated complexity policies.
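The "organizationally mandated complexity policies" mentioned above are typically a checklist the primary identity source enforces at password-change time. The sketch below shows one way to express such a policy; the specific requirements (12+ characters, mixed case, a digit, a symbol) are an assumed example, not any particular directory's actual rule set.

```python
import re

# Minimal sketch of a password-complexity policy check of the kind a
# primary identity source (e.g., a directory service) enforces. The
# policy values below are illustrative assumptions.

POLICY = [
    (r".{12,}", "at least 12 characters"),
    (r"[A-Z]", "an uppercase letter"),
    (r"[a-z]", "a lowercase letter"),
    (r"\d", "a digit"),
    (r"[^A-Za-z0-9]", "a symbol"),
]

def complexity_failures(password: str) -> list:
    """Return the list of policy requirements the password does not meet."""
    return [msg for pattern, msg in POLICY if not re.search(pattern, password)]
```

A password is accepted only when the returned list is empty; returning the failed requirements (rather than a bare yes/no) lets the login front end tell the user exactly what to fix.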
The next step to credential security is to add a second factor of authentication on every system storing HR data. This is referred to as multi-factor authentication (MFA) and is a vital preventative measure when used well. The primary rule of MFA is that, to be most effective, the second factor should be something “the user is or has”.
This second factor of authentication can be anything from a PIN generated on a mobile device to a biometric check to ensure the person entering the password is, in fact, the actual owner. Both of these systems are easy for end users to use and add very little additional friction to the authentication effort, while significantly reducing the risk of credential theft, as it’s difficult for someone to compromise users’ credentials and steal their mobile device or a copy of their fingerprints.
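The "PIN generated on a mobile device" mentioned above is usually a one-time code derived from a secret shared between the device and the server, as standardized in RFC 4226 (HOTP) and RFC 6238 (TOTP). The sketch below, using only the Python standard library, shows how both sides compute the same 6-digit code from the shared secret and the clock.

```python
import hashlib
import hmac
import struct
import time

# Sketch of one-time-code generation per RFC 4226 (HOTP) and RFC 6238
# (TOTP). The device and the server share a secret; the moving factor is
# a counter (HOTP) or the current 30-second time window (TOTP).

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password (RFC 4226) with dynamic truncation."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                  # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, at=None, step: int = 30) -> str:
    """Time-based one-time password (RFC 6238): HOTP over the time window."""
    window = int((time.time() if at is None else at) // step)
    return hotp(secret, window)
```

Because the code changes every 30 seconds and is bound to a secret stored on the device, a stolen password alone is not enough to log in, which is precisely the risk reduction described above.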
In today’s world, HR users working from somewhere other than the office is not unusual. With this freedom comes the need to secure the means by which they access data, regardless of the network they are using. The best way to accomplish this is to set up a VPN and ensure that all HR systems are only accessible either from inside of the corporate network or from IPs that are connected to the VPN.
A VPN creates an encrypted tunnel between the end user’s device and the internal network. The use of a VPN protects the user against snooping even if they are using an unsecured network like a public Wi-Fi at a coffee shop. Additionally, VPNs require authentication and, if that includes MFA, there are three layers of security to ensure that the person connecting in is a trusted user.
Next, you have to ensure that access is being used appropriately or that no anomalous use is taking place. This is done through a combination of good logging and good analytics software. Solutions that leverage AI or ML to review how access is being utilized and identify usage trends further increase security. The logging solution verifies appropriate usage while the analysis portion helps to identify any questionable activity taking place. This functions as an early warning system in case of compromised accounts and insider threats.
Comprehensive analytics solutions will notice trends in behavior and flag an account if the user changes their normal routine. If odd activity occurs (e.g., going through every HR record), the system alerts an administrator to delve deeper into why this user is viewing so many files. If it notices access occurring from IP ranges coming in through the VPN from outside of the expected geographical areas, accounts can be automatically disabled while alerts are sent out and a deeper investigation takes place. These are ways to shrink the scope of an incident and reduce the damage should an attack occur.
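The two checks described above, a record-access volume that jumps well beyond a user's baseline, and logins from outside the expected address ranges, can be sketched with the standard library. The three-sigma threshold and the VPN address pool are illustrative assumptions; a real analytics product would build far richer behavioral baselines.

```python
import ipaddress
from statistics import mean, stdev

# Illustrative sketch of two anomaly checks: (1) flag a user whose
# record-access count today far exceeds their historical baseline, and
# (2) flag source IPs outside the expected VPN ranges. The threshold
# (3 sigma) and the address pool below are assumptions for the example.

EXPECTED_VPN_RANGES = [ipaddress.ip_network("10.8.0.0/16")]  # hypothetical pool

def volume_anomaly(baseline_counts, today_count, sigmas: float = 3.0) -> bool:
    """True if today's access count exceeds mean + sigmas * stdev of baseline."""
    mu, sd = mean(baseline_counts), stdev(baseline_counts)
    return today_count > mu + sigmas * sd

def off_range_ip(address: str) -> bool:
    """True if the source address falls outside every expected VPN range."""
    ip = ipaddress.ip_address(address)
    return not any(ip in net for net in EXPECTED_VPN_RANGES)
```

Either flag on its own might be benign; in practice, such signals are combined and routed to an administrator for review, as the paragraph above describes.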
Secure the user
Security awareness training for end users is one of the most essential components of infrastructure security. The end user is a highly valuable target because they already have access to internal resources. The human element is often considered a high-risk factor because humans are easier to “hack” than passwords or automatic security controls.
Social engineering attacks succeed when people aren’t educated to spot red flags indicating an attack is being attempted. Social engineering attacks are the easiest and least costly option for an attacker because any charismatic criminal with good social skills and a mediocre acting ability can be successful. The fact that this type of cyberattack requires no specialized technical skill expands the potential number of attackers.
The most important step of a solid layered security model is the one that prevents these attacks through education and awareness. By providing end users engaging, thorough, and relevant training about types of attacks such as phishing and social engineering, organizations arm their staff with the tools they need to avoid malicious links, prevent malware or rootkit installation, and dodge credential theft.
No perfect security
No matter where the job gets done, HR needs to deliver effective services to employees while still taking steps to keep employee data safe. Even though an organization cannot control every aspect of how work is getting done, these steps will help keep sensitive HR data safe.
Control over accounts, how they are monitored, and what they are accessing are important steps. Arming the end user directly, with the awareness needed to prevent having their good intentions weaponized, requires a combination of training and controls that create a pro-active system of prevention, early warnings, and swift remediation. There is no perfect security solution for protecting HR data, but multiple, overlapping security layers can protect valuable HR assets without making it impossible for HR employees to do their work.
Endpoint protection has evolved to safeguard against complex malware and evolving zero-day threats.
To select an appropriate endpoint protection solution for your business, you need to think about a variety of factors. We’ve talked to several cybersecurity professionals to get their insight on the topic.
Theresa Lanowitz, Head of Evangelism, AT&T Cybersecurity
Corporate endpoints represent a top area of security risk for organizations, especially considering the shift to virtual operations brought on by COVID-19. As malicious actors target endpoints with new types of attacks designed to evade traditional endpoint prevention tools, organizations must seek out advanced endpoint detection and response (EDR) solutions.
Traditionally, enterprise EDR solutions carry high cost and complexity, making it difficult for organizations to implement EDR successfully. While many security teams recognize the need for EDR, most do not have the resources to manage a standalone endpoint security solution.
For this reason, when selecting an EDR solution, it’s critical to seek a unified solution for threat detection, incident response and compliance that can be incorporated into an organization’s existing security stack without added cost or complexity. Look for endpoint solutions where security teams can deploy a single platform that delivers advanced EDR combined with many other essential security capabilities in a single pane of glass, in an effort to drive efficiency of security and network operations.
Overall, organizations should select an EDR solution that enables security teams to detect and respond to threats faster while eliminating the cost and complexity of maintaining yet another point security solution. This approach can help organizations bolster their cybersecurity and network resiliency, with an eye towards securing the various endpoints used in today’s virtual workforce.
Rick McElroy, Cyber Security Strategist, VMware Carbon Black
With the continuously evolving threat landscape, there are a number of factors to consider during the selection process. Whether a security team is looking to replace antiquated malware prevention or empower a fully-automated security operations process, here are the key considerations:
- Does the platform have the flexibility for your environment? Not all endpoints are the same, therefore broad coverage of operating systems is a must.
- Does the vendor support the MITRE ATT&CK Framework for both testing and maturing the product? Organizations need to test security techniques, validate coverage and identify gaps in their environments, and implement mitigation to reduce attack surface.
- Does it provide deeper visibility into attacks than traditional antivirus? Organizations need deeper context to make a prevention, detection or response decision.
- Does the platform provide multiple security functions in one lightweight sensor? Compute is expensive; endpoint security tools should have as little impact on the system as possible.
- Is the platform usable at scale? If your endpoint protection platform isn’t centrally analyzing behaviors across millions of endpoints, it won’t be able to spot minor fluctuations in normal activity to reveal attacks.
- Does the vendor’s roadmap meet the future needs of the organization? Any tool selected should allow teams the opportunity for growth and ability to use it for multiple years, building automated processes around it.
- Does the platform have open APIs? Teams want to integrate endpoints with SIEM and SOAR platforms and network security systems.
David Ngo, VP Metallic Products and Engineering, Commvault
With millions working remotely due to COVID-19, laptop endpoints being used by employees while they work from home are particularly vulnerable to data loss.
This has made it more important than ever for businesses to select a strong endpoint protection solution that:
- Lowers the risk of lost data. The best solutions have automated backups that run multiple times during the day to ensure recent data is protected and security features such as geolocation and remote wipe for lost or stolen laptops. Backup data isolation from source data can also provide an extra layer of protection from ransomware. In addition, anomaly detection capabilities can identify abnormal file access patterns that indicate an attack.
- Enables rapid recovery. If an endpoint is compromised, the solution should accelerate data recovery by offering metadata search for quick identification of backup data. It’s also important for the solution to provide multiple granular restore options – including point in time, out of place, and cross OS restores – to meet different recovery needs.
- Limits user and IT staff administration burdens. Endpoint solutions with silent install and backup capabilities require no action from end users and do not impact their productivity. The solution should also allow users and staff to access backup data, anytime, anywhere, from a browser-enabled device, and make it possible for employees to search and restore files themselves.
James Yeager, VP of Public Sector, CrowdStrike
Decision-makers seeking the best endpoint protection (EPP) solution for their business should be warned that legacy security solutions are generally ineffective, leaving organizations highly susceptible to breaches and placing a huge burden on security teams and users.
Legacy tools, built on on-premises architectures, are unable to keep up with the capabilities of a modern EPP solution, such as collecting data in real time, storing it for long periods and analyzing it in a timely manner. Storing threat telemetry data in the cloud makes it possible to quickly search petabytes of data in an effort to glean historical context for activities running on any managed system.
Beware of retrofitted systems from vendors advertising newer “cloud-enabled” features. Simply put, these “bolt-on” models are unable to match the performance of a cloud-native solution. Buyers run the risk of their security program becoming outdated with tools that cannot scale to meet the growing needs of today’s modern, distributed workforce.
Furthermore, comprehensive visibility into the threat landscape and overall IT hygiene of your enterprise are foundational for efficient security. Implementing cloud-native endpoint detection and response (EDR) capabilities into your security stack that leverages machine learning will deliver visibility and detection for threat protection across the entire kill chain. Additionally, a “hygiene first” approach will help you identify the most critical risk areas early-on in the threat cycle.
Dustin Rigg Hillard, CTO at eSentire, is responsible for leading product development and technology innovation. His vision is rooted in simplifying and accelerating the adoption of machine learning for new use cases.
In this interview Dustin talks about modern digital threats, the challenges cybersecurity teams face, cloud-native security platforms, and more.
What types of challenges do in-house cybersecurity teams face today?
The main challenges that in-house cybersecurity teams have to deal with today are largely due to ongoing security gaps. As a result, overwhelmed security teams don’t have the visibility, scalability or expertise to adapt to an evolving digital ecosystem.
Organizations are moving toward the adoption of modern and transformative IT initiatives that are outpacing the ability of their security teams to adapt. For security teams, this means constant change, disruptions with unknown consequences, increased risk, more data to decipher, more noise, more competing priorities, and a growing, disparate, and diverse IT ecosystem to protect. The challenge for cybersecurity teams is finding effective ways to deliver and maintain security at the speed of digital transformation, ensuring that every new technology, digital process, customer and partner interaction and innovation is protected.
Cybercrime is being conducted at scale, and threat actors are constantly changing techniques. What are the most significant threats at the moment?
Threat actors, showing their usual agility, have shifted efforts to target remote workers and take advantage of current events. We are seeing attackers exploiting user behavior by misleading users into opening and executing a malicious file, going to a malicious site or handing over information, typically using lures which create urgency (e.g., by masquerading as payment and invoice notifications) or leverage current crises and events.
What are the main benefits of cloud-native security platforms?
A cloud-native platform offers important advantages over legacy approaches, and those advantages translate into real benefits for cybersecurity providers and the clients who depend on them.
- A cloud-native architecture is more easily extensible, which means more features, sooner, to enable analysts and protect clients
- A cloud-native platform offers higher performance because its microservices can fully utilize the cloud’s vast compute, storage and network resources; this performance is necessary to ingest and process the vast data streams required to keep up with real-time threats
- A cloud-native platform can effortlessly scale to handle increased workloads without degradation to performance or client experience
Security platforms usually deliver a variety of metrics, but how does an analyst know which ones are meaningful?
The most important metrics are the ones that show how the platform delivers security outcomes:
- How many threats were stopped with active response?
- How many potentially malicious connections were blocked?
- How many malware executions were halted?
- How quickly was a threat contained after initial detection?
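The metrics above can be computed from basic incident records. The sketch below is illustrative only: the record fields (`detected`, `contained`, `blocked`) are assumptions for the example, not a real platform's schema.

```python
from datetime import datetime

# Hypothetical incident records with detection/containment timestamps
incidents = [
    {"id": 1, "detected": datetime(2020, 11, 2, 10, 0),
     "contained": datetime(2020, 11, 2, 10, 12), "blocked": True},
    {"id": 2, "detected": datetime(2020, 11, 3, 14, 5),
     "contained": datetime(2020, 11, 3, 14, 35), "blocked": True},
    {"id": 3, "detected": datetime(2020, 11, 4, 8, 30),
     "contained": datetime(2020, 11, 4, 9, 0), "blocked": False},
]

# How many threats were stopped with active response?
threats_stopped = sum(1 for i in incidents if i["blocked"])

# How quickly was a threat contained after initial detection?
# Mean time to contain (MTTC), in minutes
mttc_minutes = sum(
    (i["contained"] - i["detected"]).total_seconds() for i in incidents
) / len(incidents) / 60
```

Counts of blocked connections and halted malware executions would be aggregated the same way, from whichever event types the platform records.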
Modern security platforms help simplify data analytics by delivering capabilities that amplify threat detection, response and mitigation activities; deliver risk-management insights; and help organizations stay ahead of potential threats.
Cloud-native security platforms can output a wide range of data insights including information about threat actors, indicators of compromise, attack patterns, attacker motivations and capabilities, signatures, CVEs, tactics, and vulnerabilities.
How can security teams take advantage of the myriad security tools that have been accumulating in the organization’s IT ecosystem for many years?
Cloud-native security platforms ingest data from a wide variety of sources such as security devices, applications, databases, cloud systems, SaaS platforms, IoT devices, network traffic and endpoints. Modern security platforms can correlate and analyze data from all available sources, providing a complete picture of the organization’s environment and security posture for effective decision-making.
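Correlating data across sources often comes down to joining events on a shared indicator. The sketch below is a toy version under assumed field names (`source`, `indicator`); real platforms normalize far richer event schemas before correlating.

```python
from collections import defaultdict

# Hypothetical events from different tools, normalized to a common shape
events = [
    {"source": "firewall", "indicator": "203.0.113.7", "action": "allowed"},
    {"source": "endpoint", "indicator": "203.0.113.7", "action": "net_conn"},
    {"source": "dns",      "indicator": "198.51.100.9", "action": "resolved"},
]

def correlate(evts):
    """Group events by indicator; keep indicators seen by multiple sources."""
    by_indicator = defaultdict(list)
    for e in evts:
        by_indicator[e["indicator"]].append(e["source"])
    # An indicator reported by more than one tool warrants investigation
    return {i: s for i, s in by_indicator.items() if len(set(s)) > 1}

hits = correlate(events)
```

Here the same IP surfaces in both firewall and endpoint data, giving the analyst a cross-source view no single tool could provide on its own.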