(ISC)² Professional Development Institute: Timely and continuing education opportunities

In this Help Net Security podcast, Mirtha Collin, Director of Education for (ISC)², talks about the Professional Development Institute (PDI), a valuable resource for continuing education opportunities to help keep your skills sharp and curiosity piqued.

Each course is designed with input from leading industry experts and based on proven learning techniques. And best of all, these courses are free to members and count for CPEs.

Here’s a transcript of the podcast for your convenience.

Hi, my name is Mirtha Collin and I’m the Director of Education for (ISC)². I’m happy to have the opportunity to join this Help Net Security podcast today to talk to you a little bit about the Professional Development Institute, a major initiative for continuing cybersecurity education that we’re really excited about.

Just to quickly set the table for those listening who may not be aware, (ISC)² is an international nonprofit membership association focused on inspiring a safe and secure cyber world. Best known for the Certified Information Systems Security Professional certification – or CISSP for short – (ISC)² offers a portfolio of credentials that are part of a holistic, programmatic approach to security. Our membership, over 150,000 strong, is made up of certified cyber, information, software and infrastructure security professionals who are making a difference and helping to advance the industry.

The Education Department at (ISC)² develops and delivers training materials and courses that help the cybersecurity community achieve certification and also provides learning opportunities to keep their skills sharp and maintain their certifications. We celebrated our 30th anniversary last year as advocates for the cybersecurity profession, and what I’d like to talk to you about today – PDI – has been a huge step forward for our association.

The Professional Development Institute (which we’ve shortened to “PDI” for obvious reasons) was launched by (ISC)² in February 2019 in an effort to deliver increased member value and keep our members and associates, as well as other industry participants, up to speed on the latest emerging trends in cybersecurity.

We also built a state-of-the-art video production studio at our headquarters to produce engaging, high-production-value content for courses authored by leading cybersecurity professionals.

Let’s back up a minute though. It’s important to understand the lay of the land in cybersecurity education, and why we thought making a major investment in continuing education was something worth doing.

When it comes to certification, (ISC)² exams – as well as the exams of various accrediting bodies in the industry – probe our members on a wide array of knowledge domains to prove that they have the practical skills it takes to manage security systems. The exams focus on real-world examples that only experienced professionals will be familiar with. So, it’s a great system for separating the really knowledgeable pros from those who still need more time in the trenches.

However, cybersecurity is one of the more dynamic fields in the world, and the landscape and technological changes come frequently. What may have been applicable two years ago may no longer be of critical importance, and new challenges and solutions spring up on an annual and sometimes monthly basis.

While (ISC)² routinely updates its exams to make sure the most relevant topics are being covered, certification updates take time to build and process, and don’t happen each and every year. And then there are the “soft skills” aspects of the job that aren’t conducive to testing but are useful to develop, such as how to present to your executive leadership or how to build a high-performing team.

This can create certain gaps in curriculum when rapidly emerging trends develop in a short window of time. And those who became certified several years ago need to keep their skills sharp too, even if they don’t have an exam coming up anytime soon.

This is where PDI comes in and why we think it’s such a revolutionary step in education. This program has resulted in the development of a robust catalogue of continuing professional education courses and the ability to continuously refresh that catalogue based upon clearly articulated member need. So, in other words, as new topics and trends bubble to the surface, we have the ability to quickly design courses to address them and give cybersecurity professionals the ammo they need to be able to understand the basic concepts, at a minimum.

Subject matter experts guide the development of the course material and are supported by a team of highly qualified adult education experts and creative professionals.

We also recognized that cybersecurity professionals have very busy jobs, and don’t normally have a lot of free time to attend classes, which is why we knew that we had to build an on-demand library of courses that they could access whenever they want, at the push of a button from wherever they are in the world.

Given the nature of the different trends in cybersecurity, this is not a one-size-fits-all approach to education either. Some topics understandably require more of a time investment than others to fully grasp. This is why the PDI portfolio includes three course formats: Immersive courses provide in-depth treatment of a single topic; Labs are hands-on courses designed to let students practice specific technical skills; and Express Learning courses are typically 1-2 hours in length – some are even doable during a lunch break – and are designed to quickly address emerging industry topics or trends or introduce the learner to a topic.

I think what we’re most proud of so far, in addition to the quality, is the broad range of topic areas we’ve addressed through PDI, which include working with the Internet of Things, industrial control systems, containers, privacy regulations, cyber insurance, mobile security, AI, and the NIST Cybersecurity Framework, as well as building skills such as penetration testing, malware analysis, interpersonal skills, cloud basics, communication with the C-suite, responding to a breach, and many more.

We tailor these courses for those learners with a basic to intermediate knowledge of security concepts, so they can be informative and challenging to almost any learner. And the topics are also designed to be universal so that they apply to what anyone around the world is facing.

In addition to helping learners stay updated on the latest trends, PDI also offers an opportunity for members and associates of (ISC)² to obtain continuing professional education – or CPE – credits to keep their certifications in good standing at no additional cost. More than 100 CPEs can be earned by completing all the courses in the PDI portfolio.

Because of this, all courses include a final assessment. Other learning activities vary by course type and may include instructional videos, video interviews, interactive presentations, knowledge checks, independent readings, webinar excerpts and real-world scenarios.

This was a major undertaking that the entire association got behind, and the program now contains 35 courses, with a total educational value of more than $10,000 per person.

It’s been so popular that more than 20,000 unique members had enrolled in a PDI course by the end of December last year, which means we delivered more than $7.9 million in equivalent course value within the first 10 months of the program being available.

I should mention that as far as we know, this is the only program of its kind in the industry, where members can get all of this value at no additional cost. Additionally, we are making our courses available to non-members at deeply discounted prices as a way to encourage continuous learning during the COVID-19 crisis. For more information about this, please go to isc2.org/development.

The feedback we’ve received from our members so far has been outstanding and they’re really engaging with the materials and recommending courses to their friends and colleagues, as well as submitting ideas for future courses to [email protected].

Thanks for listening today and thanks to RSA Conference and Help Net Security for giving us an opportunity to spread the message about PDI. You’re all welcome to come check out the content.

If you’d like to explore the PDI portfolio, you can either access My Courses if you’re a member or associate of (ISC)² or simply visit isc2.org/development if you’re not yet a member.

How to formulate a suitable identity proofing strategy

In this podcast, Matt Johnson, Product Marketing Manager at TransUnion, talks about identity proofing and navigating identity during changing economic dynamics. By the end of this session, you’ll have an understanding of how to formulate an appropriate identity proofing strategy to meet the needs of your customers and online channels.

Here’s a transcript of the podcast for your convenience.

Hi, I’m Matt Johnson, Product Marketing Manager for Fraud and Identity at TransUnion. In this Help Net Security podcast, I’ll be speaking about identity proofing and navigating identity during changing economic dynamics. By the end of this session, you’ll have an understanding of how to formulate an appropriate identity proofing strategy to meet the needs of your customers and online channels.

What is identity proofing?

At its core, the concept of identity proofing seems simple. Identity proofing is the means of verifying and authenticating the identities of legitimate consumers while preventing fraudsters from creating account credentials, transacting or gaining access to unauthorized accounts. But this is where the simplicity ends and where the balancing act begins: delivering the modern, friction-right experience that consumers demand while managing the risk of fraud.

Evolution of attack vectors

As data breaches have grown in size and scope over recent years, the vectors of attack have evolved as fraudsters obtained large amounts of consumer information. Here are a few trends worth noting. Through Q3 of 2019 there were 7.9 billion exposed consumer PII records, an increase of 33% over the same period in 2018.

Synthetic identity fraud continues to be a threat and is still on the rise, and fraudsters are turning to more sophisticated techniques including device simulators, botnets and anonymization to attack vulnerable organizations. Now, as these threats have increased in size and scope, consumers have also simultaneously been increasing their transaction volumes within digital channels.

Pandemic impact

The COVID-19 pandemic placed unprecedented demands within online channels, and has caused immediate changes in shopping patterns, along with an unprecedented spike in digital transactions. In fact, according to TransUnion data, there was a 23% increase in e-commerce transactions in just the first week after the declaration of the COVID-19 pandemic on March 11th, 2020, and at the time of this recording, a 14% increase in risky financial services transactions.

The shift to digital transactions has been steady in recent years, but some industries, such as insurance and even still many financial institutions, have not kept pace with the desire of their customers to move online. The current world issues are serving as a catalyst, forcing this migration: the shift to digital transactions has been brought forward and is accelerating, with all indications of this being a permanent change.

The changing face of fraud

Many organizations have found that their infrastructure has not been able to scale to meet the increased demand from their customers, most visibly resulting in online outages for financial institutions. With increased transactions and faceless channels, the opportunity for fraud has also increased.

Fraudsters thrive on the uncertainties of today and the foreseeable future and are leveraging world events for fresh attacks on the organizations that are least prepared. Nearly every company has taken measures to assure some level of certainty about who they are doing business with, driven in part by regulations and rules such as know your customer (KYC), anti-money laundering (AML) and OFAC guidelines, depending on your industry.

But many organizations have been hesitant to go beyond the minimum compliance requirements. Acquiring a customer is challenging enough, without inserting the additional friction into the fraud prevention process. Some organizations have even been willing to accept some level of fraud, simply writing it off as the cost of doing business.

Combating today’s evolving threats

Fraudsters talk with one another and know who to attack and how best to carry it out. Many companies are not prepared to confront the issues created by the new realities of today. In a normal environment, as new threats have emerged, organizations have typically deployed a myriad of siloed solutions that focus primarily on identity verification to establish identity, and knowledge-based authentication to confirm that the individual is who they claim to be.

While this structure has met the requirements to comply with regulations, it has been less than ideal, as fraudsters have increasingly been able to defeat it with breached PII data, forcing organizations to make that tradeoff between customer experience and locking things down with tighter fraud prevention measures.

Perhaps you’re nodding your head in agreement with this trade off, but it doesn’t have to be this way. Combating today’s evolving threats while being ready for the unexpected is no longer a choice, it’s a necessity.

Protecting your business

Consumers are demanding seamless and safe digital experiences and will seek out organizations that will allow them to do business on their terms and in their preferred channels. You can’t afford to lose business due to outdated or cumbersome fraud controls and not being able to meet your customers where they want to do business. So, this begs the question, how can you protect your business and customers while delivering a friction-right experience, even when the bad actors have perfect information and sophisticated techniques?

Fortunately, it’s possible to deliver that great experience for your customers, without having to make that compromise or tradeoffs. Everything hinges on your identity proofing strategy and more specifically how it is structured along with how you deploy it.

Let’s discuss some best practices to ensure your organization is ready. First, given the rapid shift to online channels, you should consider accelerating any planned investment for your digital channels, as the current world events will bring forward a major shift in how consumers transact, requiring new resilient and innovative business models and workflows. If you don’t have the capacity to serve your customers and they experienced service interruptions, trust can erode from your organization.

As it pertains to identity proofing, as I mentioned, it’s complex. It’s not a good idea to go it alone and attempt to build your own solutions against fraudsters that are experts in finding ways to defeat your countermeasures. In fact, their livelihoods often depend on it. It’s better to partner with a vendor that is dedicated to the cause so that you can focus on your core business.

Identity proofing strategy foundation

The foundation to any good identity proofing strategy is data. You should seek out a partner that has the depth and breadth of diverse public records, consumer credit, personal and digital identity data sources. With this foundation of data, it’s also necessary to possess the expertise, to apply technology to make linkages within the data to enable actionable insights. You can have all the data in the world, but if you can’t take any insights from it, it’s not very useful.

Which solutions do you need?

So, with a strong data foundation in place, let's talk about which solutions you need to have. A successful strategy will require multiple solutions, but you don't want to just add more point solutions alongside your current mix to solve whatever challenges you may be having. This can result in suboptimal customer experiences, high false positives in your fraud catch, and increased transaction latency.

The best practice is to seek out a holistic solution that can orchestrate all of the necessary solutions for your business together. This approach offers a contextualized and multilayered understanding of a consumer, enabling trust and delivering a friction-right consumer identity proofing and authentication experience that optimizes convenience and security.

Traditional identity verification and knowledge-based authentication solutions are still an important part of the mix, but they are no longer the strategy itself. A modern approach should also include digital attribute risk assessment of the device being used, document-centric identity authentication, real-time fraud alerts, behavior analysis, reputation and link analysis, along with dynamic multifactor authentication strategies that are aligned to transaction risk.

You should be able to truly know your customer and quickly approve trusted customers while applying appropriate friction to higher-risk transactions. This results in treating each customer individually rather than treating everyone in the same manner.
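To make that idea concrete, here is a minimal TypeScript sketch of what risk-aligned friction might look like, assuming a hypothetical set of signals (device recognition, credential hygiene, behavioral flags); the weights and thresholds are purely illustrative and are not TransUnion's actual scoring model:

```typescript
// Hypothetical signal set; real deployments would draw these from
// device intelligence, behavioral analytics, and identity data sources.
interface TransactionSignals {
  deviceRecognized: boolean;    // device previously seen for this account
  credentialsBreached: boolean; // credentials found in known breach corpora
  anomalousBehavior: boolean;   // behavioral analytics flagged this session
  highValueAction: boolean;     // e.g., changing account info or payees
}

type Friction = "approve" | "step-up-mfa" | "manual-review";

// Combine signals into a simple additive risk score, then map the
// score to a friction level. The weights here are illustrative only.
function decideFriction(s: TransactionSignals): Friction {
  let risk = 0;
  if (!s.deviceRecognized) risk += 2;
  if (s.credentialsBreached) risk += 3;
  if (s.anomalousBehavior) risk += 3;
  if (s.highValueAction) risk += 1;

  if (risk <= 1) return "approve";     // trusted: clear immediately
  if (risk <= 4) return "step-up-mfa"; // medium: dynamic MFA challenge
  return "manual-review";              // high: hold for investigation
}

// A returning customer on a known device sails through...
console.log(decideFriction({ deviceRecognized: true, credentialsBreached: false, anomalousBehavior: false, highValueAction: false })); // "approve"
// ...while a new device plus an account-info change triggers step-up.
console.log(decideFriction({ deviceRecognized: false, credentialsBreached: false, anomalousBehavior: false, highValueAction: true })); // "step-up-mfa"
```

The point of the sketch is the shape of the decision: trusted customers clear instantly, and friction escalates only as risk accumulates.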

Orchestration

With these solutions in place, let's discuss the linchpin that enables everything to work in tandem and provides the strongest decisioning available: orchestration. Orchestration ties everything together, ensuring low latency, a precise and accurate picture of a consumer and the risk, and the ability to apply the right level of friction tailored to each individual transaction. The best identity proofing strategy leverages tools that are orchestrated together as a full stack, not simply a bring-your-own-disparate-solutions approach under a unified orchestration layer, which may only simplify your technical integration.

While account opening is the logical place for identity proofing measures, your business should also shift to proactive, continuous risk assessment to monitor account behavior, detect anomalous consumer behaviors, and flag suspicious transactions in real time. This will protect against account takeover, new account fraud and synthetic identity fraud, and identify potentially fraudulent transactions or unauthorized account use.

Partner smart to ensure compliance

Finally, it’s essential to work with a partner that has support on the ground in the regions where your business operates, to ensure ongoing regulatory compliance anywhere in the world. With growing privacy awareness by consumers and ever-changing regulatory and compliance regulations, such as GDPR, PSD2, KYC, AML and OFAC, along with others, you can minimize your exposure to running afoul of them by leaving it to a partner that can manage it for you.

Orchestrated identity proofing strategy

The last thing you want is to end up in the headlines and endure reputational damage for violating any of these regulations. So, if you partner with an organization that's able to deliver an orchestrated identity proofing strategy, here's what you'll have to look forward to.

First, you’ll be able to truly identify the identity of new customers creating accounts. You’ll also be able to identify and immediately clear good returning customers. You’ll be able to identify returning customers that may have higher risks, such as coming from new devices or wanting to change account information, and applying appropriate friction via authentication strategies.

You’ll increase your identity proofing match rates through linkages and analytics. You’ll stay ahead of evolving threats with security and solutions that are designed to keep you one step ahead. And most importantly, you’ll increase conversions and revenue from delivering the experiences that consumers expect. You invest heavily in your customer acquisition and retention strategies. Ensure you aren’t driving attrition through antiquated identity proofing strategies.

TransUnion’s uniquely positioned to assist organizations like yours attain these objectives through a modern identity proofing approach. For more information and helpful insights, including webinars and case studies, please visit us at transunion.com.

How can you strengthen an enterprise third-party risk management program?

We sat down with Sean Cronin, CEO of ProcessUnity, to explore the challenges related to enterprise third-party risk today and in the future.

What are the most unexpected pitfalls for a CISO that wants to strengthen an enterprise third-party risk management program?

Ultimately, you need to understand where your program is today and build a plan to mature it. There are a lot of moving parts in a third-party risk management program. Most companies today are struggling with the work associated with the early phases of a program – the vendor onboarding process, the pre-contract due diligence and then the ongoing monitoring that must occur after a contract is signed. It’s critical to nail these processes first or you’re setting yourself up for failure.

Figure out where you are on the maturity curve first. Do you have an informal program that's just getting started? Is your team fighting fires in a reactive mode, or have you advanced your processes to a point where you're more proactive about reducing risk? If you're already mature and you're running an optimized program, it's all about continuous improvement. If you understand the weaknesses and opportunities at your current maturity level, it makes it easier to put a reasonable plan in place – one that prevents you from trying to take too big of a leap all at once.

Another pitfall is the wildcard that upsets the proverbial applecart. This year, it's COVID-19. Organizations are putting their programs on hold because they're scrambling to reassess their vendors to ensure business continuity during the pandemic. More mature programs build a rapid-response mechanism into their processes, but less mature companies have to drop everything and react as best they can.

How can an organization transform third-party risk into a competitive advantage?

Before third-party risk management can become a competitive advantage, businesses need to perfect the block-and-tackle basics of third-party risk management. This means having a comprehensive onboarding, due diligence and ongoing monitoring process. Making those processes effective and efficient frees up more time for risk teams to focus on the third-party risk management activities that can drive ROI for the company, including contract management, service-level agreements (SLAs) and performance management.

If your team has more time, they can spend it helping to negotiate better contracts with better financial terms or better services terms – maybe both. Your team will also have access to insights gained during due diligence and ongoing assessments. That data can be used to your advantage during initial negotiations or renewals.

There’s also an opportunity around SLAs. Build a library of SLAs, track where they are being used – on a contract-by-contract or vendor-by-vendor basis and then get your lines-of-business to submit metrics or evidence that results are within acceptable thresholds. Now you have an SLA-enforcement engine. No one wants to collect penalties for a broken promise, but the option is there. You also have the ability to forgo the penalty in exchange for something else – visibility into a product roadmap, input into a new feature, etc. SLAs are an important part of the vendor management process, but many organizations don’t have the time to use them to their advantage.

Finally, managing vendor performance is also a way to get a competitive advantage. If you work with the best vendors, you will get the best service and value. If you can swap out under-performing vendors with better ones over time, your company is going to be in a better place.

Third-party compromise continues to be one of the major drivers of data breaches worldwide. How can organizations make sure that the companies they work with are taking care of their security properly?

Lou Gerstner said it best, “You don’t get what you expect, you get what you inspect.” Hoping that your vendors, suppliers and third parties are just as buttoned-up as your company isn’t enough. The whole point of having a third-party risk program is to systematically assess new and current vendors over time. You need a mixture of self-assessments that the vendors complete and then you need to spot-check your higher-risk vendors with on-site controls assessments – live visits where you ask your vendors to prove they have the proper safeguards in place. It’s work that has to be done – you can’t take their word for it.

Unfortunately, even the best-run third-party risk programs may not be breach-proof – the idea is to prevent as much as possible and make it as hard as possible for a breach to occur.

If you have a strong program in place, you'll be in a better position to quickly understand what was compromised should a breach occur. For example, in the first hour after a compromise is recognized, it would be great to know exactly what information that vendor held – patient data, patient records, customer data, customer PII, customer credit cards, etc. A third-party risk management system can help identify that quickly and easily.

Also, before a breach even happens, increased due diligence and the regular cadence at which organizations continue to evaluate a third party will keep driving risk out of that relationship. Ongoing monitoring of a vendor helps organizations better understand what their vendors are and aren't doing – policies, evidence of specific actions, etc. This develops a dialogue with the vendor about why specific actions need to be taken to help drive risk out of both organizations. And that's how organizations will be able to build more secure relationships, more secure vendors and more secure providers.

How do you expect risk management strategies to evolve in the next decade? What’s new on the horizon and how can security leaders lay down the groundwork for increased compliance and security?

I was thinking about this a lot while at this year's RSA Conference. RSAC was very much about the firewalls and the four walls of any corporation; however, where security and risk will evolve is toward an increased importance on third parties. The second an organization puts any data into a third party, that risk is extended, creating vulnerabilities that can be exponentially worse than what's within the firewalls or your own four walls.

In third-party risk specifically, we will see more teams incorporate external content into their third-party risk management programs to get a more holistic view of their vendor population. We will see a rise of utilities and consortiums – where a vendor is assessed once, and multiple organizations can access that assessment. This will allow for a quicker and more streamlined vendor onboarding process. Vendor assessment questionnaires will also continue to evolve. Today, we have questionnaires that can self-scope based on inherent risk levels and self-score based on a set of preferred responses. This is the start of machine learning, and eventually AI, for third-party risk.
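A toy TypeScript sketch of self-scoping and self-scoring questionnaires might look like the following; the question bank, risk tiers and preferred answers are invented for illustration and are not any vendor's actual questionnaire logic:

```typescript
// Sketch of a questionnaire that self-scopes on inherent risk and
// self-scores against preferred responses. The structure is hypothetical.
interface Question {
  text: string;
  minRiskTier: number; // only ask at or above this inherent-risk tier
  preferredAnswer: "yes" | "no";
}

const questionBank: Question[] = [
  { text: "Do you encrypt data at rest?", minRiskTier: 1, preferredAnswer: "yes" },
  { text: "Do you perform annual penetration tests?", minRiskTier: 2, preferredAnswer: "yes" },
  { text: "Do subcontractors access our data?", minRiskTier: 3, preferredAnswer: "no" },
];

// Self-scoping: a low-risk vendor sees a shorter questionnaire.
function scopeQuestionnaire(inherentRiskTier: number): Question[] {
  return questionBank.filter((q) => inherentRiskTier >= q.minRiskTier);
}

// Self-scoring: count deviations from the preferred responses so
// analysts review only the answers that actually need attention.
// Answers are assumed to be in the same order as the scoped questions.
function scoreResponses(questions: Question[], answers: ("yes" | "no")[]): number {
  return questions.reduce((flags, q, i) => flags + (answers[i] === q.preferredAnswer ? 0 : 1), 0);
}

const scoped = scopeQuestionnaire(2);               // two questions for a tier-2 vendor
console.log(scoreResponses(scoped, ["yes", "no"])); // 1 flagged answer to review
```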

That’s the next horizon. And it’s exciting because security leaders are seeing the increased importance of that third-party supply chain and vendor ecosystem as part and parcel to their reputational risk and their overall organizational risk.

vFeed: Leveraging actionable vulnerability intelligence as a service

vFeed is a truly exciting company and we had to include them in our list of the 10 hot industry newcomers to watch at RSA Conference 2020. In this podcast, Rachid Harrando, Advisory Board Member at vFeed, talks about how their correlation algorithm analyzes a plethora of scattered advisories and third-party sources, then standardizes the content with respect to security industry open standards.

Here’s a transcript of the podcast for your convenience.

Hello, my name is Rachid Harrando. I’m in the Office of the CISO at ServiceNow and partner and advisor for vFeed.io that I will introduce today.

What is vFeed? Our tagline is "vulnerability intelligence as a service." Of course, we have to explain what that is, right? What we found is that there are more and more systems with more and more vulnerabilities, and it's difficult for any security team to maintain a good repository of all the different indicators and information related to those vulnerabilities.

The founders of vFeed spent many years doing that tracking as part of their security jobs. That's where the idea came from: maintain an accurate and complete database that you can quickly refer to during a security investigation to find security issues, remediate and prioritize. After many years, this database became automated, and it is now provided to customers such as large SOC teams, which have a lot going on but need data to pinpoint rapid remediation and prioritization, and to know what to look for.

vFeed helps large SOC teams do exactly that, because those teams need to focus on their infrastructure. They don't want to spend their time looking for all the sources that would help them fix an issue. They can rely on vFeed to maintain the most comprehensive and accurate database, helping large or even small SOC teams focus on the issues they have at hand, which is already a big task.

They don’t need to go and maintain these databases. We do it for them, we are part of their team, they can trust us. And we only do that, we only maintain the database. We are a pure player in that space, we don’t want to do anything else. We were doing other things in the past, but to be the best at what we do, we need to stay focused. So, a small team at vFeed is doing that and only that.

As I said before, those who can use it are SOC teams or security teams that are already doing the job of looking for threats and incidents. Once they've found an incident, they need information to help them remediate as soon as possible and to make sure they are working on the most important issues. That's what vFeed helps them do, by providing the best data that exists.

When you don’t have a SOC team and you don’t have solutions, it’s going to be difficult to ingest vFeed data. You need to have that, since we provide only this database, which is, we are hiding the complexity of going and fetching these data sources and putting them in aggregate form, with all the correlation that you need to do to make it a nice format for you to consume.

You can find more information on our website, vfeed.io. You will find different use cases and the names of our customers – some of them have agreed to be named – and you can see what type of data we provide. We also offer a free trial, so people can of course try before they buy.

Every day there are new vulnerabilities, and every day we have a new update, new information, and that’s what we provide.

Why we need to secure IoT connections sooner rather than later

IoT products offer many conveniences, but massive amounts of data are being transferred to and from these devices and services – data that is vulnerable to attack if left unsecured. In this podcast, Mike Nelson, Vice President of IoT Security at DigiCert, talks about the growing insecurity of IoT devices and what we should do about it.

Here’s a transcript of the podcast for your convenience.

Hey everyone, it’s Mike Nelson. I’m the vice president of IoT security at DigiCert. DigiCert is the world’s leading provider of PKI products and services. And we’re here today to talk about an interesting topic and a topic of growing importance.

As most of us are aware, connectivity is growing all around us. Businesses are becoming more connected. In recent days, more and more employees are working from home and need secure connectivity. The number of IoT devices continues to grow massively around the globe. Those devices create a lot of connectivity around us, but that connectivity also creates a lot of security risk and exposure that many consumers going about their normal lives are not aware of.

What I’d like to talk about today is the growing need to secure that connection, whether that be within a business, whether that be with a connected device in a consumer’s home, or whether that be a consumer browsing the internet. All of that connectivity needs to be done in a way that is secure, and the importance of that is so critical.

My expertise of course is in the IoT space. I’ll be focusing the majority of the discussion today around IoT. But we’ll also talk a little bit about businesses and the importance of securing the internet.

In the IoT space, it's projected that billions and billions of devices – up to 43 billion – will be in the market in the coming years. Those devices are collecting sensitive data, providing critical business functions, performing healthcare monitoring, and even carrying out healthcare procedures. These devices are critical to the functioning of our society and they will grow in importance. Yet a lot of those devices are, as I mentioned, insecure, and the risk of those devices being attacked is very real, with some scary consequences.

If we think just about the volume of data these devices are collecting, it's estimated that nearly 80 zettabytes of data will be generated in IoT over the coming years. That's equivalent to about 90% of the data that has been generated globally to this point. So, massive amounts of data are being generated. A lot of that data should be handled in a confidential way. Some of it is proprietary information for businesses, it's patient health information, it's the secret sauce that businesses want to keep confidential. And so, as that data grows, it's incredibly important that we keep it secure.

As we look at IoT exploits that have happened up to this point, there are some common vulnerabilities that we see over and over in these attacks. And those common vulnerabilities are really a good starting point for implementing security practices. The first common vulnerability that we see with IoT is lack of proper authentication.

We read a lot about bad password practices, hard-coded credentials, and hackers being able to gain access because they can discover the password in the user manuals of IoT devices – the instruction manuals. Bad authentication is one of the greatest risks right now with IoT, and there's a lot going on in that regard to improve it.

In addition to bad password practices, there are the backend connections: making sure that anything the devices connect to is properly authenticated. If your device connects to a server or a piece of middleware, you want to make sure that connection is authentic so it doesn't trust connections from parties you don't want gaining access to your device. And so authentication of both the user and the backend connections needs to be of utmost importance.

The second common vulnerability that we see is around protection of the data. Palo Alto Networks recently released a report that said 98% of IoT data traffic is unencrypted. That’s a terrifying statistic, especially considering the volume of data that I mentioned earlier and the sensitivity of that data. And we see data compromise very frequently when it comes to IoT attacks. That’s another very common vulnerability – the data not being handled in a confidential way.

The third and final one is around integrity. How do you know that the packages being sent to the device are coming from a trusted source, and that a man-in-the-middle attack has not occurred, modifying the values or embedding malware in the package before sending it on to your device? Integrity is so critical, especially when businesses are making decisions based on that data, or when doctors are using the data to make treatment decisions for patients. And so making sure that the values of that data have integrity associated with them is very critical.

So what do you do? I'm asked all the time: "Where do we start? What do we need to do as we venture down the path of IoT security?" I think public key infrastructure and the use of digital certificates is really the right starting point for good IoT security.

Public key infrastructure facilitates security solutions for those three vulnerabilities. Through the use of digital certificates, you can authenticate connections: you can place one certificate on an endpoint device and another on the server it connects to, and when that connection occurs, the session is authenticated through those certificates. The second thing PKI can do is encrypt the data that's being passed from point A to point B. Public key infrastructure and those digital certificates are what can help facilitate that for manufacturers.
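To make those first two points concrete, here is a minimal Node.js sketch in TypeScript of a device mutually authenticating with a backend over TLS, which also encrypts everything in transit; the hostname and file paths are hypothetical, and a real deployment would provision these certificates through a managed PKI rather than loose PEM files:

```typescript
import { readFileSync } from "node:fs";
import tls from "node:tls";

// Hypothetical certificate files; in practice each would be issued by
// the manufacturer's PKI for the device and the backend respectively.
const socket = tls.connect(
  {
    host: "backend.example.com",
    port: 8883,
    key: readFileSync("device-key.pem"),   // device's private key
    cert: readFileSync("device-cert.pem"), // device certificate (client auth)
    ca: readFileSync("backend-ca.pem"),    // CA that signed the server cert
    rejectUnauthorized: true,              // refuse unauthenticated servers
  },
  () => {
    // Both sides presented certificates, so the session is mutually
    // authenticated and the channel is encrypted end to end.
    console.log("server authorized:", socket.authorized);
    socket.write(JSON.stringify({ deviceId: "sensor-01", temp: 21.4 }));
    socket.end();
  }
);

socket.on("error", (err) => console.error("TLS handshake failed:", err.message));
```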

The third one is integrity, again through public key infrastructure, using digital signatures and certificates. Code signing and digital signature checks are very important to ensure integrity, and public key infrastructure can facilitate that as well.
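And here is a correspondingly small, self-contained sketch of that integrity check, assuming an RSA code-signing key; in practice the private key stays with the publisher and the device ships with only the public key or a certificate chain:

```typescript
import { createSign, createVerify, generateKeyPairSync } from "node:crypto";

// Self-contained demo: generate a signing key pair, sign a firmware
// payload, then verify it device-side before installing.
const { publicKey, privateKey } = generateKeyPairSync("rsa", { modulusLength: 2048 });

const firmware = Buffer.from("...firmware image bytes...");

// Publisher side: sign the package before distribution.
const signer = createSign("sha256");
signer.update(firmware);
const signature = signer.sign(privateKey);

// Device side: verify integrity and origin. A failed check means the
// package was modified in transit or didn't come from the publisher.
const verifier = createVerify("sha256");
verifier.update(firmware);
console.log("signature valid:", verifier.verify(publicKey, signature));
```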

We’re asked all the time where do we start? What do we do? I really think that public key infrastructure is a great place. In addition to that, security by design is critical. Penetration testing is a very important element of secure IoT development. And all of those things, there’s really no silver bullet, but I think that as you’re starting down the path, those are things that are good starting points.

The current state of IoT security really is, I think, scary. I don't think enough is being done. I'm also asked frequently who's responsible for IoT security. I'd say there are three pillars of responsibility. The first is regulatory responsibility: we see governments – the US, the UK, Japan – moving to put in place regulations that will require manufacturers to act responsibly in the development of their products.

The second pillar I would say would be industries. We see a lot of industries coming together to create security standards, and then holding the manufacturers in that industry accountable to those standards to make sure that they’re operating at a higher level of security.

We’ve seen that DigiCert participated in the handful of industry groups like CableLabs who represents all cable manufacturers, set-top boxes, and they created a security standard for all of their manufacturers to follow. OCF is an industry group, the Open Connectivity Foundation, that is responsible for consumer electronics and they’re building standards for that ecosystem. Industries also have a responsibility to come and try to improve the overall state of security for their industry.

And then the final pillar would be manufacturers, and manufacturers doing the right thing in the product development, in the deployment, in the lifecycle management of their devices is very, very important.

As we look at public key infrastructure and the challenges, public key infrastructure is the technical solution that a lot of people know a little bit about, but they don’t know a lot about. As we have seen manufacturers approaching public key infrastructure, we’ve seen them fall into a handful of common pitfalls and challenges. I’ve had hundreds and hundreds of conversations over the last few years with manufacturers as they have looked to implement public key infrastructure, and we’ve really heard some common challenges that they run into as they’re looking to stand up a public key infrastructure.

The first one is the flexibility of their platform and being able to solve the variety of challenges involved. I say, laughingly, that every IoT deployment is just another unique IoT deployment. Every device is different, the communication protocol is different, the computation power is different, and every instance needs to be looked at through a unique lens. So having a platform that's flexible, one that allows them to solve all of their challenges instead of just particular ones, is important.

Deployment ease is another. We see a lot of them run into challenges with deployment configuration. We see challenges with complying with in-country requirements. Having third-party integrations with the applications they want to use as a business is another common challenge. And finally, degradation of performance is another challenge that we see manufacturers run into.

I encourage people all the time, when they're looking for a solution, to look for one that has that flexibility and scalability, and that can help them meet the in-country requirements they're struggling with. I think that if manufacturers get off to a good start in those areas, it sets them on a path for success.

DigiCert IoT Device Manager

At DigiCert we have built a custom platform that's responsive in those areas. We just released a platform called DigiCert ONE, and it really is the most modern architecture for PKI, addressing those really complex challenges of flexibility, scalability and reliability. Flexibility applies not just to certificate profiles, but also to the way you deploy: do you need an on-premises instance of PKI, do you need it based in the cloud, or do you need a hybrid solution? You need to be flexible in the way that you deliver and stand up your public key infrastructure.

I think that touches on a lot of the points that I wanted to cover today. I hope that this discussion has been helpful and insightful.

Public key infrastructure is a technology that works. It’s proven, it’s standardized and it’s been around for a long time. It still needs to be innovative, and it’s important to make sure that the solutions that you’re putting in place are modern, will meet the requirements for your team today, but also will meet the requirements for your team when you have many, many more connections that you’re trying to secure.

Thank you all for your time today. I hope this has been helpful.

Increase web application security without causing any user disruption

In this podcast recorded at RSA Conference 2020, Jason A. Hollander, CEO, and Paul B. Storm, President at Cymatic, talk about how their platform builds a defensible barrier around the user, so web-based threats can be stopped at the source.

Here’s a transcript of the podcast for your convenience.

Welcome to the Help Net Security podcast. In this edition, I'm joined by the guys from Cymatic. Can you please introduce yourselves?

Paul: My name is Paul Storm. Good morning or good afternoon, wherever you are.

Jason: And my name is Jason Hollander. We're both co-founders of Cymatic.

Can you tell me what is Cymatic’s approach to web security and what differentiates you in the marketplace?

Paul: Sure, I guess I could take it. We built a web application defense platform that's able to identify and calculate risk, and to really understand users from inside of the web application. Think of it as almost like a client-side WAF.

Jason: When you think about web application defense, it includes a lot of different silos of technology. One of the things that we wanted to do is build a technology that brings those silos together so they can leverage each other to make smarter decisions. And we're able to bring that as close to the user as possible.

If you think about web application threats and breaches, you want to catch them left of boom. A lot of the technology today is either at boom or right of boom because it sits in the network. We push it straight out to the user. It's invisible. We surround the browser or the mobile device, and therefore we're able to eliminate a lot of the threats and see the threats that a lot of technology cannot, because it's siloed.

In-browser automation detection

Your website mentions next-generation pre-endpoint protection. Can you tell our listeners exactly how it works?

Paul: I’ll preface this by saying we are a young startup and messaging is something we’re still working on. For the world of startups out there, it’s an iterative process.

How does it work? There is a line of JavaScript that gets embedded into an implementer's header tag. We don't mind if it's an internal site or an external site, or whether it's internal users, contractors or consumers landing on that site. Once an entity lands on the site, they don't need to authenticate, they don't need to be registered users, and they don't need to be logging in from a controlled device.

We open up a socket from that browser session back to our Cymatic cloud, and everything is streamed over a real-time socket. One of our microservices picks up on the feed and grabs only the elements it needs to provide visibility, provide control or identify risk.

Jason: One of the things we did, as Paul mentioned, is this real-time socket connection back to our cloud, which does all the AI and ML parts of our product. A lot of technology is stateless: once it does its job, it falls off. We're stateful. From the moment someone lands on a web application to the moment they end their session, we've got complete visibility and complete control.
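For readers curious what such a client-side hook might look like, here is a generic TypeScript sketch of the pattern being described – a snippet loaded from the page header that opens a WebSocket and streams session telemetry for the lifetime of the visit; the endpoint and event shapes are invented for illustration, not Cymatic's actual protocol:

```typescript
// Open a real-time socket back to a (hypothetical) telemetry cloud the
// moment the page loads; no login or registration is required.
const socket = new WebSocket("wss://telemetry.example.com/session");

socket.addEventListener("open", () => {
  // Session coverage starts as soon as the visitor lands on the page.
  socket.send(JSON.stringify({ type: "session-start", url: location.href, ts: Date.now() }));
});

// Stream lightweight interaction events; backend services pick out
// only the elements they need for visibility and risk scoring.
document.addEventListener("click", (e) => {
  if (socket.readyState === WebSocket.OPEN) {
    const target = e.target as HTMLElement;
    socket.send(JSON.stringify({ type: "click", tag: target.tagName, ts: Date.now() }));
  }
});

// Stateful coverage: mark the end of the session when the user leaves.
window.addEventListener("pagehide", () => {
  if (socket.readyState === WebSocket.OPEN) {
    socket.send(JSON.stringify({ type: "session-end", ts: Date.now() }));
    socket.close();
  }
});
```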

Identity assurance

If we look at current cybersecurity trends, we see that account takeover attacks have been rising steadily over the last year. How does your company help organizations with this type of attack?

Paul: When we first created this, ATO was the first thing we were trying to attack – probably the wrong choice of words – to stop. We are basically looking at users based upon user behaviors. When you do that and you're looking at more than a single vector, you're able to identify not just ATO risk, which is a definite problem, but all the other issues that come with an externally facing property. It's a lot more holistic than just going after ATO. We're identifying bots, IP risk, and session-based threats – not just ATO specifically.

Jason: I’d say with ATO, a lot of technology just looks at “I want to stop ATO by blocking automation”. You have a lot of bot mitigation products that do that. You could stop ATO by having a multifactor. There’s a lot of ways to try to prevent it. But again, our approach is taking these silos, that are blind, to really understanding if that bot is a real risk and combining that with the verification of the user, combining that with identification of their interactions to other users, environmental things, credential hygiene that the user might have.

Let’s try to be preemptive to stopping attacks. If you think about an ATO, typically it’s from a credential breach. But what if those credentials have been breached but they haven’t been exposed publicly yet? Now how do you determine that? How do you determine if someone has poor credential hygiene?

The credentials might not have been breached, the company might not have been breached, but the user is still a risk to the organization because they possibly share credentials with other people. Or maybe they use their corporate credentials, which should be far more secure and controlled, on social networks. Because of where we sit and the visibility we have, our technology is able to triangulate that. And that provides better visibility, and an earlier indication of possible breach, than a lot of the technologies that, again, operate after boom.

Compliance-driven reports and analytics

Your platform provides compliance-driven reports and analytics. Can you tell us more about that?

Paul: All these bits of information are being streamed. We can either visualize them through our own dashboards, or you can tie the data into a SIEM or SOC. With all of this data, we're able to look at things from a compliance perspective, especially on devices that aren't managed.

Jason: For us, the product has always started with visibility, because what you can't see, you can't control or manage. It's hard to just turn on switches and turn knobs and those types of things. One of the things that we do right out of the box – and because it deploys so easily, like Google Analytics, it's just a line of JavaScript – is that within seconds we light up where those visibility gaps are. That's one of those reports that security practitioners, board members, or the stakeholders around an organization's security posture can look at and say: "All right, this tool has been running. Where do we have gaps in our current technology set?"

Once they recognize that, Cymatic can start remediating it on the fly. We do a lot of auto-remediation. We don't have a lot of rules, because once you start flipping switches and turning knobs without really understanding how the product is making decisions, you actually increase your risk. We try to take all of that decision-making out of the organization, put it into the product, and let the product remediate. Over time you see these gaps start settling and going away.

Out of that also come reports that give organizations insight into their probability of being breached. Because today, if you ask CISOs or anyone in security, "Do you really understand the probability that your organization could be breached?", the typical answer is no – they just don't, with the current toolset. At Cymatic, we can print out this report and they can say: "Okay, it's not that I think we're never going to get breached, but I feel a lot more confident in our ability to defend against any attack we might face."

To find out more, please visit our website at cymatic.io, where you'll also find contact information if you want to learn more about Cymatic or see a demo of how it can help your organization.

Exploring the risky behavior of IT security professionals

Almost 65% of the nearly 300 international cybersecurity professionals canvassed by Gurucul at RSA Conference 2020 said they access documents that have nothing to do with their jobs.

Meanwhile, nearly 40% of respondents who experienced bad performance reviews also admitted to abusing their privileged access, which is double the overall rate (19%).

“We knew insider privilege abuse was rampant in most enterprises, but these survey results demonstrate that the infosecurity department is not immune to this practice,” said Saryu Nayyar, CEO of Gurucul. “Detecting impermissible access to resources by authorized users, whether it is malicious or not, is virtually impossible with traditional monitoring tools. That’s why many organizations are turning to security and risk analytics that look at both employee and entity behaviors to identify anomalies indicative of insider threats.”

Key findings:

  • In finance, 58% said they have emailed company documents to their personal accounts.
  • In healthcare, 33% have abused their privileged access.
  • In manufacturing, 78% accessed documents unrelated to their jobs.
  • In retail, 86% have clicked on a link in an email from someone they didn’t know.
  • In midsize companies, 62% did not alert IT when their job role had changed.

This showcases the problems organizations have with employees behaving outside the bounds of practical, published security policies. The human element is often the deciding factor in how data breaches occur. Monitoring and deterring risky employee behavior with machine learning-based security analytics is the most effective measure for keeping mayhem to a minimum.

People may not realize their behavior is opening the door to cybercriminals, which is why security analytics technology is so critical to maintaining a secure corporate environment.

How organizations can maintain a third-party risk management program from day one

In this podcast recorded at RSA Conference 2020, Sean Cronin, CEO of ProcessUnity, talks about the importance of third-party risk management and how companies can get started with a proven process that works.

Here’s a transcript of the podcast for your convenience.

We’re here with Sean Cronin, CEO of ProcessUnity. Can you tell me about the company and what kind of services and products do you offer?

First off, it’s great to meet you. Thanks for taking the time with us. At ProcessUnity we have a governance risk and compliance platform that’s a SaaS-based platform. Our flagship product is a vendor risk management product that really focuses on third-party risk and vendor management.

These days, certainly a lot of heavily regulated industries, financial services firms, healthcare firms, pharmaceutical firms, are starting to be concerned with who their vendors are, their suppliers, their third parties, and how their data is either exposed or how they’re using that data. We help them understand that relationship.

At this point we’re growing quite rapidly because it’s certainly a hot space. We were talking earlier, third-party risk is kind of becoming a first type of priority for a lot of organizations. And now we’re seeing organizations, that aren’t as heavily regulated, start to say: “It makes sense for us to understand who we’re doing business with”. And then as we expand that footprint, we also help folks in other risk pillars, things outside of third-party risks like policies and procedures, contract management. We’re doing more in tangential areas of third-party risk.

Tell our listeners why should companies be paying attention to third-party risk management?

Third parties certainly have a lot to do with data breaches these days. Read any study – Deloitte, Ernst & Young, any of the unbiased studies out there – and a number of the data breaches are actually coming from third parties and vendors. We recognize that you might have your four walls or your firewalls under control, but what you're doing with other vendors and other folks in your supply chain certainly puts your data at risk. We think that's certainly important.

A lot of these heavily regulated industries are actually getting audited and examined on how well they understand their ecosystem of third parties. But we're also seeing it go down-market: not just the heavily regulated industries, but other areas and other verticals are starting to really think about how they interact with third parties, what data they're sharing, and also what kind of value they can get from those third parties.

Do they understand the metrics and measurements they evaluate those vendors on? Are they getting what they paid for? Are they getting the level of performance they expect? Because of that, I think we can optimize a lot of those relationships and help them better understand the ecosystem in which they operate.

Well, it sounds like a really popular industry. So, how do you see ProcessUnity differentiating itself in the market?

First and foremost, we really took our time to hire subject matter experts in our industry. We've got lots of practitioners with years and years of governance, risk and compliance expertise. They've run third-party risk programs for some of the largest banks and financial institutions in the world, and risk programs in heavily regulated industries. Our people, first and foremost, are a huge differentiator.

Number two is our product. It's incredibly configurable and incredibly easy to use. But that's such a common claim that I actually like to say it's easy to administer. On some of the platforms we compete with, you can do those things, but you need to pay IT developers, other developers, or even the company you purchased the system from to configure it for you. From our perspective, we like to empower our clients to really run the programs and configure the applications on their own. So, from that perspective, I like to say: ease of administration.

It’s also easy to use – not just for our clients, but for the vendors. Think about it: if you’re an important vendor in a vertical like financial services, you’re getting a million of these questionnaires. Wouldn’t it be nice if, when you came in, it was a simple-to-follow survey where you could click and add policies and procedures, and connect everything really simply and easily?

And that’s one of those things that I get proud about, because every once in a while one of my large clients will say: “Hey, I just got this email from a vendor – they said they just filled out your questionnaire and it was one of the easiest systems to use.” From my perspective, that’s us being good stewards to our compadres out there who are vendors – we’re a vendor too. We’re helping eliminate vendor fatigue, because it just makes it easier, so people want to go in, fill in their information, be more proactive with their end user, and actually provide the right information. That’s a point of pride for me, certainly, that vendors and other third parties like filling out the information and find it very intuitive.

Given all that, what kind of advice would you give to a company who is looking to start a third-party risk management program?

First and foremost, whether it’s with our product or other products, think about why you’re doing it, right? Think about what you’re doing with your third-party risk program and what you want to accomplish. A lot of people used to tell me, from a governance, risk and compliance perspective: “I’d like to get through my examination.” That’s not good enough – tell me what you want to understand. Some of my CISOs, CIOs and chief procurement officers say: “We’d like to have a geographical representation of our vendor population. Let’s look at what the geographical concentration looks like. Let’s look at the vendor inventory. Do we have overlap? Do we have too many vendors or third parties in one particular area, where we could consolidate using best practices?”

I just highlighted best practices. Go with somebody who understands the third-party risk challenges. Nowadays, because it’s such a hot space, a lot of folks are saying: “Oh, I do third-party risk!” But when you dig a little deeper, you find they don’t have the depth of expertise. You want a partner that can really bring best practices to bear on your organization. If you’re a very mature organization, we have a really powerful product, and we can configure it to your exact use cases and make it work for you.

But what happens if you’re a little less mature in vendor management and third-party risk? Well, don’t worry. We’ve got a best practices product. It’s a turnkey solution that comes with preconfigured workflows, use cases, all of the user roles and all of the questionnaires set up for you. So, if you just want to get started and you want a more prescriptive, best-practices capability at your disposal, we can help with that as well.

If you’re just starting out in this area, I would say take a look around – it’s important. Then really look at the folks you’re trying to work with and the depth to which they understand the vendor risk management and third-party risk management area. And if it makes sense, an out-of-the-box program like ours is a really great start.

What’s most important about the out-of-the-box product – and my product strategists and product managers always make me promise to say this – is that it’s prebuilt, but you can configure it and make it better. You can mature it as your own program matures, taking it to the next level.

Thank you for the insights Sean! Is there anything else you would like to share with the Help Net Security audience?

I think at the end of the day – I touched on it earlier – third-party risk is a first-order risk priority. It’s no longer a “nice to have”. The reality is, it’s never going to go out of style to understand who you’re doing business with and where your data is – your customer data or, if you’re in healthcare, your patient data.

Understanding that ecosystem, and understanding how you interact with those third parties, is really important. We just stress that: whether you’re in a heavily regulated industry or a different vertical, it’s really important to think about what that ecosystem looks like.

To wrap up, would you invite listeners to visit your website for more information about your products and solutions?

Certainly. We appreciate your time, and we welcome everyone to visit our website – www.processunity.com. We have lots of information and materials to help you understand the space. It’s educational, and it covers all our product offerings as well as many of our client use cases.

The human element in security is still needed to combat application vulnerabilities

While over half of organizations use artificial intelligence or machine learning in their security stack, nearly 60 percent are still more confident in cyberthreat findings verified by humans over AI, according to WhiteHat Security.

The survey responses of 102 industry professionals at RSA Conference 2020 reflect the need for security organizations to incorporate both AI- and human-centric offerings, especially in the application security space.

Three-quarters of respondents use an application security tool, and more than 40 percent of those application security solutions use both AI-based and human-based verification.

AI and machine learning have provided several advantages for cybersecurity professionals over the past several years, especially in the face of the technology talent gap, which has left 45 percent of respondents’ companies lacking a sufficiently staffed cybersecurity team.

More than 70 percent of respondents agree that AI-based tools made their cybersecurity teams more efficient by eliminating over 55 percent of mundane tasks.

The benefits of the human element in security

Nearly 40 percent of respondents also feel their stress levels have decreased since incorporating AI tools into their security stack, and of those particular participants, 65 percent claim these tools allow them to focus more closely on cyberattack mitigation and preventive measures than before.

However, a majority of respondents emphasize that there are skills the human element provides that AI and machine learning simply cannot match. Despite the number of advantages AI-based technologies offer, respondents also reflected on the benefits the human element provides security teams.

Thirty percent of respondents cited intuition as the most important human element, 21 percent emphasized the importance of creativity, and nearly 20 percent agreed that previous experience and frame of reference is the most critical human advantage.

“With the growing cyberthreat landscape, it is imperative for security tools and organizations to have a combination of both AI and the human element so there can be continuous risk evaluation with verified results,” said Anthony Bettini, CTO at WhiteHat Security.

“For all its advantages, AI is still heavily reliant on humans to be successful. Human monitoring and continuous input are required if AI software is to successfully learn and adapt. This is why the human element will never be completely eradicated from the security process.”

(IN)SECURE Magazine: RSAC 2020 special issue released

RSA Conference, the world’s leading information security conference and exposition, concluded its 29th annual event in San Francisco.

More than 36,000 attendees, 704 speakers and 658 exhibitors gathered at the Moscone Center to explore the Human Element in cybersecurity through hundreds of keynote presentations, track sessions, tutorials, seminars and special events.

Download the special issue of the magazine here.

DNS over HTTPS misuse or abuse: How to stay secure

Firefox and Chrome have recently begun supporting external DNS resolvers in the cloud. The use of these DNS services bypasses controls that enterprise IT organizations put in place to prevent end users from visiting unauthorized Internet destinations.

Compounding the issue is that certain operating systems and browsers use new encryption technologies – DNS over TLS (DoT) and DNS over HTTPS (DoH) – in the query response handshake with these unauthorized DNS services that make them harder to block.

In this podcast recorded at RSA Conference 2020, Srikrupa Srivatsan, Director of Product Marketing at Infoblox, talks about these trends and what you can do to safeguard your enterprise environment.

Here’s a transcript of the podcast for your convenience.

Hello, I’m Srikrupa Srivatsan, Director of Product Marketing at Infoblox. Today I’m going to talk about DNS over HTTPS misuse or abuse. You might have heard, or seen in the news recently, about the use of DNS over HTTPS or DNS over TLS to improve the privacy of DNS communications.

DNS, if you look back to when it was first invented, was not created or built with security or privacy in mind. It’s an open protocol, it’s a very trusting protocol, and it’s fundamental to the internet. We use it to access websites, we use it for email – you name it, anything that happens online uses DNS. But because it wasn’t built with security or privacy in mind, the actual communication between your device – a laptop, an iPad, whatever it is – and the DNS server is open. If anybody is snooping, they’ll know exactly which websites you’re accessing.

What that means is there’s a bit of a user privacy issue when somebody does that. To counter that, there are two new developments in the market: DNS over HTTPS and DNS over TLS. These are meant to encrypt communication between the endpoint and your recursive DNS server. The intention is good, and it’s a perfect use case for consumers – at home, or accessing websites from a Starbucks, you don’t want strangers, or attackers, knowing which sites you’re going to.

But in an enterprise setting, when you use DNS over TLS or DNS over HTTPS, it causes security issues. When I say security issues, what I mean is that in DNS over HTTPS, or DoH as it’s called, the DNS queries are encrypted and sent over the HTTPS protocol, which means the enterprise DNS server does not see the request at all. It’s completely bypassed.
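To make the bypass concrete, here is a minimal sketch of a DoH lookup against Cloudflare’s public resolver using its JSON interface (the requests library, the endpoint choice and the example domain are illustrative assumptions). Because the query rides inside ordinary HTTPS on port 443, the enterprise resolver never sees it:

    # Resolve a name over DoH: the DNS query travels inside normal HTTPS
    # traffic, so it never touches the resolver the enterprise configured.
    import requests  # third-party HTTP library, assumed installed

    resp = requests.get(
        "https://cloudflare-dns.com/dns-query",      # public DoH endpoint
        params={"name": "example.com", "type": "A"},
        headers={"accept": "application/dns-json"},  # ask for the JSON answer format
        timeout=5,
    )
    for answer in resp.json().get("Answer", []):
        print(answer["name"], answer["data"])        # A records, fetched over port 443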

When your enterprise DNS server is completely bypassed, your IT admin has an issue. He no longer controls your access to the outside world, to the internet. He no longer knows whether you are in compliance with the company’s security policies, whether your device is secure enough.

Let’s say your security admin has put in some controls on the DNS server to detect things like data exfiltration or malware, C&C communications. Now, when the internal DNS server is bypassed, that security is lost for the user. It’s fine to use though in a kind of consumer setting, but when you think about an enterprise, you want to make sure that you are in control of where the users are going, you have visibility into where users are going, and you’re able to secure where your users are going, those connections.

For that, what we suggest – what we recommend as a best practice – is, number one, to avoid using external DoH resolvers, because these resolvers are sitting somewhere on the internet. They are not authorized by your company’s security admin or IT admin. So you’re connecting to things they have not approved or authorized.

It’s becoming an issue these days because of the browser companies. Mozilla, just today, put out a press release saying they are enabling DoH by default in all Firefox browsers. If you use Firefox, DoH is automatically enabled – that’s the default setting. Your laptop can send its DNS queries to a DoH resolver somewhere on the internet.

We see this trend of certain companies defaulting to DoH. It is a little bit dangerous, because you’re bypassing your internal DNS and you’re bypassing security controls. What we suggest is to use an internal DNS solution that can detect DoH and prevent these browsers from using it.
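As a hedged sketch of what “detect DoH” can look like in practice, one common approach is to match outbound TLS server names against known public DoH resolvers. The hostname list below is a small illustrative sample, not a maintained feed, and this is not Infoblox’s implementation. Separately, Firefox honors a documented canary domain: if the internal resolver answers NXDOMAIN for use-application-dns.net, the browser disables its default DoH.

    # Sample-only list of public DoH resolver hostnames; a real deployment
    # would subscribe to a maintained feed rather than hard-coding a handful.
    KNOWN_DOH_HOSTS = {
        "cloudflare-dns.com",
        "mozilla.cloudflare-dns.com",
        "dns.google",
        "dns.quad9.net",
    }

    def is_doh_candidate(sni_hostname):
        """Flag a TLS connection whose SNI matches a known DoH resolver."""
        return sni_hostname.lower().rstrip(".") in KNOWN_DOH_HOSTS

    print(is_doh_candidate("mozilla.cloudflare-dns.com"))  # True -> block or alert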

And as you may already know, Infoblox is in the DNS security space. We have a solution called BloxOne Threat Defense that provides foundational security – detecting things like malware and C&C communications from your laptops or any other devices. It also detects data exfiltration over DNS. DNS is constantly used to send out data, because DLP solutions and next-gen firewalls do not inspect DNS – it’s a great backdoor for exfiltrating data. We can detect and block that.
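As a rough illustration of why DNS makes such a convenient exfiltration channel – and how it can be spotted – tunneling tools tend to pack payload bytes into long, high-entropy subdomain labels. The thresholds below are invented for the sketch; they are not Infoblox’s detection logic:

    import math
    from collections import Counter

    def shannon_entropy(s):
        """Bits of entropy per character in the string."""
        counts = Counter(s)
        return -sum(n / len(s) * math.log2(n / len(s)) for n in counts.values())

    def looks_like_tunneling(qname):
        label = qname.split(".")[0]  # payload usually rides in the leftmost label
        return len(label) > 40 and shannon_entropy(label) > 4.0  # illustrative thresholds

    # True: a 42-character base32-style label carries over 4 bits of entropy per character
    print(looks_like_tunneling("mzxw6ytboi2dcnrrgq3tmnzzgiydsmrqgu4dqnbtga.evil.example"))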

And we have now also added the capability to prevent the use of DoH in an enterprise setting and make these browsers fall back gracefully to your internal DNS, so that IT admins and security admins retain control. Even if you bring a device into your enterprise network and you are using Firefox, it’ll fall back to the internal DNS rather than connect to a DoH resolver, because we have DoH feeds in our solution that enable that.

That’s definitely the best practice we recommend to all our customers, and to enterprises in general – education, financial services, government, retail, it doesn’t matter what type of company you are. If you are an enterprise with a lot of users and you want to retain control and visibility of where they’re going, and to make sure your company’s policies are enforced, we highly recommend blocking these types of DNS implementations.

With that, I hope I’ve given you a bit of an idea of the downsides of using things like DoH, and of the importance of making your DNS server as secure as it can be, with solutions like BloxOne Threat Defense from Infoblox.

If you need more information on our DNS security solutions, you can visit www.infoblox.com. We do have a solution, like I mentioned, called BloxOne Threat Defense that provides robust foundational security at the DNS layer. Thank you.

What is open threat intelligence and what is driving it?

In this podcast recorded at RSA Conference 2020, Todd Weller, Chief Strategy Officer at Bandura Cyber, discusses the modern threat intelligence landscape and the company’s platform.

The Bandura Cyber Threat Intelligence Protection Platform:

  • Aggregates IP and domain threat intelligence from multiple sources, including leading commercial providers, open source, government, and industry sources.
  • Integrates IP and domain threat intelligence from any source in real time, including from Threat Intelligence Providers and Platforms (TIPs), SIEMs, SOARs, and endpoint and network security solutions.
  • Acts on IP and domain threat intelligence, proactively filtering network traffic in real time at near line speed – see the sketch below.
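As a loose illustration of that aggregate-and-enforce pattern (the feeds and indicators below are placeholders, not Bandura Cyber’s actual integrations), the core idea is merging indicators from many sources into one blocklist and filtering traffic against it:

    from ipaddress import ip_address, ip_network

    # Placeholder indicator sets standing in for commercial, open source,
    # government and industry feeds.
    commercial_feed = {"203.0.113.0/24"}
    open_source_feed = {"198.51.100.7/32"}

    blocklist = [ip_network(cidr) for cidr in commercial_feed | open_source_feed]

    def allow(src_ip):
        """Filter decision: permit traffic only if the source matches no indicator."""
        src = ip_address(src_ip)
        return not any(src in net for net in blocklist)

    print(allow("198.51.100.7"))  # False - matches an open source indicator
    print(allow("192.0.2.10"))    # True  - no feed match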

Here’s a transcript of the podcast for your convenience.

We are here today with Todd Weller, Chief Strategy Officer of Bandura Cyber. First question for the podcast, Todd, what is open threat intelligence and what is driving it?

It’s a great question. Let’s start with the latter point – what’s driving it. What we’re seeing is that organizations of all sizes and sophistication levels are increasing their use of threat intelligence. And what’s driving them to do that is that the threat intelligence you get in your existing security controls alone is insufficient. The reason is that threat intel tends to be proprietary – driven by the vendor, driven by their threat intelligence team, further fueled by what they see within their customer bases. And what organizations are finding is they need a broader view of threat intelligence. It’s got to span multiple commercial sources, open source, industry, and government. That’s really what is driving this movement: a desire for a broader and more open view of threat intelligence.

The first question – what is open threat intel? That’s a great question. I actually googled it coming in, and a lot of the results are about open source threat intelligence. They’re not exactly the same, but there are similarities between the concepts. If I summed it up from a characteristics perspective: open, right? It’s not controlled by any one entity. There’s a community approach – anybody can contribute. And that ties importantly into a big theme of collective cyber defense: we can’t do things alone.

The second would be flexible. It’s threat intelligence that can easily change. You can use the threat intelligence you want. And then I think the third characteristic of open threat intelligence is it’s portable. This threat intel is easy to move, it’s easy to integrate into your environment anywhere you choose.

That’s a really interesting distinction. I think the next question that leads out of that is why is threat intelligence hard to integrate into existing security controls?

There are two key factors there. It starts with the fundamental point that many of those solutions are closed, as I mentioned, so there’s an inherent bias. The value that those solutions provide is their ability to detect and block threats. And again, they do this through their own proprietary threat intelligence, so that powers their core value proposition. There’s really not an incentive to share that or to be open. There’s also not an incentive to really want to incorporate others’ threat intelligence into your solution. That’s the first factor.

The second factor, I would say, is technology limitations. Again, those solutions are built to do a certain thing. We tend to play, or get more exposure, on the network security side of the fence. If you look at next generation firewalls, for example, they’re architected to be a firewall, and today they’re doing much more than being a firewall. They’re doing intrusion prevention. They’re doing deep packet inspection and URL filtering. They’re doing sandboxing, and you add increasing amounts of encrypted traffic on top of that. They’re already doing a lot that puts a heavy burden on the resources of that solution.

There are just significant limitations as a result of that. Many next generation firewalls simply limit the capacity of third-party threat intelligence that you can put into them. Another factor we’ve seen: even setting aside the capacity limitations, policy management in a lot of cases for next generation firewalls is cumbersome. That’s another limitation there.

Todd, going back to open threat intelligence, how would you say the industry is responding to open threat intelligence as a movement?

I’ve seen two fronts there, two responses. The first was a few years back, when some of the vendors banded together in what is called the Cyber Threat Alliance, which continues to persist today. I think Palo Alto Networks was a key founding member, along with Symantec, and I’m sure there are others. The goal was to be able to share threat indicators back and forth.

I think that’s had limited success. Frankly, we don’t hear a lot about Cyber Threat Alliance and actually preparing for this, I was like, does it still exist in all honesty? And again, it goes back to those vendors all trying to provide protection solutions that’s fueled by their own threat intelligence. While it’s nice to say on paper, these big companies are going to share, there tends to be a lack of incentives to do so.

I think you’ve also seen vendors, specific vendors, make moves to try to enable the integration of third-party threat intelligence, to try to make their systems more open. There are some examples of that I would highlight. Palo Alto Networks has an open source project called MineMeld, which will aggregate threat intelligence from multiple sources, they’re helping to automate that.

I think McAfee has been pretty progressive with what they call their DXL, which is a way to tie together not only the whole McAfee portfolio of solutions, but also to make it easy for third-party solutions like ours to integrate in. And then the other dimension here is the security orchestration, automation and response (SOAR) players. They’re trying to facilitate that movement of threat intel between disparate systems.

The challenge with that approach gets back to, again, the limitations of the controls themselves. So, if we take, not to pick on Palo Alto Networks, but they are the market leading firewall provider, right? And they have made moves to do this aggregation of third-party threat Intel. It doesn’t get over the fact that you can only put a small number of third-party indicators into a Palo Alto Networks firewall. So, whether that’s being done by MineMeld, or whether it’s done being a SOAR, there’s just a significant limitation.

When it comes down to it, the two biggest issues are this bias towards proprietary detection, which takes away the incentive to open up, and, again, the architectures of those solutions being full – they’re geared to doing what they already do.

You mentioned Bandura Cyber being integrated into some of those other products and solutions. Tell us what is Bandura Cyber’s role in the open threat intelligence movement today?

Being open is at the core of everything we do, right? We offer what we call the Threat Intelligence Protection Platform. There we aggregate threat intel from multiple sources. We’re partnering with many commercial threat intelligence providers, we’re pulling in open source, and we’re pulling in government and industry through ISAC and ISAO integrations.

For us – we don’t produce our own threat intelligence today, so we’re not dependent on that. We’re partnering; we want customers to be able to use the threat intelligence they want. So we’re taking a proactive step to aggregate and deliver threat intel out of the box from leading providers and all those sources, but we’re also integrating threat intelligence from any source. We do see sophisticated customers – large enterprises spending millions and millions of dollars on threat intelligence feeds from all these sources – and a lot of them will look at a threat intelligence platform, a solution like ThreatQuotient, to aggregate those feeds.

We’re doing integrations like that. We’re partnered with ThreatQuotient, Anomali, Recorded Future and ThreatConnect, and SOAR and SIEM systems are going to be important integrations. And then the critical piece is acting on threat intelligence.

We aggregate, we integrate, but then we’re taking that action piece, and that’s where I think it becomes very interesting for us. You can think of us really as an open threat intelligence enforcement platform. So again, we’re going to be able to take action on threat intelligence from any source. We’re not biased toward our own threat intelligence, and we want to be open and flexible. That doesn’t mean over time we won’t also have some of our own threat intelligence, but it won’t take us away from the heart of what we’re about: open and flexible threat intelligence. Let the customer use what they want, because cyber is dynamic, and the great sources of threat intelligence today are going to be very different tomorrow, five years from now, and 10 years from now.

Automate manual security, risk, and compliance processes in software development

The future of business relies on being digital – but all software deployed needs to be secure and protect privacy. Yet, responsible cybersecurity gets in the way of what any company really wants to do: innovate fast, stay ahead of the competition, and wow customers!

In this podcast recorded at RSA Conference 2020, we’re joined by Ehsan Foroughi, Vice President of Products from Security Compass, an application security expert with 13+ years of management and technical experience in security research. He talks about a way of building software so that cybersecurity issues all but disappear, letting companies focus on what they do best.

Good morning. Today we have with us Ehsan Foroughi, Vice President of Products at Security Compass. We’ll be focusing on what Security Compass calls the Development Devil’s Choice and what’s being done about it. Ehsan, tell me a little about yourself.

A brief introduction: I started my career in cybersecurity around 15 years ago as a researcher doing malware analysis and reverse engineering. Around eight years ago I joined an up and coming company named Security Compass. Security Compass has been around for 14 years or so, and it started as a boutique consulting firm focusing on helping developers code securely and push out the products.

When I joined, SD Elements – the software platform and flagship product of the company – was under development. I’ve worn many hats during that time. I’ve been a product manager, I’ve been a researcher, and now I own the R&D umbrella effort for the company.

Thank you. Can you tell me a little bit about Security Compass’ mission and vision?

The company’s vision is a world where people can trust technology and the way to get there is to help companies develop secure software without slowing down the business.

Here’s our first big question. The primary goals of most companies are to innovate fast, stay ahead of the competition and wow customers. Does responsible cybersecurity get in the way of that?

It certainly feels that way. Every industry nowadays relies on software to be competitive and generate revenue. Software is becoming a competitive advantage and it drives the enterprise value. As digital products are becoming critical, you’re seeing a lot of companies consider security as a first-class citizen in their DevOps effort, and they are calling it DevSecOps these days.

The problem is that when you dig into the detail, they’re mostly relying on reactive processes such as scanning and testing, which find the problems too late. By that time, they face a hard choice of whether to stop everything and go back to fix, or accept a lot of risk and move forward. We call this fast and risky development. It gets the software out to production fast, by eliminating the upfront processes, but it’s a ticking time bomb for the company and the brand. I wouldn’t want to be sitting on that.

Most companies know that they need proactive security – threat modeling, risk assessments, security training. That’s the responsible thing to do, but it’s slow and it gets in the way of speed to market. We call this slow and safe development. It might be safe by way of security compliance, but it opens you up to competitive risk. This is what we call the Development Devil’s Choice: every company that relies on software has two bad choices, fast and risky or slow and safe.

Interesting. Do you believe the situation will improve over time as companies get more experienced in dealing with this dilemma?

I think it’s going to get worse over time. There are more regulations coming. A couple of years ago GDPR came up, then the California Consumer Privacy Act, and then the new PCI regulations.

The technology is also getting more complex every day. We have Docker and Kubernetes, there’s cloud identity management, and the shelf life of technology is shrinking. We no longer have the 10-year end-of-life Linux systems that we could rely on.

So, how are companies dealing with this problem in the age of agile development?

I’m tempted to say that rather than dealing with it, they’re struggling with it. Most agile teams define their work by way of user stories. On rare occasions, teams take the time to compile the security requirements and bake them into their stories. But in the majority of cases, the security requirements are unknown and implicit. This means they rely on people’s good judgment and on expertise. That expertise is hard to find – we have a skills shortage in the security space – and when you do find it, it’s also very expensive.

How do these teams integrate security compliance into their workflow?

In our experience, most agile teams have been relying on testing and scanning to find the issues, which means they have a challenge: when they uncover an issue, they have to figure out whether to go back and fix it, or take the risk and move forward. Either way, it’s a lot of patchwork. When the software gets shipped, everybody crosses their fingers and hopes that everything went well. This usually leads to a lot of silos. Security becomes oppositional to development.

What happens when the silos occur? Are teams wasting their effort? Reworking software?

It adds a lot of time and anxiety. The work ends up being manual, expensive and painfully deliberate. The security and compliance side of the business gets frustrated with development, each side finds inconsistencies in the other’s work, and it just becomes a challenge.

No matter how companies develop software, their steps for security and compliance are likely not very accurate. That means management also has no visibility into what’s going on. There are lots of tools and processes today to check on the software that is being built, but usually they don’t help make it secure from the start. They usually point out the problems and show how it was built wrong.

Finding that out late is a challenge, because it exacerbates this dilemma of development versus security. It’s like being told you wouldn’t have needed heart surgery if you had eaten healthy food for the past 10 years. It’s a bit too late and not particularly helpful.

I’m hearing you describe a serious problem that’s haunting company leaders. It seems they have two pretty bad options for development: fast and risky, or slow and safe. Is that it? Are companies doomed to choose between these two?

Well, there’s hope. There is a third option emerging. You don’t need to be fast and risky or slow and safe. The option is to be nearly as fast – without slowing down – while being secure at the same time. We call it balanced development. It’s similar to how the Waze app knows where you’re driving and tells you, at each step, exactly where you should be going and where you should be turning.

The key is to bring security left in the cycle, iterate rapidly alongside development, and make sure it’s done in tandem. If this is done right, testing and scanning should not find anything at the end of the cycle. These systems mostly leverage automation to balance the development effort between fast-and-risky and slow-and-safe.

Ehsan, can you tell us more about these systems? How do they work and how do they support the jobs of security teams?

Well, automation is the key. It starts by capturing the knowledge of the experts into a knowledge base, and automating it so that the system understands what you’re working on and what you’re doing, and delivers the actions you need to take to bake security in, right at the time you need them.

It also constantly updates the knowledge base to stay on top of regulation changes and technology changes, and during development the teams are advised of the latest changes. When the project is finished, the system is almost done with the security and compliance actions and activities, and all of it is documented so that management can see what risk they are taking on.
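A toy sketch of that knowledge-base idea, with invented rules rather than SD Elements content: the project declares its traits (stack, data handled), and every matching rule emits a security or compliance task to schedule alongside the user stories:

    # Invented rules: each maps a set of project traits to a required task.
    KNOWLEDGE_BASE = [
        ({"web", "auth"}, "Enforce MFA and account lockout on login"),
        ({"web"}, "Set Content-Security-Policy and other security headers"),
        ({"docker"}, "Pin base images and scan them in CI"),
        ({"pii"}, "Document data retention/erasure for GDPR and CCPA"),
    ]

    def tasks_for(project_traits):
        """Return every task whose rule is fully covered by the project's traits."""
        return [task for traits, task in KNOWLEDGE_BASE if traits <= project_traits]

    for task in tasks_for({"web", "auth", "pii"}):
        print("-", task)  # three tasks match; the docker rule does not fire

Keeping the rules in data rather than in people’s heads is what lets the requirements update as regulations and technologies change.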

Thank you very much for the insight and for the thoughtful discussion. What advice would you give company leaders as they start to tackle these issues?

Well, I have a couple of pieces of advice, mostly based on the companies we have been working with. I would say: stay pragmatic and balanced. Focus on getting 80% fast and 80% secure. Don’t get bogged down. Number two, I would say: educate your organization, especially the executives. Executive buy-in is very important. Without it you can’t change the process, and you can’t do it in silos from within one small team. You have to get people’s buy-in and support.

The next one is investing in automating the balanced approach. This investment is sometimes hard, but the earlier you do it, the better. I see a lot of companies bogged down by investing in the smaller, easier projects, like updating and refreshing their scanning practice. It usually pays off to go to the heart of the problem and invest in that, because all of your future investments are then more optimized.

I also find it useful, when working with developers, to always start with “why”. Why are you doing this? Why are you asking them to follow a certain process? If they understand the business value, they’ll be more cooperative with you.

And finally, try our system. We have a platform called SD Elements that enables you to automate your balanced development.

If anyone’s listening and interested in connecting with you or Security Compass, how can they find you?

Well, you should check out our website at www.securitycompass.com. We’d love to prove our motto to you: Go fast and stay safe. Thanks for joining us.

Corporate cybersecurity concerns and spend continue to rise, but so do breaches

More than 50 percent of security and IT leaders agree that they are very concerned about the security of corporate endpoints given the prevalence of sophisticated attack vectors like ransomware, disruptionware, phishing and more, according to a survey from RSA Conference 2020 by Absolute.

Cybersecurity spending on the rise

According to recent industry reports, 2019 saw a record number of more than 5,000 breaches, as well as “an unprecedented and unrelenting barrage of ransomware attacks” in the U.S. that impacted at least 966 businesses, government agencies, educational establishments and healthcare providers, at a potential cost of more than $7.5 billion.

It’s no surprise, then, that global cybersecurity spending continues its steep incline – estimated to reach $174 billion by 2022 – as organizations work to thwart attackers and mitigate breaches with additional layers of security controls.

When it comes to how organizations are spending these dollars to protect sensitive data and devices, more than 80 percent of respondents reported the use of endpoint security tools and multi-factor authentication.

Prevention is essential

More than half of the respondents also relayed that prevention remains the core area of security focus and investment, even though a recent study shows that 60 percent of data breaches are the result of vulnerabilities that the enterprise already knew about but failed to address.

“The reality is that cybersecurity concerns and spend continue to rise, and yet so do breaches,” said Christy Wyatt, CEO of Absolute.

“To best prepare for the inevitable, companies need to look for capabilities that allow their endpoints to quickly heal and bounce back if they should become compromised or removed during an attack. The organizations that adopt a resilience-based strategy will be the ones who are able to respond, recover, and minimize the impact of a breach.”

Other key findings

  • Nearly three in four respondents were familiar with the concepts of ‘cyber resilience’ and ‘endpoint resilience.’
  • A significant population – more than one in three – noted incident response, recovery, or resilience as the most important element of their organization’s strategy, while 55 percent said prevention was key.
  • More than three in four respondents reported their organizations are using endpoint security tools, multi-factor authentication, and employee training and education to protect data, devices, and users, while less than half noted the use of tools focused on tracking missing, lost, or stolen devices or ensuring vendor/partner security.

Security operations and the evolving landscape of threat intelligence

In this podcast recorded at RSA Conference 2020, we’re joined by the ThreatQuotient team talking about a threat-centric approach to security operations, the evolution of threat intelligence and the issues surrounding it.

Our guests are: Chris Jacob, VP of Threat Intelligence Engineering, Michel Huffaker, Director of Threat Intelligence at ThreatQuotient, and Ryan Trost, CTO at ThreatQuotient.

Here’s a transcript of the podcast for your convenience.

We are here today with the ThreatQuotient team to talk about all things security operations, the human element of cybersecurity, and the evolving landscape of threat intelligence. I am joined by Ryan Trost, Chris Jacob and Michel Huffaker. Will you all please introduce yourselves?

Ryan Trost, co-founder and CTO at ThreatQuotient. Ultimately kind of a SOC dweller for most of my career – from system administration, up to security analyst, up to incident response and then SOC manager, formerly at General Dynamics.

Michel Huffaker, I’m the Director of Threat Intelligence at ThreatQuotient. I started my career in the Air Force and moved up through government, eventually landing in the private sector at iSIGHT Partners for five years before ultimately coming to ThreatQuotient.

I’m Chris Jacob. I’m the Vice President of Threat Intelligence Engineering. I’ve been on the cyber side of things for about the last five or six years, before that grew up more in the infosec side of the world, spending most of my time at Sourcefire.

The first question for today’s discussion is about customer challenges. I know at ThreatQuotient you hear a lot about, and this is a direct quote I believe, your “customers struggle with ingesting all the stuff”. Let’s dissect this a little bit. What is the stuff that these customers are referring to that they’re challenged by?

Ryan: From my experience, threat intelligence teams that didn’t come through the military and didn’t have formal training ultimately ended up being pack rats, basically getting their hands on anything and everything they could. That has its benefits, but it also leaves a lot of deep, dark skeletons from a collection standpoint – how do you sort through it all?

And I think teams have to really set goals: “this is my objective, this is what I want to do, this is the data I need to do it”. You start to look at data as “nice to have” versus “must have”. Then, as you meet those objectives, you can widen that net, as they say, rather than trying to boil the ocean, which gets teams in lots and lots of trouble.

Michel: Yeah, I agree. There are a lot of data hoarders. People just wanted to have as much information as they could, but it’s very difficult to operationalize that. You still need as much information as you can get, but it needs to be the right information. I think that as the industry has matured, people are really starting to understand: you still have to deal with a lot of data, but if you have the relevant data, the right data, you can actually take action on it.

Chris: Unsurprisingly, I agree with both of these guys. I think it’s not a bad thing to have all the data, as long as you can get to the data you need easily – as long as the problem is finding the needle in the haystack, not “which haystack do I even look in?”. Having it all can be good in some instances because, depending on the tools you’re using to operationalize the data – SIEMs, for instance – you can cast a much wider net. They handle large amounts of data.

But if you’re dealing person to person, or with tools like firewalls – things that have a lower threshold for the amount of data they can handle – you need to make sure you’re sending the right data there, through that lens. Capture it all, but make sure you can bubble up to the top what’s really important to your organization.

So, all of these points remind me a lot of the highly debated “which came first, the chicken or the egg” discussion as it relates to threat intelligence. So, when it comes to security operations, which should a company be implementing first, the threat intelligence feeds or an actual platform? Or does that even matter?

Ryan: Optimally, both. However, teams have to have somewhat of a strategy and a roadmap. In previous lives we had the same build-it-or-buy-it decision, and you need to create those milestones, that justification, to get approval to buy certain tools. So a lot of teams start with: “okay, let’s begin with open source”. It’s free, it’s widely available, there are so many open source feeds out there – and then they have to figure out where to put all that stuff.

Early analysts were putting it into spreadsheets, so every analyst had their own spreadsheet, and there was some benefit in that. However, you quickly reach the ceiling of value; hopefully you hit a couple of milestones that you can get traction on with the executives, and then escalate to buying something. In conclusion, it’s ultimately both, but it depends on the team, the logistics, and so forth.

Chris: I think we focus so much on incoming information, and on that being the purpose of having a platform, but we need to spend more time talking about the delivery side. The reason a platform like this is so important isn’t just for the analyst to have a tool to store things in and work in, but ultimately for them to deliver that product – the intel they’ve refined and polished up.

How do they get that to the security teams? That’s an important part of the platform that, I think, gets overlooked quite a bit. In my opinion, you have to start with a platform. Obviously, there are intel feeds out there, from open source all the way up to very expensive feeds. But you have to have the infrastructure in place for the analyst to work in, number one, but also, ultimately, to deliver that finished product to their customers, which would be the security teams.

Michel: I agree that bringing external information and intelligence in is important, but at the same time it’s often overlooked – the wealth of information you have internally. If you have the right tools, the right platforms to pull that kind of metadata out of your own security stack, that’s the best way to understand who’s actually coming after you, who are the people who’ve been there before.

If you, like Ryan was saying, if you don’t have the budget tolerance to do both, if you bring the platform in first, then you can at least see what’s happened in your organization in the past, and then kind of predict based on that. Then you kind of create your own feed at the same time that you bring the platform in.

Michel, I heard you say “knowing who’s coming after you”. On that note, attribution has always been a hot topic in threat intelligence. To some of us, it’s more important to know the motivation behind an attack than to know exactly who the attacker is. What, between the three of you, are your thoughts on this, and how does the theme of the human element tie into attribution?

Michel: Attribution matters to some people. There are some organizations that have the maturity to care, and I say that because in the end it doesn’t matter. If you’re head down and you’re looking at your organization, you’re trying to figure out who’s coming after you, that’s less important than what they’re after, what their motivations are.

There are some benefits to it, in the sense of an internal marketing effort. If you could put a scary face or a scary mascot on top of something as a threat intel team, it gives you the ability to communicate internally really well. You can say scary guy one, two, three is after us, and that means something to your C-suite.

But on the whole, there’s a huge level of effort for very little gain, in terms of just finding out who that is. From the human perspective, it’s easy for us in the industry to batch all these actions together under one adversary group. But I think it’s important to remember these are humans on the other side, right? It’s humans fighting humans in this weird cyberspace.

If you think about it in that sense, it gives you a little bit of a leg up understanding operational patterns and things like that. It’s important to remember that they’re actually people.

Ryan: I completely agree with Michel. I think adversaries are just human by nature, and humans are creatures of habit. A lot of the adversaries, they’ll become experts in one attack vector, maybe one or two, and they’ll stick with that because that’s benefited them and that’s what they know.

The more the defenders know about that person, that human element, and what they gravitate towards, the easier it is to defend against them. So, I think it’s very important to know who it is. Maybe not full attribution – unless you’re prosecuting, in that capacity it doesn’t really make any sense. But again, it helps you organize your defense, your tools and technologies, to stop the adversary left of boom.

Chris: I think, to that point, who it is doesn’t really matter. What matters is being able to put a box around it, to say: “This is the container I’m using to track the tactics and techniques that I see here”. That allows you to test your theories: “This looks familiar to me. I think it’s this adversary – let me deploy these countermeasures to defend.” And also to test and prove whether this is in fact the same group, the same organization, or someone different.

I think the vast majority of people in the commercial world aren’t directly facing named adversaries. That said, you shouldn’t minimize it. Again, it’s good to be able to group things together so that you can recognize the patterns and know how to protect your organization from specific types of threats.

Pulling on that thread a little more: when we talk about a security incident as it’s unfolding, who is responsible for coordinating actions within a company? Is it more of a human response or an automated response from technology, or both? Putting ThreatQ into the conversation at this point, can you walk us through what that process might look like internally? How does a tool like ThreatQ Investigations play into this? Who is responsible for those security incidents as they’re happening?

Ryan: In my experience, it ranges drastically based on the team, the budget, the technologies involved, and so on. In two previous roles, the incident or event was largely triggered by a SIEM correlation or some type of hunting expedition. The technology raises the red flag: this is suspicious.

That ultimately triggers an analyst to really look at it and dive into information gathering, to see if their spidey sense is triggered – or potentially an automated playbook gathers that information, whether it’s snapshotting the host and running it through a couple of smoke tests, and so forth.

Ultimately, an analyst is going to review the information and determine whether the event or alert needs to be escalated to an incident. Once that handoff happens, the incident response team usually gets involved, led by a team lead who runs it for the life cycle of the case. But again, it ranges drastically – whether your team is two people or 50, geographically spread out – it really, unfortunately, is all over the place.

Chris: The better question to dig into is how this is all coordinated, right? Because there are multiple teams involved, and those teams don’t necessarily communicate well with each other. The key is having a platform that allows those teams to just perform their work, but captures all that information so that all of them are singing off the same sheet of music.

If the SOC is going through SIEM matches and adding color, adding information, then the incident response team has that information at their fingertips through using a platform and having integrations. Because ultimately, it’s all about the context. Team A might have this piece of information that doesn’t mean anything to them, so they don’t think to share it with the team down the hall that’s working the same incident. But if the team down the hall had that little piece of information, it would change their view of the incident altogether.

It’s about really coordinating across the teams because, as you said about the human element, people don’t communicate with each other well. So if we can do it machine to machine, it works out a lot better. And then, getting into ThreatQ Investigations (TQI), that is a chance for all those teams to come back together after each one has worked their piece of the incident separately. Let’s get together and build out the evidence map of how we’re going through the incident, and uncover those little pieces we might not see if we worked in our own silos.

Ryan: And Chris is absolutely right. Where you get multiple teams working together – and this is where IR tabletop exercises really are critical to a team’s success – a lot of times IR is coordinating, but they don’t have access to the financial databases. So they need to go to the financial team, or they don’t have access to certain apps, certain things that require reaching out to a completely different department that isn’t security focused and asking for help. And usually those departments are completely open to it, especially when it’s wrapped around an incident. It’s essential.

Michel: And there’s a pacing element to that as well. All these teams work at different paces, right? Think of the difference between emergency responders at a fire: there are the people who come in and put the fire out, and then there are the people who do the investigation to see what caused it. Those are two drastically different paces addressing two drastically different problems that ultimately come together.

When you’re talking about who handles things, having a place where people can work at their own pace, but still benefit from each other’s work at the pace that’s necessary for their specific job function is critical. Because if you allow that investigation to go on too long from the threat intelligence perspective, you lose sight of the urgency where you can get the cooperation from the other business units. So, you need those people who can go out and tactically respond, and then those that come in overarching and do the in-depth investigation.

What I’m hearing you all talk about is really how security operations help internally orchestrate all of the technology, all of the people, and ultimately help an organization make better business decisions. So, changing gears a bit, let’s talk about another important piece of that, which is most security teams have to do some sort of reporting. How has this evolved over the years? Where is the process of reporting metrics to executive leadership today? And how important is the ability to generate metrics from threat intelligence tools that organizations are using?

Ryan: From my experience, reporting is a huge benefit to an organization or a tool when it’s done correctly. I think a decade ago, reporting was purely quantitative. How many alerts, how many incidents, how many investigations, how many vulnerabilities, so on and so forth, and that was it. And it only got to the director level, it never went up.

However, with more focus on security and more “okay, why, and what next?”, a lot of reporting has matured. You still get the traditional quantitative stuff, but now it’s “okay, let’s break down those numbers of alerts” – by attack vector, or by adversary attribution. So, it’s a lot more trending versus point-in-time. And that’s making it up to the C-level, if not the board of directors. And that’s huge.
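A trivial sketch of that shift from point-in-time counts to trending (the alert data below is made up): bucket alerts by period and attack vector instead of reporting a single total:

    from collections import Counter

    # Invented alert log: (quarter, attack vector) pairs
    alerts = [
        ("2020-Q1", "phishing"), ("2020-Q1", "phishing"), ("2020-Q1", "ransomware"),
        ("2020-Q2", "phishing"), ("2020-Q2", "c2"), ("2020-Q2", "c2"),
    ]

    trend = Counter(alerts)  # counts per (quarter, vector) pair
    for (quarter, vector), count in sorted(trend.items()):
        print(quarter, vector, count)  # e.g. "2020-Q1 phishing 2"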

And for a lot of security teams, historically, reporting wasn’t a primary focus. I was running a government SOC, and we literally had two FTEs dedicated to reporting, to the point where the reports were beautiful brochures. But that’s what the government wanted. They wanted the sexy eye candy in the reports – the charts, the infographics and stuff like that. That’s what spoke to them.

I think a lot more teams need that little bolster, something that escalates their visibility and really shows the larger organization: “this is what I’ve done for you lately, this is how I’m helping, this is what I’m predicting”. And hopefully they hit a couple of those milestones.

Chris: Reports, in my mind, fall into two different buckets. On one side you have the more human-consumable reports, where you’re writing about a trend, maybe tracking a specific adversary or their TTPs. The other side, which I think could be very interesting, is reporting on the efficacy of the tools.

It’s interesting to do a before and after report based on implementing a threat intelligence platform. “What effect am I having on the efficacy of my security tools? I had X amount of alerts before I started to apply this threat intelligence. Now do I have Y? Did it get better? Did it get worse?” That’s an interesting side of reporting that I don’t think people spend a lot of time thinking about.

Michel: Going back to what Ryan was saying, the curse of well-done security, just like well-done intelligence, is that you don’t hear anything about it. If everything is effective, there’s nothing to say – it’s all quiet, everything’s good. And it’s expensive to implement a really well-done security operations team, including threat intelligence.

For a long time there were C-suites questioning this huge investment without any sort of feedback on what was happening. And I think the view of security as a cost center has changed a lot now that people can actually say: “Look at the loss we prevented had this incident occurred within our network. It didn’t, because we have these platforms, we have this intelligence in play – but look at what it would have done. Look at what we saved you.”

I think changing it from a cost center to a loss-prevention perspective has really helped. And that’s all built around qualitative metrics: how effective is your threat intelligence program, how effective are your tools, and how well is everything operationalized and working together.

Thank you all so much for the discussion today. Before we wrap up, is there anything else that you would like to add or share with the listeners?

Chris: If you’re interested in learning more, we’ve actually broken down different use cases for different teams, and have that all written up on our website. Whether you’re live in the SOC, whether you’re an incident response person, check out the different use cases, different write-ups, and the different videos that we have for each of those personas.

Exploring the impact that hybrid cloud is having on enterprise security and IT teams

While enterprises rapidly transition to the public cloud, complexity is increasing while visibility and team sizes are decreasing and security budgets remain flat – posing a significant obstacle to preventing data breaches, according to FireMon’s 2020 State of Hybrid Cloud Security Report.

“As companies around the world undergo digital transformations and migrate to the cloud, they need better visibility to reduce network complexity and strengthen security postures,” said Tim Woods, VP of Technology Alliances for FireMon. “It is shocking to see the lack of automation being used across the cloud security landscape, especially in light of the escalating risk around misconfigurations as enterprises cut security resources. The new State of Hybrid Cloud Security Report shows that enterprises are most concerned about these challenges, and we know that adaptive and automated security tools would be a welcomed solution for their needs.”

Security challenges

As enterprises increasingly transition to public and hybrid cloud environments, their network complexity continues to grow and create security risks. Meanwhile, they are losing the visibility needed to protect their cloud systems: lack of visibility was the biggest concern, cited by 18 percent of C-suite respondents, who now also require more vendors and enforcement points for effective security.

The 2020 FireMon State of Hybrid Cloud Security Report found that:

  • Business acceleration outpaces effective security implementations.
  • Nearly 60 percent believed their cloud deployments had surpassed their ability to secure the networks in a timely manner. This number was virtually unchanged from 2019, showing no improvement against a key industry progress indicator.
  • The number of vendors and enforcement points needed to secure cloud networks is also increasing; 78.2 percent of respondents are using two or more enforcement points, up substantially from the 59 percent using two or more enforcement points last year. Meanwhile, almost half are using two or more public cloud platforms, which further increases complexity and decreases visibility.

Shrinking budgets

Despite increasing cyberthreats and ongoing data breaches, respondents also reported a substantial reduction in their security budgets and teams from 2019. These shrinking resources are creating gaps in public cloud and hybrid infrastructure security.

Budget reductions increase risk: There was a 20.7-point increase from 2019 in the share of enterprises spending less than 25 percent of their total security budget on cloud security; 78.2 percent now fall into this group (vs. 57.5 percent in 2019). Meanwhile, 44.8 percent of this group spent less than 10 percent of their total security budget on the cloud.

Security teams are understaffed and overworked: While the cyberattack surface and the potential for data breaches continue to expand in the cloud, many organizations trimmed the size of their security teams; 69.5 percent had security teams of fewer than 10 people (compared to 52 percent in 2019). The number of five-person security teams also nearly doubled, with 45.2 percent reporting this smaller team size versus 28.5 percent in 2019.


Lack of automation and third-party integration fuels misconfigurations

While cloud misconfigurations due to human-introduced errors remain the top vulnerability for data breaches, an alarming 65.4 percent of respondents are still using manual processes to manage their hybrid cloud environments. Other key automation findings included:

Misconfigurations are biggest security threat: Almost a third of respondents said that misconfigurations and human-introduced errors are the biggest threat to their hybrid cloud environment. However, 73.5 percent of this group are still using manual processes to manage the security of their hybrid environments.

Better third-party security tools integration needed: The lack of automation and integration across disparate tools is also making it harder for resource-strapped security teams to secure hybrid environments. As such, 24.5 percent of respondents said that not having a “centralised or global view of information from their security tools” was their biggest challenge to managing multiple network security tools across their hybrid cloud.

By harnessing automated network security tools, robust API structures and public cloud integrations, enterprises can gain real-time control across all environments and minimize the challenges created by manual processes, increasing complexity and reduced visibility. Automation is also an antidote to shrinking security budgets and teams, enabling organizations to maximize resources and personnel for their most strategic uses.
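As one concrete illustration of the kind of automation the report calls for, the sketch below (a hypothetical example, not something from the FireMon report) uses the AWS boto3 SDK to flag security groups that allow inbound traffic from anywhere, one of the most common human-introduced misconfigurations. It assumes boto3 is installed and AWS credentials are configured in the environment.

```python
# Minimal sketch: flag AWS security groups open to the world (0.0.0.0/0).
# Hypothetical example of automated misconfiguration checking.
import boto3

def find_open_security_groups(region="us-east-1"):
    ec2 = boto3.client("ec2", region_name=region)
    findings = []
    for page in ec2.get_paginator("describe_security_groups").paginate():
        for sg in page["SecurityGroups"]:
            for rule in sg.get("IpPermissions", []):
                for ip_range in rule.get("IpRanges", []):
                    if ip_range.get("CidrIp") == "0.0.0.0/0":
                        findings.append({
                            "GroupId": sg["GroupId"],
                            "GroupName": sg["GroupName"],
                            "FromPort": rule.get("FromPort", "all"),
                            "ToPort": rule.get("ToPort", "all"),
                        })
    return findings

if __name__ == "__main__":
    for finding in find_open_security_groups():
        print("World-open ingress:", finding)
```

A check like this can run on a schedule and feed a dashboard, replacing the kind of manual review that 65.4 percent of respondents still rely on.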

RSA Conference 2020: 36,000 attendees, 704 speakers and 658 exhibitors

RSA Conference concluded its 29th annual event in San Francisco last Friday. More than 36,000 attendees, 704 speakers and 658 exhibitors gathered at the Moscone Center last week to explore the Human Element in cybersecurity through hundreds of keynote presentations, track sessions, tutorials, seminars and special events.


Some of the most pressing topics included privacy, machine learning and artificial intelligence, policy and government, applied crypto and blockchain, and, new for RSA Conference 2020, open source tools, product security and anti-fraud.

“Our mission is to connect cybersecurity professionals with diverse perspectives and backgrounds to inspire new ways of thinking and push the industry forward,” said Linda Gray Martin, Senior Director and General Manager, RSA Conference. “This week proved the importance and impact of the human element in cybersecurity, and we thank all of our attendees for bringing their passion, commitment and ideas to RSA Conference for another amazing year.”

RSA Conference 2020 highlights

  • 29 keynote presentations on two stages. West Stage keynotes featured sponsor keynotes, panels and esteemed guest speakers, and South Stage keynotes brought highly coveted sessions from industry experts to a broader audience.
  • 704 speakers across 520 sessions and 658 companies on the expo floors.
  • SECURITI.ai was named RSA Conference 2020’s “Most Innovative Startup” by a panel of expert judges during the fifteenth annual RSAC Innovation Sandbox Contest.
  • Three early stage cybersecurity startups, Dasera, Inc., Soluble and Zero Networks, pitched their ideas to a panel of VCs and walked away with invaluable feedback in the second annual RSAC Launch Pad.
  • Professor Joan Daemen and Professor Vincent Rijmen, two world-renowned cryptographers, received the annual RSAC Excellence in the Field of Mathematics Award.
  • Over 130 CISOs participated in the second annual CISO Boot Camp, a one-and-a-half day program designed to spark open and frank conversations between top cybersecurity leaders in a closed-door environment.
  • RSAC College Day welcomed 650 college students, recent grads and faculty to network with leading companies, explore career opportunities, attend dedicated education events and experience RSA Conference sessions and the expo floor.

“Cybersecurity and privacy are defining issues of our time,” said Dr. Hugh Thompson, Program Committee Chair at RSA Conference. “Some of the new threats we face can only be addressed through collaboration and innovation; I believe that this year’s event has helped to catalyze both. I’m looking forward to seeing what we can continue to accomplish together as a community and to gathering together again at RSA Conference 2021.”

For the most important news, product releases and podcasts from RSA Conference 2020, check out our microsite.

SECURITI.ai named Most Innovative Startup at RSA Conference 2020

SECURITI.ai was selected as the winner of the fifteenth annual RSA Conference Innovation Sandbox Contest and named “Most Innovative Startup” by a panel of leading venture capitalists, entrepreneurs and industry veterans.


SECURITI.ai is a leader in AI-powered PrivacyOps. Its PRIVACI.ai solution automates privacy compliance with patent-pending People Data Graphs and robotic automation, enabling enterprises to give people rights over their data, comply with global privacy regulations and build trust with customers.

“We are honored to join such an impressive roster of past recipients,” said Rehan Jalil, CEO of SECURITI.ai. “Privacy is a basic human right and companies want to honor individual rights of privacy and data protection. Privacy compliance and operations are only getting more complex for businesses around the world, and we’re humbled that the judges recognized our vision for AI-powered PrivacyOps and data protection.”

In its fifteenth year, the RSAC Innovation Sandbox Contest is a leading platform for startups to showcase their groundbreaking technologies that have the potential to transform the cybersecurity industry. Since its inception, the RSAC Innovation Sandbox Contest’s top 10 finalists have collectively seen 56 acquisitions and received over $6.2 billion in investments.

This year’s finalists were: AppOmni, Blu Bracket, Elevate Security, ForAllSecure, INKY Technology, Obsidian Security, SECURITI.ai, Sqreen, Tala Security, and Vulcan Cyber.

Past winners include companies such as Imperva, BigID, Phantom, and most recently, Axonius.

“The cybersecurity industry faces new threats, changes and challenges every day, which is why we’ve committed more than a decade to encouraging and rewarding innovation in the space through the RSAC Innovation Sandbox Contest,” said Linda Gray Martin, Senior Director and General Manager, RSA Conference.

“The finalists on stage, regardless of the competition’s outcome, will undoubtedly make a lasting impact on the industry. SECURITI.ai, in particular, demonstrated a unique vision that addresses one of the biggest challenges that businesses face today, and we look forward to witnessing the company achieve great things for years to come.”

Photos: RSA Conference 2020, part 5

RSA Conference 2020 is underway at the Moscone Center in San Francisco. Check out our microsite for the conference for all the most important news.

Here are a few photos from the event, featured vendors include: MobileIron, CodeScan, BlockChain Security, DigiCert, LogRhythm.

Photo gallery: RSA Conference 2020

Other photos from the conference are available (1, 2, 3, 4).

Hacking has become a viable career, according to HackerOne

HackerOne announced findings from the 2020 Hacker Report, which reveals that the concept of hacking as a viable career has become a reality, with 18% describing themselves as full-time hackers, searching for vulnerabilities and making the internet safer for everyone. Not only are more hackers spending a higher percentage of their time hacking, they’re also earning a living doing it.


The annual report is a study of the bug bounty and vulnerability disclosure ecosystem, detailing the efforts and motivations of 3,150 hackers from over 120 countries who successfully reported one or more valid security vulnerabilities on HackerOne.

“Hackers are a global force for good, working together to secure our interconnected society,” said Luke Tucker, Senior Director of the Global Hacker Community. “The community welcomes all who enjoy the intellectual challenge to creatively overcome limitations. Their reasons for hacking may vary, but the results are consistently impressing the growing ranks of organizations embracing hackers through crowdsourced security — leaving us all a lot safer than before.”

Key findings include:

  • Global growth of bug bounty programs is being followed by the globalization of the hacker community. Hackers from Switzerland and Austria earned over 950% more than in the previous year, and hackers from Singapore, China, and other countries in APAC earned over 250% more than in 2018.
  • The hacker community continues to grow at a robust pace, nearly doubling in the past year to more than 600,000 registered hackers.
  • Hundreds of hackers are registering to join the ranks every day — nearly 850 on average — working to secure the technologies of more than 1,700 global customer programs.
  • Hacking also provides valuable professional experience, with 78% of hackers using their hacking experience to help them find or better compete for a career opportunity.
  • Hacking is becoming a popular income supplement or career choice. Nearly 40% of hackers devote 20 hours or more per week to their search for vulnerabilities. And 18% of our survey respondents describe themselves as full-time hackers.
  • Most of the polled hackers are self-taught, underscoring the importance of community and online resources.
  • Hackers earned approximately $40 million in bounties in 2019 alone, which is nearly equal to the bounty totals for all preceding years combined. At the end of this past year, hackers had cumulatively earned more than $82 million for valid vulnerability reports.
  • In addition to the seven hackers who have passed the $1 million earnings milestone, thirteen more hit $500,000 in lifetime earnings.
  • Hackers in the U.S. earned 19% of all bounties last year, with India (10%), Russia (8%), China (7%), Germany (5%), and Canada (4%) rounding out the top 6 highest-earning countries.
  • Most of the polled hackers prefer to hack websites (71%); others target APIs, iOS and Android mobile apps, and other software.
  • The polled hackers found the Burp Suite to be the most useful tool when hacking (89%), followed by tools they built (39%), fuzzers (32%), and web proxies/scanners (25%).


“No industry or profession has experienced an evolution quite like hacking,” explained Tucker. “It started in the darkest underbelly of the internet, where hackers roamed the online world in search of vulnerabilities. It later grew into a respectable hobby, something that talented people could do on the side. Now it’s a professional calling: hackers, pentesters, and security researchers are trusted and respected, and they provide a valuable service for us all.”

This tectonic shift is happening at every corner of the globe. Hackers today are living in countries like Panama, New Zealand, Hungary, Senegal, Cuba, Vietnam, and Venezuela, working to make the internet safer for everyone. As hacker-powered security programs become ubiquitous, it’s easy for hackers to find new and potentially lucrative opportunities from anywhere — all they need is an internet connection. This is, in part, due to the global growth of hacker-powered security programs.

Federal governments led the pack across the globe in 2019 with the strongest year-over-year industry growth at 214%, and last year saw the first programs launched at the municipal level, according to the 2019 Hacker-Powered Security Report. HackerOne launched 22 government programs in 2019 alone and 36 altogether since 2016, with governments in North America, Asia and Europe. Every minute of every day, hackers and companies across the globe come together to make the internet safer for everyone.

What is plaguing public sector cyber readiness?

IT complexity, insider threats, and an abundance of privileged users plague public sector cyber readiness, a SolarWinds report has revealed. The findings are based on answers from 400 IT operations and security decision-makers, including 200 federal, 100 state and local, and 100 education respondents.

Careless and untrained insiders the leading source of security threats

For the fifth year in a row, careless and untrained insiders are the leading source of security threats for public sector organizations.

  • Fifty-two percent of total respondents cited insiders as the top threat; this number is consistent for both federal and state and local respondents.
  • In the education sector, respondents pointed to the general hacking community (54%) as the top threat.

Budget constraints as top obstacle

Budget constraints, followed by complexity, top the list of significant obstacles to maintaining or improving organizational IT security.

  • Education respondents indicated, more than any other public sector group, that budget constraints are an obstacle to maintaining or improving IT security (44% in K-12), followed by state and local respondents at 27% and federal respondents at 24%.
  • Federal respondents indicated complexity of the internal environment (21%) is one of the most significant obstacles, surpassed only by budget constraints (24%).
  • While budget constraints have declined since 2014 for the federal audience (40% in 2014; 24% in 2019), respondents also recognized the complexity of the internal environment as an obstacle that has increased (14% in 2014; 21% in 2019).


Cybersecurity maturity needs attention

Cybersecurity maturity needs attention across public sector organizations; on average, respondents rated their agency’s maturity at a 3.5 on a scale of one to five.

  • Respondents indicated that their capabilities are most mature in the following areas: endpoint protection (57%), continuity of operations (57%), and identity and access management (56%). However, there was not a single cybersecurity capability for which more than 57% of respondents claimed to be organizationally mature.

Public sector lacks confidence in tackling evolving threats

Less than half of public sector respondents are very confident in their team’s ability to keep up with evolving threats, regardless of whether the organization outsources its security operations or not.

  • Forty-seven percent of the respondents who outsource at least part of their security operations to a managed service provider (MSP), a group representing 28% of total respondents, feel very confident in this ability.
  • The vast majority of respondents (86%) rely on in-house staff as their primary security team. Only 41% of this pool feel very confident in their team’s ability to maintain the right skills.


Evaluating metrics to measure IT security team success

Most public sector organizations measure the success of their IT security teams by evaluating metrics such as the number of detected incidents (58%) or their team’s ability to meet compliance goals (53%), which, as standalone metrics, may not accurately reflect an agency’s risk profile or the IT team’s success.

  • State and local respondents were also likely to consider the number of threats that were averted (56%), while education respondents focused on level of device preparedness (46%).
  • Seventy-five percent of respondents indicated compliance mandates or regulations such as GDPR, HIPAA, FISMA, RMF, DISA STIGs, etc., have had a significant or moderate impact on the evolution of their organizations’ IT security policies and practices.

Public sector orgs struggling to segment users by risk level

Public sector organizations struggle to segment users by risk level and manage the security threats posed by both privileged and non-privileged users.

  • Sixty-one percent of respondents formally segment users by risk level; however, the segmentation process is challenging because of the growing number of systems users need access to (48%), the increased number of devices (45%) and the growing number of users (43%). (A minimal segmentation sketch follows this list.)
  • Forty-one percent of respondents claimed to have privileged users not in IT. Privileged users have admin-level access to IT systems, and the extension of too much privilege across an organization can lead to increased risk.
  • Nearly one-third of respondents (30%) have a formal zero-trust strategy in place; another 32% are modeling their approach based on zero trust but don’t have a formal strategy.
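As a rough illustration of what formal user segmentation by risk level can look like in practice, here is a minimal, hypothetical sketch; the tiers and rules are invented for this example and are not from the SolarWinds report.

```python
# Hypothetical sketch of segmenting users by risk level. The tiering
# rules below are illustrative only.
from dataclasses import dataclass

@dataclass
class User:
    name: str
    is_privileged: bool   # admin-level access to IT systems
    in_it: bool           # whether the user belongs to the IT organization
    device_count: int

def risk_tier(user: User) -> str:
    """Assign a coarse risk tier; privilege outside IT ranks highest."""
    if user.is_privileged and not user.in_it:
        return "high"     # extending privilege beyond IT increases risk
    if user.is_privileged or user.device_count > 3:
        return "medium"
    return "low"

users = [
    User("analyst", False, False, 2),
    User("sysadmin", True, True, 4),
    User("finance-lead", True, False, 1),  # privileged, but not in IT
]
for u in users:
    print(u.name, "->", risk_tier(u))
# analyst -> low, sysadmin -> medium, finance-lead -> high
```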

“These results clearly demonstrate the degree to which most public sector organizations are struggling to manage cyber risk,” said Tim Brown, vice president of security for SolarWinds.

“While it’s heartening to see that almost two-thirds of respondents are formally segmenting users—a helpful step in managing risk—the data finds careless and untrained users to still be the weakest link.

“Additionally, we’re seeing a widespread lack of organizational maturity—even in technologies like endpoint protection that have been around forever. It’s therefore no surprise that only four in ten respondents feel very confident their security team can keep up with the evolving threats.”