SOCs across the globe are most concerned with advanced threat detection and are increasingly looking to next-gen automation tools like AI and ML technologies to proactively safeguard the enterprise, Micro Focus reveals.
Growing deployment of next-gen tools and capabilities
The report’s findings show that over 93 percent of respondents employ AI and ML technologies with the leading goal of improving advanced threat detection capabilities, and that over 92 percent of respondents expect to use or acquire some form of automation tool within the next 12 months.
These findings indicate that as SOCs continue to mature, they will deploy next-gen tools and capabilities at an unprecedented rate to address gaps in security.
“The odds are stacked against today’s SOCs: more data, more sophisticated attacks, and larger surface areas to monitor. However, when properly implemented, AI technologies, such as unsupervised machine learning, are helping to fuel next-generation security operations, as evidenced by this year’s report,” said Stephan Jou, CTO Interset at Micro Focus.
“We’re observing more and more enterprises discovering that AI and ML can be remarkably effective and augment advanced threat detection and response capabilities, thereby accelerating the ability of SecOps teams to better protect the enterprise.”
Organizations relying on the MITRE ATT&CK framework
As the volume of threats rises, the report finds that 90 percent of organizations are relying on the MITRE ATT&CK framework as a tool for understanding attack techniques, and that the most common reason for relying on the knowledge base of adversary tactics is for detecting advanced threats.
Further, the scale of technology needed to secure today’s digital assets means SOC teams are relying more heavily on tools to effectively do their jobs.
With so many responsibilities, the report found that SecOps teams are using numerous tools to help secure critical information, with organizations widely using 11 common types of security operations tools and with each tool expected to exceed 80 percent adoption in 2021.
- COVID-19: During the pandemic, security operations teams have faced many challenges. The biggest has been the increased volume of cyberthreats and security incidents (45 percent globally), followed by higher risks due to workforce usage of unmanaged devices (40 percent globally).
- Most severe SOC challenges: Approximately 1 in 3 respondents cite the two most severe challenges for the SOC team as prioritizing security incidents and monitoring security across a growing attack surface.
- Cloud journeys: Over 96 percent of organizations use the cloud for IT security operations, and on average nearly two-thirds of their IT security operations software and services are already deployed in the cloud.
Last week, a “wormable” remote code execution flaw in the Windows DNS Server service (CVE-2020-1350) temporarily overshadowed all the other flaws patched by Microsoft on July 2020 Patch Tuesday, but CVE-2020-1147, an RCE affecting Microsoft SharePoint, was also singled out as critical and requiring a speedy fix.
Implementing the offered security updates has since become even more urgent, as more exploitation details and a PoC were released on Monday.
CVE-2020-1147 is found in two .NET components (DataSet and DataTable) used to manage data sets, and affects Microsoft SharePoint, .NET Framework, and Visual Studio.
The vulnerability is triggered when the software fails to check the source markup of XML file input.
“An attacker who successfully exploited the vulnerability could run arbitrary code in the context of the process responsible for deserialization of the XML content. To exploit this vulnerability, an attacker could upload a specially crafted document to a server utilizing an affected product to process content,” Microsoft explained, and provided security updates for:
- .NET Core
- .NET Framework
- SharePoint Enterprise Server (2013 and 2016)
- SharePoint Server (2010 and 2019)
- Visual Studio (2017 and 2019).
“Full protection requires the installation of the .NET Framework update as well as updates for any additional affected products mentioned in this article,” the company stressed.
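CVE-2020-1147 is specific to the .NET DataSet/DataTable deserializers, but the underlying class of bug, reconstructing objects from attacker-controlled serialized input, is language-agnostic. As a hypothetical illustration (a Python analogue using pickle, not the actual .NET mechanics), a deserializer that rebuilds arbitrary object types can be coerced into running code chosen by whoever crafted the input:

```python
import pickle

# Hypothetical Python analogue (pickle, not .NET's DataSet): deserializers
# that rebuild arbitrary object types from untrusted input can be made to
# execute attacker-chosen code during deserialization itself.

class Payload:
    def __reduce__(self):
        # __reduce__ tells pickle how to reconstruct this object, so an
        # attacker fully controls what gets called at load time.
        return (eval, ("40 + 2",))

blob = pickle.dumps(Payload())   # the "specially crafted document" an attacker uploads
result = pickle.loads(blob)      # eval("40 + 2") runs during deserialization
print(result)                    # 42: code executed merely by loading the blob
```

This is why the standing advice for any language is to never deserialize untrusted input with a format that encodes type or constructor information, and why patching every affected component (not just SharePoint) matters.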
The vulnerability was reported by Oleksandr Mirosh from Micro Focus Fortify, Jonathan Birch of Microsoft Office Security Team, and Markus Wulftange of Code White GmbH.
Information security specialist and prolific bug hunter Steven Seeley decided to probe how the vulnerability might be exploited and recently shared how it can be leveraged against a SharePoint Server instance to achieve RCE as a low privileged user. He also provided a PoC.
“Microsoft rate this bug with an exploitability index rating of 1 and we agree, meaning you should patch this immediately if you haven’t. It is highly likely that this gadget chain can be used against several applications built with .NET so even if you don’t have a SharePoint Server installed, you are still impacted by this bug,” he noted.
The call for immediate patching has been echoed by other security researchers:
I’d argue that CVE-2020-1147 is the issue you should be scrambling to fix, not the “wormable” DNS thing. Empirically deserialization RCEs are way more likely to see malicious exploitation compared to memory corruption bugs that weren’t exploited in the wild prior to patch. https://t.co/7DyaRJMkIM
— Ben Hawkes (@benhawkes) July 20, 2020
— Kevin Beaumont (@GossiTheDog) July 20, 2020
Vulnerabilities in Microsoft SharePoint, a web-based collaborative platform that integrates with Microsoft Office and usually houses a lot of sensitive data, have lately been an attractive target for hackers.
Software-related issues continue to plague organizations of all sizes, so IT leaders are turning to application security testing tools for help. Since there are many types of programs available on the market, choosing one is not a straightforward process.
To select the perfect application security testing solution for your business, you need to think about an array of details. We’ve talked to several industry professionals to get insight to help you get started.
Leon Juranic, CTO, DefenseCode
Choosing the right application security testing solution for your business can be a daunting task for any organization. On the surface, they all appear to function similarly and provide a list of vulnerabilities as part of the results.
Prospective users need to look beyond the superficial and closely examine a couple of important factors and capabilities of any application security testing solutions. Clients should focus on True Positive and False Positive (low noise levels) rates to determine how usable a vendor’s product is in the real world.
Having to spend hours triaging the results to determine if they are real is an expensive overhead for any business. It undermines confidence in the results, unnecessarily increases the workload of development teams, and can ultimately lead to rejection of an AST product.
Secondly, understanding if your workflow can be supported is essential, otherwise, a standalone security product will never be used effectively by development teams. The best approach would be to invest upfront and evaluate a shortlist of vendors to determine if they are a good fit for your business.
Ferruh Mavituna, CEO, Invicti Security
The most important thing is getting real value from your solution in a short time. The goal of application security testing is to get measurable security improvements, not just find issues.
There is no point spending money on a solution that will take months to deploy and get the first results. When selecting your application security solution, time to value in the real world should be your #1 consideration.
Every organization is different, so for web application security, the only approach that works for all sorts of environments is dynamic application security testing. DAST tools scan web applications and APIs for vulnerabilities regardless of programming languages, frameworks, libraries, and so on, which makes them much easier to deploy. They don’t require the application to be in an active development pipeline and you don’t need to install anything on the server.
To get value from your DAST product, you need results that directly lead to security improvements. This requires accuracy, so the scanner finds all the vulnerabilities that you really have, but also confidence in your results, so you don’t waste time on false alarms. You get a list of real, actionable vulnerabilities and you can start fixing them. Then you can see real value from your investment in days, not months.
James Rabon, Director of Product Management, Micro Focus
During the software development lifecycle, there are several approaches that should be followed in order to maintain the speed needed to keep up with today’s release cadence. These approaches, which are crucial for any application security testing tool, are testing early, often, and fast.
SAST identifies the root causes of security issues and helps remediate the underlying security flaws. An effective SAST tool identifies and eliminates vulnerabilities in source, binary, or byte code; lets you review scan results in real time, with access to recommendations and line-of-code navigation to find vulnerabilities faster; enables collaborative auditing; and is fully integrated with the popular integrated development environments (IDEs).
DAST simulates attacks on a running web application. By integrating DAST tools into development, quality assurance, and production, it can offer a continuous, holistic view. A successful DAST tool quickly identifies risk in existing applications, automates dynamic application security testing of any technology from development through production, validates vulnerabilities in running applications, prioritizes the most critical issues for root-cause analysis, and streamlines the process of remediating vulnerabilities.
Successful tools should also be flexible enough for modern deployment models, being available both on-premises and as a service.
Richard Rogerson, Managing Partner, Packetlabs
Application security testing solutions can be delivered in various ways including as a tool/technology or as a professional service. Automation alone is often not enough because it misses critical areas of applications including business logic, authorization, identity management and several others. This is why professional services are the most comprehensive approach.
- Qualifications: Successful consulting engagements have long relied on experience, but it’s difficult to assess experience before selecting a solution which is why certifications are often the best method to ensure a baseline level of knowledge or practical experience. Certifications to ask for include: GWAPT, GXPN, GPEN, OSWE, OSCE, OSCP.
- Methodology: Having a methodical approach to assessing applications is important as it plays heavily into the consistency and thoroughness of the assessment. There are several open-source and industry-standard testing methodologies including the OWASP Testing Methodology, NIST, PTES, ISSAF and OSSTMM. It is also important to review a checklist of all potential vulnerabilities that your application will be tested for and for this – transparency is key.
- Technology: Technology is important in reducing effort requirements and maximizing code coverage. Technologies include DAST, SAST, and IAST. DAST, or dynamic application security testing, is the most common; it evaluates your applications while they’re running, over the HTTP protocol. SAST, or static application security testing, evaluates applications at the line-of-code level. IAST, or interactive application security testing, is an evolving technology that combines both approaches. Tools used must include both automated and manual testing capabilities to help the consultant evaluate vulnerabilities directly from the HTTP request or line of code.
- Reporting: The deliverable of an assessment is a report. When evaluating solutions, it is worthwhile to review sample reports and ensure they meet your requirements and offer sufficient information to understand the discovered findings, and more importantly how to fix them.
Dr. Thomas P. Scanlon, Data Science Technical Manager, CERT Division, Software Engineering Institute, Carnegie Mellon University
There is no universal, best tool for application security testing (AST). The most appropriate tool for one business environment may not be as suitable for another. When selecting an AST solution for a business, four of the most pertinent factors are budget, technology stack, source code availability, and use of open-source components.
- Budget – There are many quality open-source AST tools available for little or no cost. Commercial tools typically have more features and capabilities, so they are worth the investment if they fit the budget. A wise approach is to use an open-source tool first to gain domain experience, then shop and compare commercial tools.
- Technology stack – Large commercial AST tools support multiple programming languages, which may save costs when a business uses many technologies. Some smaller AST tools support only one or two languages but provide much deeper coverage, often best if you only need to support those languages.
- Source code availability – If the applications are developed in-house or the developer provides application source code, testing should use static code analysis tools. Without source code, testing should use dynamic analysis tools.
- Use of open-source components – If the application was developed with many third-party, open-source components, a software composition analysis (SCA) tool is a must. SCA tools detect the versions of all such components in use and list all their known vulnerabilities and, often, mitigations.
Susan St. Clair, Senior Cybersecurity Strategist, Checkmarx
Applications are what drive the vast majority of organizations today, so keeping them secure really means keeping your broader business and customers secure. However, before diving head-first into adopting a new AST solution, it’s important to look at what you already have in place.
Do you have established AppSec security policies or a standard that you’d like to adopt? Do you have an established CI/CD process? Are you already using SAST and looking to add more advanced tools like IAST and SCA into the mix? How closely do your AppSec, DevOps, and development teams work together? What are your developers hoping to get out of an AST tool? How about your AppSec team? Having a solid understanding of where you stand in your AST journey is just as important as the solution(s) you use.
At a minimum, ensure that the tools you choose:
- Work with DevOps to automatically trigger security scans and reduce remediation cycles
- Seamlessly integrate into your DevSecOps and CI/CD pipelines
- Are compatible with the framework and databases you’re already working with
- Offer a one-stop shop model so you can get SAST, IAST, SCA, etc. all in one place without needing to mix-and-match across vendors, ultimately reducing TCO
Making AST a priority can set your organization apart, not only in your ability to build better, more secure applications and code, but also by letting your customers know that you place the utmost importance on delivering an end product they can feel confident in using.
Micro Focus File Analysis Suite: Helping IT admins identify, manage and secure sensitive information
Micro Focus announced the release of Micro Focus File Analysis Suite, offering IT administrators a comprehensive data lifecycle management solution for identifying, managing and securing sensitive information across the enterprise.
As organizations implement protocols to meet national and international data regulations, the Micro Focus File Analysis Suite lowers the total cost of compliance, reduces risk and provides analytical insight and value across high-value and sensitive data assets.
“With Micro Focus File Analysis Suite, customers no longer need to choose between traditional platforms that are limited to only offering storage optimization or data access and governance,” said Rick Carlson, Vice President, Information Management & Governance Solutions at Micro Focus.
“Featuring tools that combine detailed information analysis and insight with extensive risk assessment tools, our unified solution lowers the total cost of compliance, reduces risk and provides analytical insight and value that creates competitive advantage.”
Organizations employing Micro Focus File Analysis can automatically take action on sensitive data content, current and legacy text files, and can be assured of complete content analysis, indexing, and reporting of file information and metadata in context.
By providing identity and access governance, complete data visibility mapping, and actionable analytics to optimize server efficiency and data quality, Micro Focus File Analysis Suite ensures data lifecycle management and data access governance for a unified approach to mitigating the risk associated with managing sensitive data.
New Data Discovery features and capabilities within the File Analysis Suite include:
- Governance and compliance: Deep data discovery, data classification, audit trails, and analysis to evaluate, detect and govern sensitive/high-value information and optimize associated IT systems and infrastructure.
- Sensitive data analytics: Automated tagging and metadata enrichment with pre-built sensitive data analytics and pattern matching in support of GDPR, CCPA, PIPEDA, POPI, and KVKK, as well as PCI and PHI.
- Risk mitigation: Develop custom policies and controls to monitor, remediate and proactively manage identities and data access across critical data repositories.
- Research workspaces: Create and collaborate around sensitive data within Workspaces to conduct deeper analysis, identify additional data sets, and review sensitive data.
- Data subject analysis: Leverage pre-built templates designed to assist in reporting and responding to consumer and Data Subject Access Requests.
- Data management: Remediation actions to be applied to selected files without the need for moving or copying the data from its source (e.g., collect, hold, delete).
- Search and eDiscovery: Analyze, tag and manage high-value assets (e.g. contracts, intellectual property, patents, etc.) and sensitive personal data (e.g. PI/PII, PCI, PHI, etc.) across unstructured data including files, email, business records, structured data, images and rich media.
Organizations that are able to gain a deeper understanding of where sensitive and high-value data reside are better equipped to protect and secure that data to remain in compliance. Siloed tools cannot address the data governance and security challenges many organizations must overcome.
As a complete end-to-end data lifecycle management solution, File Analysis Suite is just one of the ways Micro Focus helps customers bridge existing and emerging technologies in the race for digital transformation.
Assuming things is bad for your security posture. You are leaving yourself vulnerable when you assume what you have is what you need, or what you have is working as advertised. You assume you are protected, but are you really?
Don’t just trust – verify
What am I trying to get at? The new zero trust security model is promising as it looks to include many aspects of the security ecosystem. The underlying intent of the model is to remove the assumption by adding constant verification.
When I talk to customers about zero trust, I see a couple of things that are going to prevent them from a successful implementation.
To be successful, you need to realize three things. First, zero trust is a philosophy. Second, zero trust is a process. And finally, there isn’t a product, per se, that implements zero trust in your environment. Access or advanced authentication methods do not make zero trust, and neither does the biggest and baddest firewall. Remember, zero trust is about always asking whether an activity or action is “appropriate.” To determine that level of appropriateness, you need to consider the risk, the activity, and the identity and its associated attributes when deciding on access and authentication.
There are three ways a zero trust initiative can benefit an organization:
- Agility: zero trust can enable you to run and change your business as necessary – economically, efficiently and effectively.
- Security: zero trust can help you identify, protect, detect, and respond to threats and vulnerabilities present in the ecosystem.
- Visibility: zero trust enables you to manage, optimize and innovate the value chain, meaning you can see (single pane of glass) what you need to manage.
These elements are a result of the scope zero trust should cover. The underlying pieces that zero trust covers (touches) ultimately enhance an organization’s agility, security and visibility.
A properly implemented zero trust infrastructure builds identity into the foundation because identity contains the relationships and authorization attributes needed to validate activities. It also considers applications because applications execute business and day-to-day processes.
This leads me to data. As you know, data is the end game; it’s where the digital materials and outputs of applications and processes end up. And lastly, the infrastructure is the bridge between the physical and the digital, where consistently validating activities is critical to closing and limiting vulnerabilities. Without all of these connected and working well, a business will struggle to operate, let alone adapt to the needs of today’s frantically changing security world. It’s the reason WHY you shouldn’t assume anything.
Taking an integrative approach
Where organizations typically fall down is in trying to bridge the old and the new seamlessly – “big bang” approaches are incredibly high risk, and thus a poor investment choice. This is where I see the most significant number of assumptions.
Many organizations assume that a little duct tape can go a long way, but without a common “language” for IT components to talk to each other, those stopgaps become severe constraints. This has become more important now that integrative functions such as single sign-on (agility), behavioral analytics (security) and data mining (analytics) are so relevant.
Without getting into product-level specifics, you need to consider a couple of things when evaluating a zero trust initiative. First, there isn’t a single product that solves this; it takes a cohesive approach. Second, please don’t assume that what you have is all you need. I’ll leave you with this to consider when thinking about your current security posture: if you assume that what got you here will get you there, I hate to tell you, but it won’t.
Zero trust is a comprehensive security framework that requires everyone—and every service account—to authenticate identity before entering the corporate network. Every app and every device, as well as all the data they contain, must also be verified for each session.
Considering the multitude of people, devices, and apps it takes to make today’s businesses hum, you might think zero trust requires extensive management.
And you would be right. But what makes this Herculean undertaking not only possible but easy to manage is the next evolution, which I like to refer to as adaptive trust.
Making sense of big data
Organizations have been collecting data for years, many collect so much that they don’t know what to do with it. Analyzing behavioral data through the lens of artificial intelligence enables companies to put it to good use.
Adaptive trust begins by collecting data across the enterprise about user activities – who does what and when, and which apps and data they use to accomplish their tasks. Then algorithms are trained on the information to discern typical patterns, creating alerts when an activity is outside of what has been established as a normal baseline.
For example, data patterns may show that an employee uses their laptop in Chicago during business hours. But one day they log in from Kyiv at 1 a.m. Noticing the anomaly, the adaptive system follows a pre-set company rule, requiring the employee to do a facial recognition scan. It turns out the employee is indeed at a business meeting in Kyiv, so they meet the criteria and continue to work without further disruption.
Other companies may have different pre-set rules, perhaps requesting verification of the user’s status from their manager or alerting the security team and shutting off access until the situation is sorted out. The point is, the adaptive trust system recognizes anomalies and takes action in accordance with company policy—with little or no human intervention involved.
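The flow just described can be sketched in a few lines. The users, rules, and thresholds below are purely hypothetical (not any vendor’s product): learn a per-user baseline, flag deviations from it, and apply whatever pre-set policy the company has chosen, such as step-up authentication.

```python
# Minimal sketch of adaptive trust, with a hypothetical hard-coded baseline.
# In practice the baseline would be learned from behavioral data over time.

BASELINE = {
    "alice": {"locations": {"Chicago"}, "hours": range(8, 18)},  # 8 a.m.-5 p.m.
}

def evaluate_login(user, location, hour):
    profile = BASELINE.get(user)
    if profile is None:
        return "deny"  # unknown identity: nothing is trusted by default
    anomalies = []
    if location not in profile["locations"]:
        anomalies.append("new_location")
    if hour not in profile["hours"]:
        anomalies.append("unusual_hour")
    if not anomalies:
        return "allow"
    # Pre-set company rule: any anomaly triggers step-up authentication
    # (e.g., a facial recognition scan) rather than an outright block.
    return "step_up_auth"

print(evaluate_login("alice", "Chicago", 10))  # allow
print(evaluate_login("alice", "Kyiv", 1))      # step_up_auth
```

A different organization could swap the final return for "alert_security_team" or "deny" without touching the baseline logic, which is exactly the policy flexibility described above.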
Harnessing machine power
Automation provides a critical advantage in today’s fast-moving IT world, where companies struggle to find workers with the skills they need. Eighty-one percent of North American IT departments are experiencing a skills gap, a study by IT company Global Knowledge found. And every year, the gap gets wider.
As threats grow more sophisticated and cloud-based apps expand the attack surface—often offering scant protection—the demand for cybersecurity skills is particularly acute.
By leveraging AI and machine learning algorithms to discover and respond to security threats, companies can fill the cybersecurity skills gap without hiring an army of highly skilled, hard-to-find human experts.
An automated, AI-based adaptive trust system can scan millions of data points at a time, and it doesn’t sleep, get tired, or charge overtime. It notices not only that the above employee works from 8 a.m. to 5 p.m. in Chicago, but that they open an app every day around 10 a.m. and download about the same amount of information when they use it.
Biometric authentication factors add even more to the knowledge base, recognizing voice, fingerprints, and device characteristics. If any of the ID or work pattern metrics look abnormal, an alert is triggered in accordance with the company’s security policy.
Adaptive trust doesn’t confine itself to people – it can monitor apps, devices, and data, too. By tracking patterns of data transfers between applications, it creates user profiles that can help stop a breach.
If a hacker is engaged in a spoofing campaign – redirecting users to a scam website – the system immediately spots a difference in the metadata that is generated and alerts the security team to the problem.
If an attacker inserts malware into a site to harvest personal data during online transactions, the system notices a slight delay after users click “Submit” – a subtle change human workers likely wouldn’t catch, even if they had time to monitor for it.
Whether it’s analyzing human behavior or mechanical processes, an adaptive AI system finds problems faster, stopping breaches in their tracks or limiting the harm they can cause. Organizations that don’t have a security system incorporating AI, analytics, and automated incident response experience data breach costs 95 percent higher than those that do, according to the 2019 Ponemon Institute Cost of a Data Breach study.
In addition to saving organizations time and money and preventing critical data loss, adaptive trust allows employees to be more productive. Once it understands their work habits, it doesn’t have to bug them as much for additional authorizations. The more it learns, the smoother the process becomes.
As more people, apps, and devices connect to the enterprise, outpacing IT’s ability to keep up, organizations need to look beyond traditional security platforms. For obtaining optimal protection, minimal intrusion, and maximum efficiency, the best solution is adaptive trust.
As the Zero Trust approach to cybersecurity gains traction in the enterprise world, many people have come to recognize the term without fully understanding its meaning.
One common misconception: Zero Trust is all about access controls and additional authentication, such as multi-factor authentication.
While these two things help organizations get to a level of Zero Trust, there is more to it: a Zero Trust approach is really an organization-wide architecture. Things aren’t always as they seem, and access controls by themselves are meaningless without a comprehensive, centrally managed infrastructure to back them up.
Let’s consider this – if an employee of an organization has their laptop stolen and their account becomes compromised, access controls on the device are the only protective measure the organization has. Whoever is impersonating the employee can now access the infrastructure and anything the identity tied to that account had access to.
Zero Trust: A centralized approach to cybersecurity
Organizations can avoid problems like this by managing and enforcing policies for all identities, devices, and applications centrally and setting automated rules to require additional authentication as needed. With Zero Trust, every activity related to the enterprise must be authenticated and authorized, whether it’s undertaken by a person, a device, or a line of code that drives an internal process.
If a laptop is registered, the company can still require a software token or a fingerprint scan when someone uses it to access sensitive financial information. If the user wants to change or add data, it may be a good idea to add another authentication factor, or to monitor this activity in real time – especially if making changes is not something the person ordinarily does.
The same is true when someone who routinely uses just subsets of customer information tries to download the entire customer database, or if anyone tries to copy product development specifications.
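The conditional-access idea above can be illustrated with a toy policy rule: the same identity gets different treatment depending on how far an action departs from its routine. The function, threshold, and outcome names here are illustrative assumptions, not a real product’s API.

```python
# Hypothetical conditional-access rule: routine reads pass, bulk exports
# require an additional authentication factor and real-time monitoring.

ROUTINE_ROW_LIMIT = 1000  # assumed typical size of the subsets this user works with

def authorize(action, rows_requested, extra_factor_ok=False):
    if action == "read" and rows_requested <= ROUTINE_ROW_LIMIT:
        return "allow"                  # matches the user's normal pattern
    if extra_factor_ok:
        return "allow_with_monitoring"  # step-up passed; watch the session live
    return "require_additional_auth"    # out-of-pattern bulk action: challenge first

print(authorize("read", 200))                            # allow
print(authorize("read", 500_000))                        # require_additional_auth
print(authorize("read", 500_000, extra_factor_ok=True))  # allow_with_monitoring
```

In a real deployment such rules would be managed centrally and evaluated for every identity, device, and application, which is the point of the centralized approach described above.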
Visibility and control
In today’s world, where devices and applications are expanding rapidly and people often change roles, eliminating every potential security gap is a quixotic ideal. The Zero Trust principle acknowledges that vulnerabilities will always exist, and posits that the best way of dealing with them is to provide visibility into activity across the enterprise ecosystem.
If an event seems out-of-place, an automated alarm is triggered. That may mean alerting a manager or shutting off someone’s access while the security team investigates. By understanding context and having the ability to intervene immediately, organizations can close inevitable gaps as they arise, preventing them from evolving into security breaches.
Perhaps you’re thinking additional authorization measures will frustrate your employees, or that event logs a mile long will drive your security team crazy. But when Zero Trust is managed properly, the system recognizes normal employee activity and becomes less intrusive, allowing you to offer workers convenient features like single sign-on and provide them a range of choices for authorization.
Zero Trust’s contextual awareness also helps organize event logs, prioritizing real threats instead of forcing security teams to slog through endless lists of trivialities and false alarms.
The key aspect of Zero Trust is the breadth of its scope. It covers the entire organization, including:
- People: Everyone who interacts with the organization—including vendors, contractors, and IT service accounts—is given an identity and conditional access rights. Conditional, because as we have seen, legitimate access may be used for nefarious purposes, so context and activity must always be considered. If an action seems out of line, additional authorization or monitoring is activated.
- Devices: All endpoints are included, with changes and updates made as they occur to avoid accumulating security gaps.
- Applications: Today’s enterprises operate in a multi-cloud environment, using a host of internal and external apps, many of which interconnect or connect to other outside apps. Zero Trust provides visibility into the dependencies within and among all applications and databases and uses automation to spot irregularities no human could ever keep up with. Enterprise security rules are enforced at all times, even if the apps themselves lack adequate protection. In this way, Zero Trust removes the burden of compliance from employees, devices, and applications and places it on the central automated system.
- Data: With Zero Trust, almost all enterprise data is encrypted. If it ever ends up in the wrong hands, the unauthorized party will not be able to decipher it, even if the user’s access credentials are compromised.
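The conditional-access idea running through the list above can be sketched as a policy check that weighs identity, device health, and context together before granting access. The rules below are an illustrative simplification, not any vendor's actual policy engine:

```python
def access_decision(identity_known: bool, device_compliant: bool,
                    unusual_context: bool) -> str:
    """Zero Trust-style decision: never trust by default, always verify.

    Returns "deny", "step-up" (require additional authorization or
    monitoring), or "allow". The rules are illustrative assumptions.
    """
    if not identity_known or not device_compliant:
        return "deny"      # no implicit trust for unknown actors or devices
    if unusual_context:
        return "step-up"   # legitimate credentials, suspicious activity
    return "allow"         # verified identity, healthy device, normal context
```

Note that a verified user on a compliant device still gets stepped up to extra authorization when the context looks off — this is the "conditional" part of conditional access rights.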
Though we have barely scratched the surface of Zero Trust here, it should be clear that it is a robust, comprehensive, and responsive security architecture extending well beyond access controls. It can be viewed as the evolution of the least privilege model. While Zero Trust is strong enough to keep bad actors out, it is also flexible enough to accommodate user preferences and incorporate new people, devices, applications, and data as they flow into and out of the enterprise.
Masergy Shadow IT Discovery: Automatically identify unauthorized SaaS applications
Masergy Shadow IT Discovery immediately scans and identifies all applications, providing clients visibility through the SD-WAN management portal. Until now, IT departments have had to rely on a variety of endpoint security solutions and guesswork to access this information. The time savings and decreased threat exposure help IT organizations strengthen their security posture and keep up with the blind spots created by unsanctioned usage.
STEALTHbits StealthINTERCEPT 7.0 strengthens enterprise passwords and AD security
The latest enhancements delivered in StealthINTERCEPT 7.0 aim to provide organizations with advanced capabilities to thwart attacks against AD, along with progressive password policy and complexity improvements that boost security without degrading the user and administrator experience. The solution can now detect successful and failed Kerberos pre-authentication events, giving security analysts visibility into nefarious activities.
Micro Focus AD Bridge 2.0: Extending security policies and access controls to cloud-based Linux
Micro Focus AD Bridge 2.0 offers IT administrators the ability to extend Active Directory (AD) controls from on-premises resources, including Windows and Linux devices, to the cloud – a solution not previously offered in the marketplace. Organizations can leverage existing infrastructure for authentication, security, and policy to simplify the migration of on-premises Linux Active Directory resources to the cloud.
DataVisor dEdge: Uncover known and unknown attacks early
DataVisor dEdge is an anti-fraud solution that detects malicious devices in real time, empowering organizations to uncover known and unknown attacks early and take action with confidence. dEdge provides complete visibility into digital attacks, generating unique device IDs and accurate fraud scores – no matter how fraudsters manipulate devices.
Micro Focus released Micro Focus AD Bridge 2.0, offering IT administrators the ability to extend Active Directory (AD) controls from on-premises resources, including Windows and Linux devices, to the cloud – a solution not previously offered in the marketplace.
With AD Bridge 2.0, organizations can leverage existing infrastructure for authentication, security, and policy to simplify the migration of on-premises Linux Active Directory resources to the cloud, resulting in fully secured and managed Linux virtual machines in the cloud.
“We have optimized Azure to fully support Linux virtual machines because we recognize that Microsoft users want the freedom to leverage a variety of operating systems in the cloud,” said Boris Baryshnikov, Principal Program Manager, Microsoft.
“Migrating on-premises Linux to Azure provides customers many benefits such as increased resiliency, modernized workplace management, and access to third party services that make Azure a great solution.”
It is estimated that 95% of enterprises use Active Directory to manage security and configuration for their Windows environment. The newest version of Micro Focus’ hybrid policy management solution allows organizations to take advantage of AD configurations and security controls and apply them to cloud-based resources.
The patented technology takes the standard Group Policy Objects (GPOs) already in use by many businesses, and extends them beyond the walls of the corporate network to cover cloud-based resources that are potentially unmanaged today.
“We have customers that are currently managing thousands of GPOs for their on-premises systems and have invested significant time and money in fine tuning and deploying policy to protect their Windows assets,” said Nick Nikols, VP of Strategy at Micro Focus.
“The release of AD Bridge 2.0 helps them achieve that and more by providing even more control over resources such as on-premises and cloud-based Linux.”
As Linux continues its rise toward becoming a dominant cloud deployment platform, it’s critical for organizations adopting cloud-based solutions like Azure and Office 365 to apply stricter policy-level controls to cloud-based Linux and Windows systems, providing consistency and peace of mind while also delivering simplified compliance.
Micro Focus AD Bridge 2.0 joins the growing portfolio of Identity and Access Management solutions offered by Micro Focus and introduces an intuitive web-based administrative console, which includes the ability to view agent versions, applied policies, and other key features, including:
- Linux cloud agent – extends on-premises AD management capabilities to cloud-based Linux resources, improving security and reducing the number of unmanaged identities.
- Web-based console – provides an easy-to-use web console for administrators to manage devices and policies across the enterprise.
- Device management – identifies and manages domain-joined and Linux-joined (both on premises and cloud) devices to improve security and provide better visibility into the AD Bridge infrastructure.
- GPEdit additions – extends the policy management capabilities of AD Bridge 1.0, including Sudoers (controlled access to privileged accounts and commands), real-time file monitoring, UID/GID mapping extensions, and command execution limitations and auditing.
“Efficiently managing security and configuration policies on cloud-based Linux resources is a challenge that the marketplace hasn’t been able to effectively solve until now,” said Danny Kim, CTO at Full Armor.
“Partnering with Micro Focus on AD Bridge 2.0 provides a real solution that unifies Windows and Linux resources and centrally manages them from Active Directory.”