Organizations plan to use AI and ML to tackle unknown attacks faster

Wipro has published a report that provides fresh insights on how AI will be leveraged as part of defender strategies as more organizations lock horns with sophisticated cyberattacks and work to become more resilient.


Organizations need to tackle unknown attacks

R&D has increased, with 49% of worldwide cybersecurity-related patents filed in the last four years focused on AI and ML applications. Nearly half of organizations are expanding cognitive detection capabilities in their Security Operations Center (SOC) to tackle unknown attacks.

The report also illustrates a paradigm shift towards cyber resilience amid the rise in global remote work. It considers the impact of the COVID-19 pandemic on the global cybersecurity landscape and provides a path for organizations to adapt to this new normal.

The report draws on four months of research with global participation from 194 organizations and 21 partner academic, institutional and technology organizations.

Global macro trends in cybersecurity

  • Nation state attacks target private sector: 86% of all nation-state attacks fall under espionage category, and 46% of them are targeted towards private companies.
  • Evolving threat patterns have emerged in the consumer and retail sectors: 47% of suspicious social media profiles and domains were detected active in 2019 in these sectors.

Cyber trends sparked by the global pandemic

  • Cyber hygiene proved difficult during remote work enablement: 70% of the organizations faced challenges in maintaining endpoint cyber hygiene and 57% in mitigating VPN and VDI risks.
  • Emerging post-COVID cybersecurity priorities: 87% of the surveyed organizations are keen on implementing zero trust architecture and 87% are planning to scale up secure cloud migration.

Micro trends: An inside-out enterprise view

  • Low confidence in cyber resilience: 59% of the organizations understand their cyber risks but only 23% of them are highly confident about preventing cyberattacks.
  • Strong cybersecurity spend due to board oversight & regulations: 14% of organizations have a security budget of more than 12% of their overall IT budgets.

Micro trends: Best cyber practices to emulate

  • Laying the foundation for a cognitive SOC: 49% of organizations are adding cognitive detection capabilities to their SOC to tackle unknown attacks.
  • Concerns about OT infrastructure attacks increasing: 65% of organizations are performing log monitoring of Operational Technology (OT) and IoT devices as a control to mitigate increased OT risks.

Meso trends: An overview on collaboration

  • Fighting cyber-attacks demands stronger collaboration: 57% of organizations are willing to share only IoCs and 64% consider reputational risks to be a barrier to information sharing.
  • Cyber-attack simulation exercises serve as a strong wakeup call: 60% participate in cyber simulation exercises coordinated by industry regulators, CERTs and third-party service providers, and 79% of organizations have a dedicated cyber insurance policy in place.

Future of cybersecurity

  • 5G security is the emerging area for patent filing: 7% of the worldwide patents filed in the cyber domain in the last four years have been related to 5G security.

Vertical insights by industry

  • Banking, financial services & insurance: 70% of financial services enterprises said that new regulations are fuelling an increase in security budgets, with 54% attributing higher budgets to board intervention.
  • Communications: 71% of organizations consider cloud-hosting risk as a top risk.
  • Consumer: 86% of consumer businesses said email phishing is a top risk and 75% of enterprises said a bad cyber event would lead to damaged brand reputation in the marketplace.
  • Healthcare & life sciences: 83% of healthcare organizations have highlighted maintaining endpoint cyber hygiene as a challenge, and 71% have highlighted that breaches reported by peers have led to increased security budget allocation.
  • Energy, natural resources and utilities: 71% of organizations reported that OT/IT integration would bring new risks.
  • Manufacturing: 58% said that they are not confident about preventing risks from supply chain providers.

Bhanumurthy B.M, President and Chief Operating Officer, Wipro said, “There is a significant shift in global trends like rapid innovation to mitigate evolving threats, strict data privacy regulations and rising concern about breaches.

“Security is ever changing and the report brings more focus, enablement, and accountability on executive management to stay updated. Our research not only focuses on what happened during the pandemic but also provides foresight toward future cyber strategies in a post-COVID world.”

ML tool identifies domains created to promote fake news

Academics at UCL and other institutions have collaborated to develop a machine learning tool that identifies new domains created to promote false information so that they can be stopped before fake news can be spread through social media and online channels.


To counter the proliferation of false information it is important to move fast, before the creators of the information begin to post and broadcast false information across multiple channels.

How does it work?

Anil R. Doshi, Assistant Professor for the UCL School of Management, and his fellow academics set out to develop an early detection system to highlight domains that were most likely to be bad actors. Details contained in the registration information, for example, whether the registering party is kept private, are used to identify the sites.

Doshi commented: “Many models that predict false information use the content of articles or behaviours on social media channels to make their predictions. By the time that data is available, it may be too late. These producers are nimble and we need a way to identify them early.

“By using domain registration data, we can provide an early warning system using data that is arguably difficult for the actors to manipulate. Actors who produce false information tend to prefer remaining hidden and we use that in our model.”

By applying a machine-learning model to domain registration data, the tool was able to correctly identify 92 percent of the false information domains and 96.2 percent of the non-false information domains set up in relation to the 2016 US election before they started operations.
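
To make the approach concrete, here is a minimal sketch of how a classifier along these lines could be trained on registration-time features. The feature names, values and labels are invented for illustration; they are not the researchers' actual model or data.

```python
# Hypothetical sketch: flag newly registered domains using registration-time
# features. Features, values and labels below are made up for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Columns: [registration_private, registrar_risk_score,
#           days_registered_before_election, name_server_count]
X_train = np.array([
    [1, 0.9,  30, 1],
    [1, 0.8,  15, 2],
    [1, 0.7,  45, 1],
    [0, 0.2, 400, 4],
    [0, 0.1, 350, 3],
    [0, 0.3, 500, 4],
])
y_train = [1, 1, 1, 0, 0, 0]   # 1 = likely false-information domain

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)

# A new domain with private registration, created shortly before the election.
new_domain = np.array([[1, 0.85, 20, 1]])
print(clf.predict(new_domain))        # -> [1]
print(clf.predict_proba(new_domain))  # class probabilities
```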

Why should it be used?

The researchers propose that their tool be used to help regulators, platforms and policy makers apply an escalating process to flagged domains: increase monitoring, issue warnings or impose sanctions, and ultimately decide whether a domain should be shut down.

The academics behind the research also call for social media companies to invest more effort and money into addressing this problem which is largely facilitated by their platforms.

Doshi continued: “Fake news promoted through social media is common in elections, and it continues to proliferate in spite of the somewhat limited efforts of social media companies and governments to stem the tide and defend against it. Our concern is that this is just the start of the journey.

“We need to recognise that it is only a matter of time before these tools are redeployed on a more widespread basis to target companies, indeed there is evidence of this already happening.

“Social media companies and regulators need to be more engaged in dealing with this very real issue and corporates need to have a plan in place to quickly identify when they become the target of this type of campaign.”

The research is ongoing in recognition that the environment is constantly evolving and while the tool works well now, the bad actors will respond to it. This underscores the need for constant and ongoing innovation and research in this area.

A new threat matrix outlines attacks against machine learning systems

A report published last year has noted that most attacks against artificial intelligence (AI) systems are focused on manipulating them (e.g., influencing recommendation systems to favor specific content), but that new attacks using machine learning (ML) are within attackers’ capabilities.


Microsoft now says that attacks on machine learning (ML) systems are on the uptick and MITRE notes that, in the last three years, “major companies such as Google, Amazon, Microsoft, and Tesla, have had their ML systems tricked, evaded, or misled.” At the same time, most businesses don’t have the right tools in place to secure their ML systems and are looking for guidance.

Experts at Microsoft, MITRE, IBM, NVIDIA, the University of Toronto, the Berryville Institute of Machine Learning and several other companies and educational organizations have therefore decided to create the first version of the Adversarial ML Threat Matrix, to help security analysts detect and respond to this new type of threat.

What is machine learning (ML)?

Machine learning is a subset of artificial intelligence (AI). It is based on computer algorithms that ingest “training” data, “learn” from it, and ultimately deliver predictions, make decisions, or classify things accurately.

Machine learning algorithms are used for tasks like identifying spam, detecting new threats, predicting user preferences, performing medical diagnoses, and so on.
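
As a toy illustration of the spam-filtering use case, here is a minimal sketch of that ingest-train-predict loop using scikit-learn; the training messages are made up.

```python
# Toy spam classifier: a sketch of the "ingest training data, learn, predict"
# loop described above. The training messages are invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = [
    "Win a free prize now, click here",             # spam
    "Limited offer, claim your reward today",       # spam
    "Meeting moved to 3pm, see agenda attached",    # ham
    "Can you review the quarterly report draft?",   # ham
]
labels = ["spam", "spam", "ham", "ham"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(messages, labels)   # "learn" from the training data

print(model.predict(["Claim your free reward now"]))     # -> ['spam']
print(model.predict(["Agenda for tomorrow's meeting"]))  # -> ['ham']
```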

Security should be built in

Mikel Rodriguez, a machine learning researcher at MITRE who also oversees MITRE’s Decision Science research programs, says that we’re now at the same stage with AI as we were with the internet in the late 1980s, when people were just trying to make the internet work and when they weren’t thinking about building in security.

We can learn from that mistake, though, and that’s one of the reasons the Adversarial ML Threat Matrix has been created.

“With this threat matrix, security analysts will be able to work with threat models that are grounded in real-world incidents that emulate adversary behavior with machine learning,” he noted.

Also, the matrix will help them think holistically and spur better communication and collaboration across organizations by giving a common language or taxonomy of the different vulnerabilities, he says.

The Adversarial ML Threat Matrix

“Unlike traditional cybersecurity vulnerabilities that are tied to specific software and hardware systems, adversarial ML vulnerabilities are enabled by inherent limitations underlying ML algorithms. Data can be weaponized in new ways which requires an extension of how we model cyber adversary behavior, to reflect emerging threat vectors and the rapidly evolving adversarial machine learning attack lifecycle,” MITRE noted.

The matrix has been modeled on the MITRE ATT&CK framework.
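
For a concrete flavor of the kind of technique such a matrix catalogs, here is a minimal, self-contained sketch of an evasion attack: an FGSM-style perturbation crafted against a toy logistic-regression classifier. The model and data are synthetic, and the example is illustrative rather than drawn from the matrix itself.

```python
# Sketch of an evasion attack: nudge an input in the direction of the loss
# gradient so a trained classifier flips its decision. Toy model and data.
import numpy as np

rng = np.random.default_rng(0)

# Train a tiny logistic-regression "victim" on synthetic two-class data.
X = np.vstack([rng.normal(-1.0, 0.5, (50, 2)), rng.normal(1.0, 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
w, b = np.zeros(2), 0.0
for _ in range(500):                              # plain gradient descent
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.1 * (X.T @ (p - y)) / len(y)
    b -= 0.1 * float(np.mean(p - y))

# Perturb a typical class-0 input along the sign of the input gradient.
x = np.array([-1.0, -1.0])
p = 1.0 / (1.0 + np.exp(-(x @ w + b)))
grad_x = (p - 0.0) * w                            # d(loss)/dx for label 0
x_adv = x + 1.5 * np.sign(grad_x)                 # epsilon exaggerated for the toy setup

print("original prediction:   ", int(x @ w + b > 0))      # 0
print("adversarial prediction:", int(x_adv @ w + b > 0))  # should flip to 1
```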


The group has demonstrated how previous attacks – whether by researchers, red teams or online mobs – can be mapped to the matrix.

They also stressed that it’s going to be routinely updated as feedback from the security and adversarial machine learning community is received. They encourage contributors to point out new techniques, propose best (defense) practices, and share examples of successful attacks on machine learning (ML) systems.

“We are especially excited for new case-studies! We look forward to contributions from both industry and academic researchers,” MITRE concluded.

SecOps teams turn to next-gen automation tools to address security gaps

SOCs across the globe are most concerned with advanced threat detection and are increasingly looking to next-gen automation tools like AI and ML technologies to proactively safeguard the enterprise, Micro Focus reveals.


Growing deployment of next-gen tools and capabilities

The report’s findings show that over 93 percent of respondents employ AI and ML technologies with the leading goal of improving advanced threat detection capabilities, and that over 92 percent of respondents expect to use or acquire some form of automation tool within the next 12 months.

These findings indicate that as SOCs continue to mature, they will deploy next-gen tools and capabilities at an unprecedented rate to address gaps in security.

“The odds are stacked against today’s SOCs: more data, more sophisticated attacks, and larger surface areas to monitor. However, when properly implemented, AI technologies such as unsupervised machine learning, are helping to fuel next-generation security operations, as evidenced by this year’s report,” said Stephan Jou, CTO Interset at Micro Focus.

“We’re observing more and more enterprises discovering that AI and ML can be remarkably effective and augment advanced threat detection and response capabilities, thereby accelerating the ability of SecOps teams to better protect the enterprise.”

Organizations relying on the MITRE ATT&CK framework

As the volume of threats rises, the report finds that 90 percent of organizations are relying on the MITRE ATT&CK framework as a tool for understanding attack techniques, and that the most common reason for relying on the knowledge base of adversary tactics is for detecting advanced threats.

Further, the scale of technology needed to secure today’s digital assets means SOC teams are relying more heavily on tools to effectively do their jobs.

With so many responsibilities, the report found that SecOps teams are using numerous tools to help secure critical information, with organizations widely using 11 common types of security operations tools and with each tool expected to exceed 80% adoption in 2021.

Key observations

  • COVID-19: During the pandemic, security operations teams have faced many challenges. The biggest has been the increased volume of cyberthreats and security incidents (45 percent globally), followed by higher risks due to workforce usage of unmanaged devices (40 percent globally).
  • Most severe SOC challenges: Approximately 1 in 3 respondents cite the two most severe challenges for the SOC team as prioritizing security incidents and monitoring security across a growing attack surface.
  • Cloud journeys: Over 96 percent of organizations use the cloud for IT security operations, and on average nearly two-thirds of their IT security operations software and services are already deployed in the cloud.

Split-Second Phantom Images Fool Autopilots

Researchers are tricking autopilots by inserting split-second images into roadside billboards.

Researchers at Israel’s Ben Gurion University of the Negev … previously revealed that they could use split-second light projections on roads to successfully trick Tesla’s driver-assistance systems into automatically stopping without warning when its camera sees spoofed images of road signs or pedestrians. In new research, they’ve found they can pull off the same trick with just a few frames of a road sign injected on a billboard’s video. And they warn that if hackers hijacked an internet-connected billboard to carry out the trick, it could be used to cause traffic jams or even road accidents while leaving little evidence behind.

[…]

In this latest set of experiments, the researchers injected frames of a phantom stop sign on digital billboards, simulating what they describe as a scenario in which someone hacked into a roadside billboard to alter its video. They also upgraded to Tesla’s most recent version of Autopilot known as HW3. They found that they could again trick a Tesla or cause the same Mobileye device to give the driver mistaken alerts with just a few frames of altered video.

The researchers found that an image that appeared for 0.42 seconds would reliably trick the Tesla, while one that appeared for just an eighth of a second would fool the Mobileye device. They also experimented with finding spots in a video frame that would attract the least notice from a human eye, going so far as to develop their own algorithm for identifying key blocks of pixels in an image so that a half-second phantom road sign could be slipped into the “uninteresting” portions.

The paper:

Abstract: In this paper, we investigate “split-second phantom attacks,” a scientific gap that causes two commercial advanced driver-assistance systems (ADASs), Tesla Model X (HW 2.5 and HW 3) and Mobileye 630, to treat a depthless object that appears for a few milliseconds as a real obstacle/object. We discuss the challenge that split-second phantom attacks create for ADASs. We demonstrate how attackers can apply split-second phantom attacks remotely by embedding phantom road signs into an advertisement presented on a digital billboard which causes Tesla’s autopilot to suddenly stop the car in the middle of a road and Mobileye 630 to issue false notifications. We also demonstrate how attackers can use a projector in order to cause Tesla’s autopilot to apply the brakes in response to a phantom of a pedestrian that was projected on the road and Mobileye 630 to issue false notifications in response to a projected road sign. To counter this threat, we propose a countermeasure which can determine whether a detected object is a phantom or real using just the camera sensor. The countermeasure (GhostBusters) uses a “committee of experts” approach and combines the results obtained from four lightweight deep convolutional neural networks that assess the authenticity of an object based on the object’s light, context, surface, and depth. We demonstrate our countermeasure’s effectiveness (it obtains a TPR of 0.994 with an FPR of zero) and test its robustness to adversarial machine learning attacks.
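
As an illustration of the committee-of-experts idea described in the abstract, the sketch below combines four per-aspect authenticity scores into one decision. The scores and the simple averaging combiner are stand-ins, not the published GhostBusters models.

```python
# Illustrative sketch: combine four per-aspect authenticity scores
# (light, context, surface, depth) into a single phantom/real decision.
# In the paper each score comes from a lightweight CNN; here they are
# stand-in numbers so the combination step itself is runnable.
import numpy as np

def committee_decision(scores, weights=None, threshold=0.5):
    """scores: dict of aspect -> probability the object is real (0..1)."""
    aspects = ["light", "context", "surface", "depth"]
    s = np.array([scores[a] for a in aspects])
    w = np.ones(len(aspects)) / len(aspects) if weights is None else np.asarray(weights)
    combined = float(np.dot(w, s))
    return ("real" if combined >= threshold else "phantom", combined)

# A projected "phantom" stop sign might look plausible in isolation (surface)
# but fail the depth and context checks.
print(committee_decision({"light": 0.6, "context": 0.2,
                          "surface": 0.7, "depth": 0.1}))
# -> ('phantom', 0.4)
```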


Most cybersecurity pros believe automation will make their jobs easier

Despite 88% of cybersecurity professionals believing automation will make their jobs easier, younger staffers are more concerned than their veteran counterparts that the technology will replace their roles, according to research by Exabeam.


Overall, satisfaction levels continued a 3-year positive trend, with 96% of respondents indicating they are happy with their roles and responsibilities and 87% reportedly pleased with their salary and earnings. Additionally, there was improvement in gender diversity, with female respondents increasing from 9% in 2019 to 21% this year.

“The concern for automation among younger professionals in cybersecurity was surprising to us. In trying to understand this sentiment, we could partially attribute it to lack of on-the-job training using automation technology,” said Samantha Humphries, security strategist at Exabeam.

“As we noted earlier this year in our State of the SOC research, ambiguity around career path or lack of understanding about automation can have an impact on job security. It’s also possible that this is a symptom of the current economic climate or a general lack of experience navigating the workforce during a global recession.”

AI and ML: A threat to job security?

Of respondents under the age of 45, 53% agreed or strongly agreed that AI and ML are a threat to their job security. This is contrasted with just 25% of respondents 45 and over who feel the same, possibly indicating that subsets of security professionals in particular prefer to write rules and manually investigate.

Interestingly, when asked directly about automation software, 89% of respondents under 45 years old believed it would improve their jobs, yet 47% are still threatened by its use. This is again in contrast with the 45 and over demographic, where 80% believed automation would simplify their work, and only 22% felt threatened by its use.

Examining sentiment around automation by region, 47% of US respondents were concerned about job security when automation software is in use, as were respondents in Singapore (54%), Germany (42%), Australia (40%) and the UK (33%).

In the survey, which drew insights from professionals throughout the US, the UK, Australia, Canada, India and the Netherlands, only 10% overall believed that AI and automation were a threat to their jobs.

On the flip side, there were noticeable increases in job approval across the board, with an upward trend in satisfaction around role and responsibilities (96%), salary (87%) and work/life balance (77%).

Diversity showing positive signs of improvement

When asked what else they enjoyed about their jobs, respondents listed working in an environment with professional growth (15%) as well as opportunities to challenge oneself (21%) as top motivators.

53% reported jobs that are either stressful or very stressful, which is down from last year (62%). Interestingly, despite being among those that are generally threatened by automation software, 100% of respondents aged 18-24 reported feeling secure in their roles and were happiest with their salaries (93%).

Though the number of female respondents increased this year, it remains to be seen whether this will emerge as a trend. This year’s male respondents (78%) are down 13% from last year (91%).

In 2019, nearly 41% had been in the profession for 10 years or more. This year, a larger percentage (83%) have 10 years or less of experience, and 34% have been in the cybersecurity industry for five years or less. Additionally, one-third do not have formal cybersecurity degrees.

“There is evidence that automation and AI/ML are being embraced, but this year’s survey exposed fascinating generational differences when it comes to professional openness and using all available tools to do their jobs,” said Phil Routley, senior product marketing manager, APJ, Exabeam.

“And while gender diversity is showing positive signs of improvement, it’s clear we still have a very long way to go in breaking down barriers for female professionals in the security industry.”

In the era of AI, standards are falling behind

According to a recent study, only a minority of software developers are actually working in a software development company. This means that nowadays literally every company builds software in some form or another.


As a professional in the field of information security, it is your task to protect information, assets, and technologies. Obviously, the software built by or for your company that is collecting, transporting, storing, processing, and finally acting upon your company’s data, is of high interest. Secure development practices should be enforced early on and security must be tested during the software’s entire lifetime.

Within the (ISC)² common body of knowledge for CISSPs, software development security is listed as an individual domain. Several standards and practices covering security in the Software Development Lifecycle (SDLC) are available: ISO/IEC 27024:2011, ISO/IEC TR 15504, or NIST SP800-64 Revision 2, to name a few.

All of the above ask for continuous assessment and control of artifacts on the source-code level, especially regarding coding standards and Common Weakness Enumerations (CWE), but only briefly mention static application security testing (SAST) as a possible way to address these issues. In the search for possible concrete tools, NIST provides SP 500-268 v1.1 “Source Code Security Analysis Tool Function Specification Version 1.1”.

In May 2019, NIST withdrew the aforementioned SP800-64 Rev2. NIST SP 500-268 was published over nine years ago. This seems to be symptomatic for an underlying issue we see: the standards cannot keep up with the rapid pace of development and change in the field.

A good example is the dawn of the development language Rust, which addresses a major source of security issues presented by the classically used language C++ – namely memory management. Major players in the field such as Microsoft and Google saw great advantages and announced that they would focus future development on Rust. While the standards mention that some development languages are superior to others, neither the mechanisms used by Rust nor Rust itself is mentioned.

In the field of Static Code Analysis, the information in NIST SP 500-268 is not wrong, but the paper simply does not mention advances in the field.

Let us briefly discuss two aspects: First, the wide use of open source software gave us insight into a vast quantity of source code changes and the reasoning behind them (security, performance, style). On top of that, we have seen increasing capacities of CPU power to process this data, accompanied by algorithmic improvements. Nowadays, we have a large lake of training data available. To use our company as an example, in order to train our underlying model for C++ alone, we are scanning changes in over 200,000 open source projects with millions of files containing rich history.

Secondly, in the past decade, we’ve witnessed tremendous advances in machine learning. We see tools like GPT-3 and their applications in source code being discussed widely. Classically, static source code analysis was the domain of Symbolic AI—facts and rules applied to source code. The realm of source code is perfectly suited for this approach since software source code has a well-defined syntax and grammar. The downside is that these rules were developed by engineers, which limits the pace in which rules can be generated. The idea would be to automate the rule construction by using machine learning.

Recently, we see research in the field of machine learning being applied to source code. Again, let us use our company as an example: By using the vast amount of changes in open source, our system looks out for patterns connected to security. It presents possible rules to an engineer together with found cases in the training set—both known and fixed, as well as unknown.

Also, the system supports parameters in the rules. Possible values for these parameters are collected by the system automatically. As a practical example, taint analysis follows incoming data to its use inside of the application to make sure the data is sanitized before usage. The system automatically learns possible sources, sanitization, and sink functions.
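
A minimal sketch of the taint-analysis idea follows. The source, sanitizer and sink names are hypothetical examples of the kinds of functions a real analyzer would learn automatically.

```python
# Toy taint tracker: data from a "source" stays tainted until a "sanitizer"
# runs, and must not reach a "sink" while still tainted. Function names
# (read_request, escape_sql, run_query) are hypothetical examples.
class Tainted(str):
    """Untrusted data; concatenation keeps the taint."""
    def __add__(self, other):
        return Tainted(str(self) + str(other))
    def __radd__(self, other):
        return Tainted(str(other) + str(self))

def read_request(param):      # source: produces tainted data
    return Tainted(param)

def escape_sql(value):        # sanitizer: returns an ordinary, untainted str
    return str(value).replace("'", "''")

def run_query(query):         # sink: must never see tainted data
    if isinstance(query, Tainted):
        raise RuntimeError("taint violation: unsanitized input reached sink")
    print("executing:", query)

name = read_request("Robert'); DROP TABLE students;--")
run_query("SELECT * FROM users WHERE name = '" + escape_sql(name) + "'")  # fine
run_query("SELECT * FROM users WHERE name = '" + name + "'")  # raises RuntimeError
```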

Back to the NIST Special Papers: With the withdrawal of SP 800-64 Rev 2, users were pointed to NIST SP 800-160 Vol 1 for the time being until a new, updated white paper is published. This was at the end of May 2019. The nature of these papers is to only describe high-level best practices, list some examples, and stay rather vague in concrete implementation. Yet, the documents are the basis for reviews and audits. Given the importance of the field, it seems as if a major component is missing. It is also time to think about processes that would help us to keep up with the pace of technology.

Intelligent processes and tech increase enterprises’ competitiveness

Enterprises of the future will be built on a foundation of artificial intelligence (AI), analytics, machine learning, deep learning and automation, technologies that are central to solving business problems and driving innovation, Wipro finds.


Most businesses consider AI to be critical to improve operational efficiency, reduce employee time on manual tasks, and enhance the employee and customer experience.

The report examines the current landscape and shows the challenges and the driving factors for businesses to become truly intelligent enterprises. Wipro surveyed 300 respondents in UK and US across key industry sectors like financial services, healthcare, technology, manufacturing, retail and consumer goods.

The report highlights that while collecting data is critical, the ability to combine this with a host of technologies to leverage insights creates an intelligent enterprise. Organizations that fast-track adoption of intelligent processes and technologies stand to gain an immediate competitive advantage over their counterparts.

Key findings

  • While 80% of organizations recognize the importance of being intelligent, only 17% would classify their organizations as an Intelligent Enterprise.
  • 98% of those surveyed believe that being an Intelligent Enterprise yields benefits to organizations. The most important ones being improved customer experience, faster business decisions and increased organizational agility.
  • 91% of organizations feel there are data barriers towards being an Intelligent Enterprise, with security, quality and seamless integration being of utmost concern.
  • 95% of business leaders surveyed see AI as critical to being Intelligent Enterprises, yet, currently, only 17% can leverage AI across the entire organization.
  • 74% of organizations consider investment in technology as the most likely enabler for an Intelligent Enterprise; however, 42% of them think that this must be complemented with efforts to re-skill the workforce.

Jayant Prabhu, VP & Head – Data, Analytics & AI, Wipro said, “Organizations now need new capabilities to navigate the current challenges. The report amplifies the opportunity to gain a first-mover advantage to being Intelligent.

“The ability to take productive decisions depends on an organization’s ability to generate accurate, fast and actionable intelligence. Successful organizations are those that quickly adapt to the new technology landscape to transform into an Intelligent Enterprise.”

Are today’s organizations ready for the data age?

67% of business and IT managers expect the sheer quantity of data to grow nearly five times by 2025, a Splunk survey reveals.


The research shows that leaders see the significant opportunity in this explosion of data and believe data is extremely or very valuable to their organization in terms of: overall success (81%), innovation (75%) and cybersecurity (78%).

81% of survey respondents believe data to be very or highly valuable yet 57% fear that the volume of data is growing faster than their organizations’ ability to keep up.

“The data age is here. We can now quantify how data is taking center stage in industries around the world. As this new research demonstrates, organizations understand the value of data, but are overwhelmed by the task of adjusting to the many opportunities and threats this new reality presents,” said Doug Merritt, President and CEO, Splunk.

“There are boundless opportunities for organizations willing to quickly learn and adapt, embrace new technologies and harness the power of data.”

The data age has been accelerated by emerging technologies powered by, and contributing to, exponential data growth. Chief among these emerging technologies are Edge Computing, 5G networking, IoT, AI/ML, AR/VR and Blockchain.

It’s these very same technologies that 49% of those surveyed expect to use to harness the power of data, yet, averaged across the six technologies, just 42% feel they have a high level of understanding of them.

Data is valuable, and data anxiety is real

To thrive in this new age, every organization needs a complete view of its data — real-time insight, with the ability to take real-time action. But many organizations feel overwhelmed and unprepared. The study quantifies the emergence of a data age as well as the recognition that organizations have some work to do in order to use data effectively and be successful.

  • Data is extremely or very valuable to organizations in terms of: overall success (81%), innovation (75%) and cybersecurity (78%).
  • And yet, 66% of IT and business managers report that half or more of their organizations’ data is dark (untapped, unknown, unused) — a 10% increase over the previous year.
  • 57% say the volume of data is growing faster than their organizations’ ability to keep up.
  • 47% acknowledge their organizations will fall behind when faced with rapid data volume growth.

Some industries are more prepared than others

The study quantifies the emergence of a data age and the adoption of emerging technologies across industries, including:

  • Across industries, IoT has the most current users (but only 28%). 5G has the fewest and has the shortest implementation timeline at 2.6 years.
  • Confidence in understanding of 5G’s potential varies: 59% in France, 62% in China and only 24% in Japan.
  • For five of the six technologies, financial services leads in terms of current development of use cases. Retail comes second in most cases, though retailers lag notably in adoption of AI.
  • 62% of healthcare organizations say that half or more of their data is dark and that they struggle to manage and leverage data.
  • The public sector lags commercial organizations in adoption of emerging technologies.
  • More manufacturing leaders predict growth in data volume (78%) than in any other industry; 76% expect the value of data to continue to rise.

Some countries are more prepared than others

The study also found that countries seen as technology leaders, like the U.S. and China, are more likely to be optimistic about their ability to harness the opportunities of the data age.

  • 90% of business leaders from China expect the value of data to grow. They are by far the most optimistic about the impact of emerging technologies, and they are getting ready. 83% of Chinese organizations are prepared, or are preparing, for rapid data growth compared to just 47% across all regions.
  • U.S. leaders are the second most confident in their ability to prepare for rapid data growth, with 59% indicating that they are at least somewhat confident.
  • In France, 59% of respondents say that no one in their organization is having conversations about the impact of the data age. Meanwhile, in Japan 67% say their organization is struggling to stay up to date, compared to the global average of 58%.
  • U.K. managers report relatively low current usage of emerging technologies but are optimistic about plans to use them in the future. For example, just 19% of U.K. respondents say they are currently using AI/ML technologies, but 58% say they will use them in the near future.

Facing gender bias in facial recognition technology

In the 1960s, Woodrow W. Bledsoe created a secret program that manually identified points on a person’s face and compared the distances between these coordinates with other images.


Facial recognition technology has come a long way since then. The field has evolved quickly and software can now automatically process staggering amounts of facial data in real time, dramatically improving the results (and reliability) of matching across a variety of use cases.

Despite all of the advancements we’ve seen, many organizations still rely on the same algorithm used by Bledsoe’s database – known as “k-nearest neighbors” or k-NN. Since each face has multiple coordinates, a comparison of these distances over millions of facial images requires significant data processing. The k-NN algorithm simplifies this process and makes matching these points easier by considerably reducing the data set. But that’s only part of the equation. Facial recognition also involves finding the location of a feature on a face before evaluating it. This requires a different algorithm such as HOG (we’ll get to it later).
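
For illustration, here is a minimal sketch of the k-NN matching step, assuming each face has already been reduced to a fixed-length embedding vector; the enrolled vectors are random stand-ins.

```python
# k-NN face matching sketch: each face is a fixed-length vector of measured
# coordinates/embeddings; a probe is matched to the identity of its nearest
# enrolled neighbors. Enrollment vectors here are random stand-ins.
import numpy as np

rng = np.random.default_rng(1)
enrolled = {                        # identity -> one embedding per identity
    "alice": rng.normal(0, 1, 128),
    "bob":   rng.normal(0, 1, 128),
    "carol": rng.normal(0, 1, 128),
}

def knn_match(probe, gallery, k=1, max_distance=8.0):
    """Return the majority identity among the k nearest gallery vectors."""
    dists = sorted((np.linalg.norm(probe - emb), name)
                   for name, emb in gallery.items())
    nearest = dists[:k]
    if nearest[0][0] > max_distance:    # too far from everyone: no match
        return None
    names = [name for _, name in nearest]
    return max(set(names), key=names.count)

probe = enrolled["bob"] + rng.normal(0, 0.1, 128)   # noisy capture of Bob
print(knn_match(probe, enrolled))                   # -> 'bob'
```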

The problem

The algorithms used for facial recognition today rely heavily on machine learning (ML) models, which require significant training. Unfortunately, the training process can result in biases in these technologies. If the training doesn’t contain a representative sample of the population, ML will fail to correctly identify the missed population.

While this may not be a significant problem when matching faces for social media platforms, it can be far more damaging when the facial recognition software from Amazon, Google, Clearview AI and others is used by government agencies and law enforcement.

Previous studies on this topic found that facial recognition software suffers from racial biases, but overall, the research on bias has been thin. The consequences of such biases can be dire for both people and companies. Further complicating matters is the fact that even small changes to one’s face, hair or makeup can impact a model’s ability to accurately match faces. If not accounted for, this can create distinct challenges when trying to leverage facial recognition technology to identify women, who generally tend to use beauty and self-care products more than men.

Understanding sexism in facial recognition software

Just how bad are gender-based misidentifications? Our team at WatchGuard conducted some additional facial recognition research, looking solely at gender biases to find out. The results were eye-opening: the solutions we evaluated were misidentifying women 18% more often than men.

You can imagine the terrible consequences this type of bias could generate. For example, a smartphone relying on face recognition could block access, a police officer using facial recognition software could mistakenly identify an innocent bystander as a criminal, or a government agency might call in the wrong person for questioning based on a false match. The list goes on. The reality is that the culprit behind these issues is bias within model training that creates biases in the results.

Let’s explore how we uncovered these results.

Our team performed two separate tests – the first using Amazon Rekognition and the second using Dlib. Unfortunately, with Amazon Rekognition we were unable to unpack just how its ML modeling and algorithm work due to transparency issues (although we assume it’s similar to Dlib). Dlib is a different story, and uses local resources to identify faces provided to it. It comes pretrained to identify the location of a face and offers two face-location finders: HOG, a slower CPU-based algorithm, and CNN, a faster algorithm that makes use of the specialized processors found in graphics cards.

Both services provide match results with additional information. Besides the match found, a similarity score is given that shows how closely the face matches the known face. If the similarity threshold is set too low, a face that isn’t actually on file may be incorrectly matched. However, a face can have a low similarity score and still be a true match when the image doesn’t show the face clearly.

For the data set, we used a database of faces called Labeled Faces in the Wild, and we only investigated faces that matched another face in the database. This allowed us to test matching faces and similarity scores at the same time.
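
For reference, a pairwise comparison of the kind run in these tests might look like the sketch below, which uses the open-source face_recognition wrapper around Dlib with the HOG detector. The image paths are placeholders, not the actual test data.

```python
# Sketch: compare two face images with Dlib via the face_recognition wrapper,
# using the slower CPU-based HOG detector discussed above.
import face_recognition

known_img = face_recognition.load_image_file("person_a_1.jpg")  # placeholder paths
probe_img = face_recognition.load_image_file("person_a_2.jpg")

# Locate faces with the HOG model ("cnn" selects the GPU-friendly detector).
known_loc = face_recognition.face_locations(known_img, model="hog")
probe_loc = face_recognition.face_locations(probe_img, model="hog")

known_enc = face_recognition.face_encodings(known_img, known_loc)[0]
probe_enc = face_recognition.face_encodings(probe_img, probe_loc)[0]

# Lower distance means more similar; 0.6 is the library's default tolerance.
distance = face_recognition.face_distance([known_enc], probe_enc)[0]
print("distance:", distance, "match:", distance <= 0.6)
```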

Amazon Rekognition correctly identified all pictures we provided. However, when we looked more closely at the data, our team saw a wider distribution of similarities in female faces than in male faces. We saw more female faces with higher similarities than male faces and more female faces with lower similarities than male faces (this actually matches a recent study performed around the same time).

What does this mean? Essentially it means a female face not found in the database is more likely to produce a false match. Also, because of the lower similarity in female faces, our team was confident that we’d see more errors identifying female faces than male faces if given enough images.

Amazon Rekognition gave accurate results but lacked consistency and precision between male and female faces. Male faces on average were 99.06% similar, but female faces on average were 98.43% similar. This might not seem like a big variance, but the gap widened when we looked at the outliers – a standard deviation of 1.64 for males versus 2.83 for females. More female faces fall farther from the average than male faces, meaning a false match is far more likely for female faces than the 0.6% difference in averages would suggest, based on our data.

Dlib didn’t perform as well. On average, Dlib misidentified female faces more than male faces, leading to an average rate of 5% more misidentified females. When comparing faces using the slower HOG, the difference grew to 18%. Of interest, our team found that on average, female faces have higher similarity scores than male faces when using Dlib but, like Amazon Rekognition, also have a larger spectrum of similarity scores, leading to the lower accuracy we found.

Tackling facial recognition bias

Unfortunately, facial recognition software providers struggle to be transparent when it comes to the efficacy of their solutions. For example, our team didn’t find any place in Amazon’s documentation in which users could review the processing results before the software made a positive or negative match.

Unfortunately, this assumption of accuracy (and lack of context from providers) will likely lead to more and more instances of unwarranted arrests, like this one. It’s highly unlikely that facial recognition models will reach 100% accuracy anytime soon, but industry participants must focus on improving their effectiveness nonetheless. Knowing that these programs contain biases today, law enforcement and other organizations should use them as one of many tools – not as a definitive resource.

But there is hope. If the industry can honestly acknowledge and address the biases in facial recognition software, we can work together to improve model training and outcomes, which can help reduce misidentifications not only based on gender, but race and other variables, too.

How to implement expedited security strategies during a crisis

Cybersecurity professionals know all too well that crises tend to breed new threats to organizational security. The current COVID-19 pandemic is evidence of this. Health agencies are being attacked, massive phishing operations are underway, and security flaws in leading communications platforms are coming to light.

Even on an individual basis, people are more susceptible to scams, fraud and manipulation in times of fear. From January 1 until today, the US Federal Trade Commission has received over 124,140 fraud and ID theft reports related to COVID-19, with people reporting losses upwards of $80.3 million.

Despite the presence of a robust cybersecurity infrastructure, enterprise systems are not battle-tested to secure an entire workforce that is now based at home. Cybersecurity analysts can confirm that to properly manage a remote digital workforce, an enterprise should focus its security measures on three key pillars:

1. Double up on identity access management: Enacting multifactor authentication and cycling passwords are critically important during times of crisis, when phishing attempts spike and malicious hackers have an avenue into company data and resources.

2. Broaden connectivity awareness: Shield employees from parallel Wi-Fi networks set up by bad actors by increasing IT awareness and broadening VPN access. Employees who connect to a parallel (rogue) network by mistake can put the company at risk.

3. Reassess policies and procedures: Companies operating today are in unfamiliar territory and should continually reassess current cyber risk policies and procedures in order to identify and evaluate risks associated with potential threats and security weaknesses.

Overcoming security challenges in a crisis

As we’ve seen with COVID-19, a crisis can disrupt business significantly. Without plans for how to deal with such a disruption, businesses will face an overwhelming challenge of managing and securing network infrastructure as operations shift to accommodate changes within the organization. It is paramount that enterprises determine ahead of time what to do differently, should a time of crisis rear its head. This also translates into a major opportunity for security teams that can proactively begin to analyze current security measures and develop a business plan of what the future might look like.

As part of this plan, automation and artificial intelligence (AI) should take center-stage. Most modern networks are growing far too complex for humans to secure manually, and fighting a growing number of threats requires automated operational workflows and integrated threat intelligence.

In addition, a high degree of system integration with these technologies enables greater collaboration between security analysts, no matter where they’re located. It is also important to embed threat intelligence across multiple vectors (e.g. endpoints, privileged user access, machine communications), so that Communication Service Providers (CSPs) can detect and analyze potential threats in real time.

Security teams that have integrated their networks with automated, cognitively intelligent software, whether it be AI or machine learning (ML), have already been privy to its benefits. With access to dynamic scanning for threats and insight into potential vulnerabilities, teams can tackle challenges quickly, with more visibility and effectiveness.

These new software capabilities enable security operations teams to:

  • Oversee, manage and limit access to key operational systems and assets within the network to ensure that remote employees do not inadvertently or deliberately misuse privileged information.
  • Identify network vulnerabilities automatically, detect threats sooner, and reduce the number of false positives, saving time and preventing alert fatigue.
  • Flag and respond immediately to cyberattacks, minimizing the time needed to address each incident and the overall impact.

Automation and cognitive intelligence are critical to guarding enterprise infrastructure against scams, spear-phishing and zero-day attacks that can evade traditional signature-based security. By adopting these capabilities today, CSPs can set themselves up for longer-term networking success. With the rise of 5G, implementing strong security policies and procedures for complex networks has become more critical than ever. Through software that utilizes automation, AI and ML, operators can provide end-to-end quality across a diverse range of security use cases and business models in 5G.

Integrated cloud-native security platforms can overcome limitations of traditional security products

To close security gaps caused by rapidly changing digital ecosystems, organizations must adopt an integrated cloud-native security platform that incorporates artificial intelligence, automation, intelligence, threat detection and data analytics capabilities, according to 451 Research.


Cloud-native security platforms are essential

The report clearly defines how to create a scalable, adaptable, and agile security posture built for today’s diverse and disparate IT ecosystems. And it warns that legacy approaches and MSSPs cannot keep up with the speed of digital transformation.

  • Massive change is occurring. Over 97 percent of organizations reported they are underway with, or expecting, digital transformation progress in the next 24 months, and over 41 percent are allocating more than 50 percent of their IT budgets to projects that grow and transform the business.
  • Security platforms enable automation and orchestration capabilities across the entire IT stack, streamlining and optimizing security operations, improving productivity, enabling higher utilization of assets, increasing the ROI of security investments and helping address interoperability challenges created by isolated, multi-vendor point products.
  • Threat-driven and outcome-based security platforms address the full attack continuum, compared with legacy approaches that generally focus on defensive blocking of a single vector.
  • Modern security platforms leverage AI and ML to solve some of the most prevalent challenges for security teams, including expertise shortages, alert fatigue, fraud detection, behavioral analysis, risk scoring, correlating threat intelligence, detecting advanced persistent threats, and finding patterns in increasing volumes of data.
  • Modern security platforms are positioned to deliver real-time, high-definition visibility with an unobstructed view of the entire IT ecosystem, providing insights into the company’s assets, attack surface, risks and potential threats and enabling rapid response and threat containment.

451 Senior Analyst Aaron Sherrill noted, “The impact of an ever-evolving IT ecosystem combined with an ever-evolving threat landscape can be overwhelming to even the largest, most well-funded security teams, including those at traditional MSSPs.

“Unfortunately, a web of disparate and siloed security tools, a growing expertise gap and an overwhelming volume of security events and alerts continue to plague internal and service provider security teams of every size.

“The consequences of these challenges are vast, preventing security teams from gaining visibility, scaling effectively, responding rapidly and adapting quickly. Today’s threat and business landscape demands new approaches and new technologies.”

How to deliver effective cybersecurity today

“Delivering effective cybersecurity today requires being able to consume a growing stream of telemetry and events from a wide range of signal sources,” said Dustin Hillard, CTO, eSentire.

“It requires being able to process that data to identify attacks while avoiding false positives and negatives. It requires equipping a team of expert analysts and threat hunters with the tools they need to investigate incidents and research advanced, evasive attacks.

“Most importantly, it requires the ability to continuously upgrade detection and defenses. These requirements demand changing the technology foundations upon which cybersecurity solutions are built—moving from traditional security products and legacy MSSP services to modern cloud-native platforms.”

Sherrill further noted, “Cloud-native security platforms optimize the efficiency and effectiveness of security operations by hiding complexity and bringing together disparate data, tools, processes, workflows and policies into a unified experience.

“Infused with automation and orchestration, artificial intelligence and machine learning, big data analytics, multi-vector threat detection, threat intelligence, and machine and human collaboration, cloud-native security platforms can provide the vehicle for scalable, adaptable and agile threat detection, hunting, and response. And when combined with managed detection and response services, organizations are able to quickly bridge expertise and resource gaps and attain a more comprehensive and impactful approach to cybersecurity.”

Researchers develop new learning algorithm to boost AI efficiency

The high energy consumption of artificial neural networks’ learning activities is one of the biggest hurdles for the broad use of AI, especially in mobile applications. One approach to solving this problem can be gleaned from knowledge about the human brain.


Although the human brain has the computing power of a supercomputer, it needs only 20 watts, a millionth of the energy a supercomputer requires. One of the reasons for this is the efficient transfer of information between neurons in the brain. Neurons send short electrical impulses (spikes) to other neurons – but, to save energy, only as often as absolutely necessary.

Event-based information processing

A working group led by the two TU Graz computer scientists Wolfgang Maass and Robert Legenstein has adopted this principle in the development of the new machine learning algorithm e-prop (short for e-propagation).

Researchers at the Institute of Theoretical Computer Science, which is also part of the European lighthouse project Human Brain Project, use spikes in their model for communication between neurons in an artificial neural network.

The spikes only become active when they are needed for information processing in the network. Learning is a particular challenge for such sparsely active networks, since longer observation is needed to determine which neuron connections improve network performance.

Previous methods achieved too little learning success or required enormous storage space. E-prop now solves this problem by means of a decentralized method copied from the brain, in which each neuron documents when its connections were used in a so-called e-trace (eligibility trace).

The method is roughly as powerful as the best and most elaborate other known learning methods.
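
To give a rough feel for the bookkeeping involved, here is a heavily simplified sketch of an eligibility-trace update. It illustrates the general idea of local traces combined with a later learning signal, not the actual e-prop equations.

```python
# Conceptual sketch of an eligibility trace: each synapse keeps a local,
# decaying record of recent presynaptic spikes, and a weight update is made
# only when a learning signal arrives. Illustration only, not e-prop itself.
import numpy as np

n_pre, n_post = 4, 2
weights = np.zeros((n_pre, n_post))
e_trace = np.zeros((n_pre, n_post))
decay, lr = 0.9, 0.05

def step(pre_spikes, learning_signal):
    """pre_spikes: 0/1 vector of presynaptic spikes this time step.
    learning_signal: per-postsynaptic-neuron error signal (can be zero)."""
    global e_trace, weights
    # Local bookkeeping: decay the trace, then mark synapses whose
    # presynaptic neuron just fired.
    e_trace = decay * e_trace + np.outer(pre_spikes, np.ones(n_post))
    # Online update: no stored activity history, just trace x learning signal.
    weights += lr * e_trace * learning_signal[np.newaxis, :]

step(np.array([1, 0, 1, 0]), np.zeros(n_post))        # activity, no error yet
step(np.array([0, 1, 0, 0]), np.array([0.5, -0.2]))   # error arrives later
print(weights)
```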

Online instead of offline

With many of the machine learning techniques currently in use, all network activities are stored centrally and offline in order to trace every few steps how the connections were used during the calculations.

However, this requires a constant data transfer between the memory and the processors – one of the main reasons for the excessive energy consumption of current AI implementations. e-prop, on the other hand, works completely online and does not require separate memory even in real operation – thus making learning much more energy efficient.

Driving force for neuromorphic hardware

Maass and Legenstein hope that e-prop will drive the development of a new generation of mobile learning computing systems that no longer need to be programmed but learn according to the model of the human brain and thus adapt to constantly changing requirements.

The goal is to no longer have these computing systems learn energy-intensively exclusively via a cloud, but to efficiently integrate the greater part of the learning ability into mobile hardware components and thus save energy.

First steps to bring e-prop into the application have already been made. For example, the TU Graz team is working together with the Advanced Processor Technologies Research Group (APT) of the University of Manchester in the Human Brain Project to integrate e-prop into the neuromorphic SpiNNaker system, which has been developed there.

At the same time, TU Graz is working with researchers from the semiconductor manufacturer Intel to integrate the algorithm into the next version of Intel’s neuromorphic chip Loihi.

Key cybersecurity industry challenges in the next five years

What key challenges will the cybersecurity industry be dealing with in the next five years?


Pete Herzog, Managing Director at ISECOM, is so sure that artificial intelligence could be the biggest security problem to solve and the biggest answer to the privacy problem that he cofounded a company, Urvin.ai, with an eclectic group of coders and scientists to explore this.

AI (and machine learning with it) is like a naive child that trusts what you tell it, and is therefore susceptible to fraud, abuse, and tricks, he says. However, it is also like that stubborn, no-bullshit friend who is always going to tell it to you straight.

“From a privacy perspective, AI that controls your personal identity data and medical records will be sure to only give that information to who you tell it to. It has no interest in gossiping with its neighbors about you, and has no greed, vanity, or confirmation bias. We should harness that for protecting our identities and improve how we share it,” he told Help Net Security.

“From a security perspective it has a lot to learn about trust. Or rather, we have a lot to learn on how to program it to trust. It’s the newest, shiniest version of garbage in / garbage out if we don’t learn from our mistakes. At ISECOM we are spending a lot of effort on how we can make security tests for AI and learning how it fits into the OSSTMM framework as a new channel alongside Data Networks, Wireless, Physical, Human, Telecommunications, and Applications.”

Setting up ISECOM

Herzog and his wife Marta Barceló founded the Institute for Security and Open Methodologies in 2001.

ISECOM is a non-profit, open source research organization that maintains the Open Source Security Testing Methodology Manual (OSSTMM), Hacker Highschool (a cybersecurity curriculum for teens in high school) and a security certification authority, all the while operating as a specialty security boutique for securing iconic places that can’t be secured with traditional security products.

Before that they were cybersecurity consultants, so the switch to business owners was a drastic one.

“We jumped full in, no money, and had to find customers from day one. And let me tell you, keeping the connoisseurs of FOSS as happy as the veterans of military-grade security is a balancing act that nobody will get right all of the time,” he explained the challenges they faced.

“With age I learned perspective and humility. And between that and carefully picking my fights I probably protected both the brand and my sanity in the long run.”

In the last decade or so, Herzog also worked in parallel as a security analyst, writer, advisor or CISO with some well and lesser known security companies.

Cybersecurity industry problems

With all these experiences to draw on, we wondered what’s his opinion on the cybersecurity industry as a whole.

He believes one of the problems is the extreme fragmentation of what makes security.

“This fragmentation of specific skills and specific technology creates a differentiation and demand for niche products that focus on one, specific thing. Yet you’re supposed to implement it all, which entails hiring all the people and buying all the products to do it all. Consultants, trainers, universities, and government organizations then follow the crowd on the ‘more is better’ security and this fractures the market more and more until it seems you can’t be secure unless you have the blue spiral thing to stop the blue spiral packets,” he explained.

“Basic security analysis has you making decisions on at least 16 different things for each connection allowed, and a typical organization has thousands of connections to the outside and hundreds of thousands inside. Add web and mobile apps to the mix and you push the number up exponentially. Therefore, even the basic stuff is complicated and to do it thoroughly is exhausting – which is why we buy products to help. But if they fracture the products into thousands of little pieces of technology and operations all with special names we need to continuously re-learn then we’re back to it being as bad as not having the products at all. And that’s what’s wrong with the cybersecurity industry at the moment: we really are confusing the hell out of people as to what they actually need to have and do to be secure. It’s so bad that you can’t buy a penetration test today and know what you’ll get. Imagine buying an oil change like that! It’s ridiculous, confusing, and hurts everyone.”

He doesn’t assign any blame to cybersecurity salespeople, though.

“They see the pain their customers go through and how badly they need security. From their perspective it’s like they see the breach already happening, just really slowly – and they don’t want to have to see another breach. Additionally, everyone working in cybersecurity knows that each breach gives more resources to an enemy and eventually it’s overwhelming for everyone, even the salespeople,” he noted.

He says that the cybersecurity industry has room for more innovation, but that the real problem is not a general lack of it: attackers have at their disposal such a huge number of attack combinations that a product-based defense alone is not enough today. And cyber hygiene can only somewhat reduce the number of available attack types, not enough to help overburdened security staff secure everything.

Finally, he believes that people should not be a link in the security chain.

“People are our assets, not our security. The truth is that there is nothing that can’t be made more secure by removing the person from the process, so plan for them not being a link in your security chain and you’ll be more secure,” he concluded.

Does analyzing employee emails run afoul of the GDPR?

A desire to remain compliant with the European Union’s General Data Protection Regulation (GDPR) and other privacy laws has made HR leaders wary of any new technology that digs too deeply into employee emails. This is understandable, as GDPR non-compliance may lead to stiff penalties.

At the same time, new technologies are applying artificial intelligence (AI) and machine learning (ML) to solve HR problems like analyzing employee data to help with hiring, completing performance reviews or tracking employee engagement. This has great potential for helping businesses coach and empower employees (and thus help them retain top talent), but these tools often analyze employee emails as a data source. Does this create a privacy issue in regard to the GDPR?

In most cases, the answer is “no.” Let’s explore these misconceptions and explain how companies can stay compliant with global privacy laws while still using AI/ML workplace technologies to provide coaching and empowerment solutions to their employees.

Analyzing employee data with AI/ML isn’t unique to HR

First of all, many applications already analyze digital messages with AI/ML. Many of these are likely already used by your organization, and they do not ask for consent from every sender for every message they analyze. Antivirus software uses AI/ML to scan incoming messages for viruses, chatbots use it to answer support emails, and email clients themselves use AI/ML to suggest responses to common questions as the user types them or to create prompts to schedule meetings.

Applications like Gmail, Office 365 Scheduler, ZenDesk and Norton Antivirus do these tasks all the time. Office 365 Scheduler even analyzes emails using natural language processing to streamline the simple task of scheduling a meeting. Imagine if they had to ask for the user’s permission every time they did this! HR technologies that do something similar are not unique.
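None of the products named above publish their models, but the kind of message analysis they perform can be illustrated with a minimal, hypothetical sketch: a text classifier that flags suspicious emails without a human ever reading them. The library (scikit-learn), the example messages and the labels below are all assumptions made for illustration, not any vendor's actual pipeline.

```python
# Minimal sketch of ML-based message classification, similar in spirit to
# (but not the actual implementation of) the email-scanning features above.
# Assumes scikit-learn is installed; the training data here is made up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hypothetical labelled messages: 1 = suspicious, 0 = benign.
messages = [
    "Your invoice is attached, open immediately",
    "Reset your password using this link now",
    "Meeting moved to 3pm, see agenda attached",
    "Quarterly report draft for your review",
]
labels = [1, 1, 0, 0]

# TF-IDF features + naive Bayes: a classic baseline for scanning message text.
model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(messages, labels)

# Classify a new message without any person reading it.
print(model.predict(["Please verify your account by clicking here"]))
```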

Employers also process employees’ personal data without their consent on a daily basis. Consider these tasks: automatically storing employee communications, creating paperwork for employee reviews or disciplinary action, or sending payroll information to government agencies. Employees don’t need to give consent for any of this. That’s because there’s a different legal basis at work that allows the company to share data in this way.

Companies do not need employee consent in this context

This isn’t an issue because the GDPR offers five alternative legal bases pursuant to which employee personal data can be processed, including the pursuit of the employer’s “legitimate interests.” This concept is intentionally broad and gives organizations flexibility to determine whether their interests are appropriate, regardless of whether these interests are commercial, individual, or broader societal benefits, or even whether the interests are the company’s own or those of a third party.

GDPR regulations single out preventing fraud and direct marketing as two specific purposes where personal data may be processed in pursuit of legitimate interest, but there are many more.

These “legitimate interest” bases give employers grounds to process personal data using AI/ML applications without requiring consent. In fact, employers should avoid relying on consent to process employees’ personal data whenever possible. Employees are almost never in a position to voluntarily or freely give consent due to the imbalance of power inherent in employer-employee relationships, so such consents are often invalid. In all the cases listed above, the employer relies on legitimate interest to process employee data. HR tools fall into the same category and don’t require consent.

A right to control your inbox

We’ve established that employers can process email communication data internally with new HR tools that use AI/ML and be compliant with the GDPR. But should they?

Here is where we move from legal issues to ethical issues. Some companies that value privacy might believe that employees should control their own inbox, even though that’s not a GDPR requirement. That means letting employees grant and revoke permission to the applications that can read their workplace emails (and which have already been approved by the company). This lets the individual control their own data. Other organizations may value the benefits of new tools over employee privacy and may put them in place without employees’ consent.

I have seen some organizations create a middle ground by making these tools available to employees but requiring them to opt in to use them (rather than installing them and giving employees the option to opt out, which puts an extra burden on them to maintain privacy). This can both respect employees’ privacy and allow HR departments to use new technologies to empower individuals if they so choose. This is more important than ever in the new era of widespread work from home, where workplace communication is abundant and companies are charting new courses to help their employees thrive in the future of work.
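As a purely illustrative sketch of that opt-in middle ground (not taken from any specific HR product), the permission check below defaults to "no analysis" until an employee explicitly grants access, and honours revocation at any time; all names are hypothetical.

```python
# Illustrative sketch (not from any specific HR product) of the opt-in
# approach described above: analysis tools may only read an employee's mail
# if that employee has explicitly granted permission, which can be revoked.
from dataclasses import dataclass, field

@dataclass
class ConsentRegistry:
    # Opt-in default: nobody is analysed until they explicitly agree.
    granted: set = field(default_factory=set)

    def opt_in(self, employee_id: str) -> None:
        self.granted.add(employee_id)

    def revoke(self, employee_id: str) -> None:
        self.granted.discard(employee_id)

    def may_analyze(self, employee_id: str) -> bool:
        return employee_id in self.granted

registry = ConsentRegistry()
registry.opt_in("employee-42")
assert registry.may_analyze("employee-42")
assert not registry.may_analyze("employee-7")   # never opted in
registry.revoke("employee-42")
assert not registry.may_analyze("employee-42")  # revocation honoured
```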

While these solutions can be powerful and may help your employees become more self-aware and better leaders, organizations should fully understand the compliance and privacy issues associated with their use in order to roll them out effectively.

OPTIMUSCLOUD: Cost and performance efficiency for cloud-hosted databases

A Purdue University data science and machine learning innovator wants to help organizations and users get the most for their money when it comes to cloud-based databases. The same technology may also help self-driving vehicles operate more safely on the road, where latency is the primary concern.

Somali Chaterji, a Purdue assistant professor of agricultural and biological engineering who directs the Innovatory for Cells and Neural Machines [ICAN], and her team created a technology called OPTIMUSCLOUD.

A benefit for both cloud vendors and customers

The system is designed to achieve cost and performance efficiency for cloud-hosted databases, rightsizing resources to benefit both cloud vendors, who no longer have to aggressively over-provision their cloud-hosted servers for fail-safe operation, and clients, because the data center savings can be passed on to them.

“It also may help researchers who are crunching their research data on remote data centers, compounded by the remote working conditions during the pandemic, where throughput is the priority,” Chaterji said. “This technology originated from a desire to increase the throughput of data pipelines to crunch microbiome or metagenomics data.”

This technology works with the three major cloud database providers: Amazon’s AWS, Google Cloud, and Microsoft Azure. Chaterji said it would work with other more specialized cloud providers such as Digital Ocean and FloydHub, with some engineering effort.

It is benchmarked on Amazon’s AWS cloud computing services with the NoSQL technologies Apache Cassandra and Redis.

“Let’s help you get the most bang for your buck by optimizing how you use databases, whether on-premise or cloud-hosted,” Chaterji said. “It is no longer just about computational heavy lifting, but about efficient computation where you use what you need and pay for what you use.”

Handling long-running, dynamic workloads

Chaterji said current cloud technologies that use automated decision making often work only for short, repetitive tasks and workloads. Her team created a system that finds optimal configurations for long-running, dynamic workloads, whether they come from the ubiquitous sensor networks on connected farms, from high-performance computing in scientific applications, or from the COVID-19 simulations currently being run around the world in the rush to find a cure for the virus.

“Our right-sizing approach is increasingly important with the myriad applications running on the cloud with the diversity of the data and the algorithms required to draw insights from the data and the consequent need to have heterogeneous servers that drastically vary in costs to analyze the data flows,” Chaterji said.

“The prices for on-demand instances on Amazon EC2 vary by more than a factor of five thousand, depending on the virtual machine instance type you use.”

Chaterji said OPTIMUSCLOUD has numerous applications for databases used in self-driving vehicles (where latency is a priority), healthcare repositories (where throughput is a priority), and IoT infrastructures in farms or factories.

OPTIMUSCLOUD: Using machine learning and data science principles

OPTIMUSCLOUD is software that runs alongside the database server. It uses machine learning and data science principles to develop algorithms that jointly optimize the virtual machine selection and the database management system configuration options.
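OPTIMUSCLOUD's actual optimizer is not detailed in this article, so the sketch below only illustrates the general idea it describes: jointly scoring (virtual machine, database configuration) pairs and picking the one with the best predicted throughput per dollar. The instance types, prices, Cassandra-style knobs and the toy performance model are all invented for illustration.

```python
# Illustrative sketch of jointly scoring VM types and database configurations
# by performance per dollar. This is NOT OPTIMUSCLOUD's actual algorithm;
# instance names, prices and the throughput model are hypothetical.
from itertools import product

# Hypothetical hourly prices (USD) for a few cloud VM types.
vm_prices = {"small": 0.10, "medium": 0.40, "large": 1.60}

# Hypothetical Cassandra-style knobs to tune alongside the VM choice.
db_configs = [
    {"compaction": "size_tiered", "concurrent_reads": 32},
    {"compaction": "leveled", "concurrent_reads": 64},
]

def predicted_throughput(vm: str, cfg: dict) -> float:
    """Stand-in for a learned performance model (ops/sec).

    A real system would train this predictor from benchmark runs of the
    workload; here it is a toy function so the example runs end to end."""
    base = {"small": 5_000, "medium": 18_000, "large": 60_000}[vm]
    bonus = 1.2 if cfg["compaction"] == "leveled" else 1.0
    return base * bonus * (cfg["concurrent_reads"] / 32) ** 0.5

# Pick the (VM, config) pair with the best predicted throughput per dollar.
best = max(
    product(vm_prices, db_configs),
    key=lambda pair: predicted_throughput(*pair) / vm_prices[pair[0]],
)
print("best joint choice:", best)
```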

“Also, in these strange times when both traditionally compute-intensive laboratories such as ours and wet labs are relying on compute storage, such as to run simulations on the spread of COVID-19, throughput of these cloud-hosted VMs is critical and even a slight improvement in utilization can result in huge gains,” Chaterji said.

“Consider that currently, even the best data centers run at lower than 50% utilization and so the costs that are passed down to end-users are hugely inflated.”

“Our system takes a look at the hundreds of options available and determines the best one normalized by the dollar cost,” Chaterji said. “When it comes to cloud databases and computations, you don’t want to buy the whole car when you only need a tire, especially now when every lab needs a tire to cruise.”

Increased attacks and the power of a fully staffed cybersecurity team

The cybersecurity landscape is constantly evolving, and even more so during this time of disruption. According to ISACA’s survey, most respondents believe that their enterprise will be hit by a cyberattack soon – with 53 percent believing it is likely they will experience one in the next 12 months.

Cyberattacks continuing to increase

The survey found cyberattacks are also continuing to increase, with 32 percent of respondents reporting an increase in the number of attacks relative to a year ago. However, there is a glimmer of hope: the share of respondents reporting an increase keeps declining over time; last year, just over 39 percent answered the same way.

While attacks are going up—with the top attack types reported as social engineering (15 percent), advanced persistent threat (10 percent) and ransomware and unpatched systems (9 percent each)—respondents believe that cybercrime remains underreported.

Sixty-two percent of professionals believe that enterprises are failing to report cybercrime, even when they have a legal or contractual obligation to do so.

“These survey results confirm what many cybersecurity professionals have known for some time, and in particular during this health crisis—that attacks have been increasing and are likely to impact their enterprise in the near term,” says Ed Moyle, founding partner, SecurityCurve.

“It also reveals some hard truths our profession needs to face around the need for greater transparency and communication around these attacks.”

Security program tools

Among the tools used in security programs for fighting these attacks are AI and machine learning solutions, and the survey asked about these for the first time this year. While these options are available to incorporate into security solutions, only 30 percent of those surveyed use these tools as a direct part of their operations capability.

The survey also found that while the number of respondents indicating they are significantly understaffed fell by seven percentage points from last year, a majority of organizations (62 percent) remain understaffed. Understaffed security teams and those struggling to bring on new staff are less confident in their ability to respond to threats.

Only 21 percent of “significantly understaffed” respondents report that they are completely or very confident in their organization’s ability to respond to threats, whereas those who indicated their enterprise was “appropriately staffed” have a 50 percent confidence level.

Cybersecurity hiring and retention

The impact goes even further, with the research finding that enterprises struggling to fill roles experience more attacks, and that the length of time it takes to hire is a factor. For example, 35 percent of respondents in enterprises that take three months to fill a role reported an increase in attacks, compared with 38 percent in those taking six months or more.

Additionally, 42 percent of organizations that are unable to fill open security positions are experiencing more attacks this year.

“Security controls come down to three things—people, process and technology—and this research spotlights just how essential people are to a cybersecurity team,” says Sandy Silk, Director of IT Security Education & Consulting, Harvard University, and ISACA cybersecurity expert.

“It is evident that cybersecurity hiring and retention can have a very real impact on the security of enterprises. Cybersecurity teams need to think differently about talent, including seeking non-traditional candidates with diverse educational levels and experience.”

A math formula could help 5G networks efficiently share communications frequencies

Researchers at the National Institute of Standards and Technology (NIST) have developed a mathematical formula that, computer simulations suggest, could help 5G and other wireless networks select and share communications frequencies about 5,000 times more efficiently than trial-and-error methods. The novel formula is a form of machine learning that … More

Photo: NIST engineer Jason Coder makes mathematical calculations for a machine learning formula that may help 5G and other wireless networks select and share communications frequencies efficiently.
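The NIST formula itself is not reproduced in the excerpt above, so the following is only a generic stand-in for the idea of learning a good frequency choice rather than testing every option by trial and error: a simple epsilon-greedy selector over hypothetical channels. It is not the NIST method, and every number in it is made up.

```python
# Purely illustrative epsilon-greedy channel selection; this is NOT the NIST
# formula, just a generic example of learning which frequency to use instead
# of exhaustively trying every option.
import random

channels = [0, 1, 2, 3]             # candidate frequency channels
value = {c: 0.0 for c in channels}  # running estimate of per-channel reward
count = {c: 0 for c in channels}
epsilon = 0.1                       # exploration rate

def observe_reward(channel: int) -> float:
    """Stand-in for measured link quality on the chosen channel."""
    true_quality = {0: 0.2, 1: 0.9, 2: 0.5, 3: 0.3}[channel]
    return true_quality + random.uniform(-0.05, 0.05)

for _ in range(1000):
    if random.random() < epsilon:
        c = random.choice(channels)       # explore
    else:
        c = max(channels, key=value.get)  # exploit best estimate so far
    r = observe_reward(c)
    count[c] += 1
    value[c] += (r - value[c]) / count[c]  # incremental mean update

print("learned best channel:", max(channels, key=value.get))
```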

Open source algorithms for network graph analysis help discover patterns in data

StellarGraph has launched a series of new algorithms for network graph analysis to help discover patterns in data, work with larger data sets and speed up performance while reducing memory usage.

Problems like fraud and cybercrime are highly complex and involve densely connected data from many sources.

One of the challenges data scientists face when dealing with connected data is how to understand relationships between entities, as opposed to looking at data in silos, to provide a much deeper understanding of the problem.

Tim Pitman, Team Leader of the StellarGraph Library, said solving great challenges requires broader context than simpler algorithms often allow.

“Capturing data as a network graph enables organizations to understand the full context of problems they’re trying to solve – whether that be law enforcement, understanding genetic diseases or fraud detection. We’ve developed a powerful, intuitive graph machine learning library for data scientists—one that makes the latest research accessible to solve data-driven problems across many industry sectors.”

Lower memory usage and better performance

The version 1.0 release by the team at CSIRO’s Data61 delivers three new algorithms into the library, supporting graph classification and spatio-temporal data, in addition to a new graph data structure that results in significantly lower memory usage and better performance.
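As a minimal sketch of the pandas-backed graph data structure mentioned above (assuming the StellarGraph 1.x constructor that accepts node and edge DataFrames), the example below builds a tiny graph with made-up features and prints its summary.

```python
# Minimal sketch of building a graph with StellarGraph 1.x's pandas-backed
# data structure (the lower-memory representation mentioned above).
# The node features and edge list are made up for illustration.
import pandas as pd
from stellargraph import StellarGraph

# Node feature matrix: one row per node, columns are numeric features.
nodes = pd.DataFrame(
    {"feat1": [0.1, 0.7, 0.3, 0.9], "feat2": [1.0, 0.2, 0.5, 0.4]},
    index=["a", "b", "c", "d"],
)

# Edge list referencing the node index above.
edges = pd.DataFrame({"source": ["a", "b", "c"], "target": ["b", "c", "d"]})

graph = StellarGraph(nodes, edges)
print(graph.info())  # summary of nodes, edges and feature shapes
```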

The discovery of patterns and knowledge from spatio-temporal data is increasingly important and has far-reaching implications for many real-world phenomena like traffic forecasting, air quality and potentially even movement and contact tracing of infectious disease—problems suited to deep learning frameworks that can learn from data collected across both space and time.

Testing of the new graph classification algorithms included experimenting with training graph neural networks to predict the chemical properties of molecules, advances which could show promise in enabling data scientists and researchers to locate antiviral molecules to fight infections, like COVID-19.

The broad capability and enhanced performance of the library is the culmination of three years’ work to deliver accessible, leading-edge algorithms.

Mr Pitman said: “The new algorithms in this release open up the library to new classes of problems to solve, including fraud detection and road traffic prediction. We’ve also made the library easier to use and worked to optimize performance, allowing our users to work with larger data.”

Network graph analysis implementation

StellarGraph has been used to successfully predict Alzheimer’s genes, deliver advanced human resources analytics, and detect Bitcoin ransomware, and as part of a Data61 study, the technology is currently being used to predict wheat population traits based on genomic markers which could result in improved genomic selection strategies to increase grain yield.

The technology can be applied to network datasets found across industry, government and research fields, and exploration has begun in applying StellarGraph to complex fraud, medical imagery and transport datasets.

Alex Collins, Group Leader Investigative Analytics, CSIRO’s Data61 said, “The challenge for organizations is to get the most value from their data. Using network graph analytics can open new ways to inform high-risk, high-impact decisions.”

What is the impact of AI and ML tools on cybersecurity?

89% of IT professionals believe their company could be doing more to defend against cyberattacks, with 64% admitting they are not sure what AI/ML means – despite increased adoption at a global scale, Webroot reveals.

The report, which reveals how global IT professionals perceive and utilize these advancing technologies in business, also found that the UK has the highest use of AI/machine learning in its current cyber security tools when compared with USA, Japan, New Zealand and Australia.

The importance of leveraging AI and ML tools

With the UK currently in lockdown to tackle the spread of coronavirus, thousands more people are staying at home to work. This means it’s never been more important for employers to leverage AI and ML tools to maintain cyber resilience.

And, with the average duration of a phishing attack dropping from days to roughly 30 minutes, it’s clear from the results of the report that businesses need to do more to ensure staff are properly educated on how to use the cybersecurity tools at their disposal effectively.

Matt Aldridge, Principal Solutions Architect, Webroot, said: “It’s clear from these findings that there is still a lot of confusion around artificial intelligence and machine learning, especially in terms of these technologies’ use in business cybersecurity, and there is skepticism across all geographies with respect to how much benefit AI/ML brings.

“It’s crucial that businesses improve their understanding in order to realize maximum value. By vetting and partnering with cybersecurity vendors who have long-standing experience using and developing AI/ML, and who can provide expert guidance, we expect businesses will be more likely to achieve the highest levels of cyber resilience and effectively maximize the capabilities of the human analysts on their teams.”

Researchers develop self-healing and self-concealing PUF for hardware security

A team of researchers from the National University of Singapore (NUS) has developed a novel technique that allows Physically Unclonable Functions (PUFs) to produce more secure, unique ‘fingerprint’ outputs at a very low cost. This achievement enhances the level of hardware security even in low-end systems on chips. Traditionally, PUFs are embedded in several commercial … More

Photo: NUS researchers Prof Massimo Alioto (left) and Mr Sachin Taneja (right) testing the self-healing and self-concealing PUF for hardware security.
