Malware evolves fast. Loaders, stealers, and ransomware variants change so quickly that keeping up with them has become a real challenge, and analyzing them grows harder and more time-consuming. But cybersecurity specialists can't afford to wait: delays can cause serious damage. So how can you speed up malware analysis? Let's find out.
The goal of malware analysis is to research a malicious sample: its functions, origin, and possible effects on the infected system. This data allows analysts to detect malware, react to the attack effectively, and enhance security.
Generally, there are two ways to perform malware analysis:
- Static analysis: gather information about a malicious program without running it, simply by inspecting it. With this approach, you can investigate content, patterns, attributes, and artifacts. However, advanced malware is very hard to analyze with static techniques alone.
- Dynamic analysis: execute the malware on hardware or, more frequently, in a sandbox, and observe it to figure out its functionality. The great advantage here is that the virtual machine lets you research malicious files with no risk to your own system.
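As a toy illustration of the static approach, here is a minimal Python sketch that pulls printable strings out of a binary blob, much like the Unix `strings` utility; the sample bytes and the embedded URL are invented for the demo:

```python
import re

def extract_strings(data: bytes, min_len: int = 4):
    """Return printable ASCII strings of at least min_len bytes,
    similar to what the Unix `strings` utility reports."""
    pattern = rb"[\x20-\x7e]{%d,}" % min_len
    return [m.decode("ascii") for m in re.findall(pattern, data)]

# Fake binary blob with an embedded (made-up) C2 URL
blob = b"\x00\x01MZ\x90\x00" + b"http://example-c2.test/payload" + b"\xff\xfe"
print(extract_strings(blob))  # ['http://example-c2.test/payload']
```

Embedded URLs, registry paths, and mutex names surfaced this way are often the first static clues, though packed or obfuscated samples will yield little, which is exactly why dynamic analysis is needed.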
Dynamic analysis centers on the sandbox: a tool for executing suspicious programs from untrusted sources in an environment that keeps the host machine safe. Sandboxes support different approaches to analysis; they can be automated or interactive.
Online automated sandboxes let you upload a sample and get a report about its behavior, which is a good solution, especially compared to assembling and configuring a separate machine for these needs. Unfortunately, modern malicious programs can often tell whether they are running on a virtual machine or a real computer, and many require users to be active during execution, which automated sandboxes can't simulate. The alternative, deploying your own virtual environment, installing operating systems, and setting up the software needed for dynamic analysis (traffic interception, file-change monitoring, and so on), is costly in its own way.
Moreover, adjusting settings for every file takes a lot of time, and you can't influence the analysis directly anyway. Keep in mind that analysis doesn't always go to plan, and things may not work out for a given sample. Finally, automated analysis lacks the speed we need: a full analysis cycle can take up to half an hour. All of these drawbacks can hurt security if an unusual sample goes undetected. Thankfully, we now have interactive sandboxes.
With ANY.RUN, you can detect, analyze, and monitor threats. One of its main advantages is that malware can be tricked into executing as if it were launched on a real machine. A user can influence the simulation and interact with the virtual environment: click the mouse, input data, reboot the system, open files, and so on. You receive initial results right after a task is run, and one or two minutes are usually enough to complete the research after a task ends. You can also collect Indicators of Compromise (IOCs), information that helps detect a threat in the network. Cybersecurity specialists can use IOCs to identify which malicious program has gotten into the system, or analyze samples and collect data to protect organizations from possible attacks.
Fast analysis with ANY.RUN
ANY.RUN users can make their research public, so their tasks are available in the public submissions, a huge database of fresh malware samples and completed reports with more than 8,000 uploads per day. You can use it to speed up your own analysis: the simplest way is to search by hash sum, since the sample may already have been investigated.
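The hash-search workflow starts with computing the sample's digest locally. A minimal Python sketch (the temporary file here is a throwaway stand-in for a real sample):

```python
import hashlib
import os
import tempfile

def file_sha256(path: str) -> str:
    """Compute the SHA-256 of a file in chunks, so even large
    samples can be hashed without loading them into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# Demo with a throwaway file standing in for a suspicious sample
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"suspicious sample bytes")
digest = file_sha256(tmp.name)
os.unlink(tmp.name)
print(digest)  # paste this digest into the public-submissions search
```

If the hash turns up an existing task, you get a completed report in seconds instead of running a fresh analysis.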
Let’s analyze one of the submissions to see the fast analysis in action.
By looking at the process tree, we can see the EXCEL.EXE process running, and just a couple of seconds later EQNEDT32.EXE starts executing. This means the Microsoft Equation Editor vulnerability (CVE-2017-11882) was exploited, which tells us the sample is malicious. After exploitation, the EQNEDT32.EXE process downloads and starts an executable from the command-and-control server. Thanks to the easy-to-understand GUI, we can tell the analyzed sample is malicious just 3 seconds after the task starts, so we can complete the analysis within minutes.
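The same heuristic is easy to automate: given a process tree as parent/child name pairs, flagging EXCEL.EXE spawning EQNEDT32.EXE takes a few lines. The tree data below is invented for illustration:

```python
# EXCEL.EXE spawning EQNEDT32.EXE is the telltale sign of
# CVE-2017-11882 (Equation Editor) exploitation.
SUSPICIOUS_PAIRS = {("EXCEL.EXE", "EQNEDT32.EXE")}

def flag_exploit_pairs(process_tree):
    """Return the (parent, child) pairs matching known exploit patterns."""
    return [pair for pair in process_tree if pair in SUSPICIOUS_PAIRS]

# Invented process tree for the demo
tree = [("explorer.exe", "EXCEL.EXE"), ("EXCEL.EXE", "EQNEDT32.EXE")]
print(flag_exploit_pairs(tree))  # [('EXCEL.EXE', 'EQNEDT32.EXE')]
```

Real detections would normalize case and match on more context, but the parent/child relationship alone is a strong signal here.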
And as we can see from the picture below, 14 seconds is more than enough for the malware family to be detected by the network's Suricata rules. Note that this sample is also detected by local signatures after it creates files and writes to the registry. Real-time analysis starts a few seconds after the task is launched: once the Excel file is opened, the infection process begins. In the virtual machine, we have the opportunity to react to it and perhaps trigger the potential malware to act.
The RegAsm.exe system process is injected; it then steals personal data, drops applications, and changes the autorun value in the registry. Moreover, Lokibot is detected. From "HTTP Requests" we also learn that EQNEDT32.EXE downloads the main payload from the URL http://22.214.171.124, and in the "Connections" field we can see that RegAsm.exe connects to microdots.in.
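Indicators like these can be harvested from report text automatically. A hedged sketch (the IP and path below are placeholders, not the sample's real infrastructure; the toy TLD list would need extending for real use):

```python
import re

URL_RE = re.compile(r"https?://[^\s\"']+")
# Deliberately small TLD list, just enough for the demo
DOMAIN_RE = re.compile(r"\b[a-z0-9-]+(?:\.[a-z0-9-]+)*\.(?:com|net|org|in|ru)\b", re.I)

def extract_iocs(text):
    """Pull URL and domain indicators out of free-form report text."""
    urls = [u.rstrip(".,;)") for u in URL_RE.findall(text)]
    return {"urls": urls, "domains": sorted(set(DOMAIN_RE.findall(text)))}

# Placeholder report line; the IP is invented, the domain is from the task
report = ("EQNEDT32.EXE downloaded the payload from http://198.51.100.7/load.exe; "
          "RegAsm.exe then connected to microdots.in.")
print(extract_iocs(report))
```

Collected IOCs like these can then feed blocklists or retrospective hunts across the network.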
Our task is still running, and we have already collected a lot of data. But if something seems off, for instance the executed file or maldoc didn't detonate, you can relaunch the task with new configurations: pick a different system locale, run it with Tor, or choose another OS. You may get a completely different outcome within a couple of minutes.
After that, you can spend more time on a deeper investigation with the help of the MITRE ATT&CK matrix and the process graph. You can work on a sample jointly, and save or share your task with colleagues using various types of reports.
Sometimes you don't need to wait for local or network signatures to detect the malware family: you can determine it yourself in no time! For example, Wshrat requests its payload via the POST method and names itself in the process, 21 seconds after the task below is started.
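Because WSHRAT-style agents announce the family name in their HTTP traffic, a plain substring scan over request headers is often enough to identify them. The log lines and token list below are hypothetical:

```python
# Hypothetical family tokens; real lists would come from threat intel
FAMILY_TOKENS = ("WSHRAT", "LOKIBOT", "AGENTTESLA")

def identify_family(http_log_lines):
    """Return the first family token found in captured HTTP lines."""
    for line in http_log_lines:
        for token in FAMILY_TOKENS:
            if token in line.upper():
                return token
    return None

# Invented capture of a beacon; WSHRAT famously self-identifies
log = ["POST /is-ready HTTP/1.1", "User-Agent: WSHRAT|A1B2C3|victim-pc"]
print(identify_family(log))  # WSHRAT
```

This is exactly the kind of quick manual check that beats waiting for a signature to fire.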
Here's another situation: your sample doesn't run in the chosen environment. For example, the malware checks the locale and doesn't start executing if the language doesn't meet its criteria. In the task below, a malicious document checks the language of the operating system and Microsoft Office, and the malware runs only if the locale is Italian (it-IT rather than the default en-US).
In a virtual machine, adding an extra language to both the OS and Office takes many manual steps. With ANY.RUN, you just restart the task with a different locale, which is a real time-saver: a couple of seconds instead of minutes or hours!
And this isn't the only example: malware can check the environment, run only on 64-bit systems, be geofenced, and so on. All you need to do is expand your analysis in a couple of mouse clicks, instead of wasting hours creating new virtual environments, making snapshots, downloading software, and reading manuals. Save your time!
Cybersecurity professionals need to evaluate threats fast and respond efficiently, before damage occurs. Since all basic functionality of ANY.RUN is free, you can try it out and see how it can save your time and speed up malware analysis.
As the number of data breaches shows no signs of decreasing, the clamor to replace passwords with biometric authentication continues to grow. Biometrics are becoming widely incorporated to secure organizations from unauthorized access and the growing appeal of these security solutions is expected to create a market worth $41.8 billion by 2023, according to MarketsandMarkets.
Password reuse is the fundamental reason why data breaches continue to happen. In recent years biometrics have increasingly been lauded as a superior authentication solution to passwords. However, biometrics are not immune from problems and once you look under the hood, they bring their own set of challenges.
There are several flaws, including one with potentially fatal implications, that organizations can’t and shouldn’t ignore when exploring biometric authentication. These include:
1. Biometrics are forever
This is the Achilles heel: once a biometric is exposed/compromised, you can’t replace it. There is no way to refresh or update your fingerprint, your retina, or your face. Therefore, if a user’s biometric information is exposed, then any account using this authentication method is at risk, and there is no way to reverse the damage.
Biometrics are on display, leaving them open to potential exploitation. For example, facial information can be obtained online or through a photo of someone, unlike passwords, which remain private unless stolen. With a detailed enough representation of a biometric marker, it’s possible to spoof it and, with the rise of deep-fake technology, it will become even easier to spoof biometrics.
Because biometrics are forever, it's vital that organizations make it as difficult as possible for attackers to recover biometric data if there is a breach. They can do this by using a strong hashing algorithm and never storing biometric data in plain text.
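As a narrow illustration of the no-plaintext rule, here is a salted key-derivation sketch. Note the heavy caveat: real biometric readings vary between captures, so production systems rely on fuzzy matching schemes (secure sketches, fuzzy extractors) rather than exact hashes; this assumes an already-canonicalized template and only shows that raw bytes need never be stored:

```python
import hashlib
import hmac
import os

def store_template(template: bytes):
    """Store only a salted, stretched hash of a canonicalized
    template, never the raw bytes (illustrative only)."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", template, salt, 100_000)
    return salt, digest

def verify_template(template: bytes, salt: bytes, digest: bytes) -> bool:
    """Constant-time comparison against the stored digest."""
    candidate = hashlib.pbkdf2_hmac("sha256", template, salt, 100_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = store_template(b"canonical-template-bytes")
print(verify_template(b"canonical-template-bytes", salt, digest))  # True
```

The same storage discipline that applies to passwords (salt, stretch, never log plaintext) is the floor, not the ceiling, for biometric data.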
2. Device/service limitations
Despite the ubiquity of devices with biometric scanners and the number of apps that support biometric authentication, many devices can’t incorporate the technology. While biometrics are commonplace in smart devices, this is not the case with many desktop or laptop computers, which still don’t include biometric readers. Also, when it comes to signing into websites via a browser, the use of biometric authentication is currently extremely limited. Therefore, until every device and browser is compatible, relying solely on biometric authentication is not even a possibility.
The most widespread consumer-oriented biometric authentication approaches (Apple’s TouchID/FaceID and the Android equivalents) are essentially client-side only – acting as a key that unlocks a locally stored set of authentication credentials for the target application or service.
While this approach works well for this use case and has the advantage of not storing sensitive biometric signatures on servers, it precludes the possibility of having this be the only authentication mechanism (i.e., if I try to access the service from a different device, I’ll have to re-authenticate using credentials such as a username and password before I can re-enable biometric authentication, assuming the new device even supports it). To truly have a biometric-first (or biometric-only) authentication approach, you need a different model – one where the biometric signature is stored server-side.
3. Spoofing threats
Another concern with biometric authentication systems is that scanner devices have proven susceptible to spoofing. Hackers have succeeded in making scanners accept casts, molds, or other replicas of valid users' fingerprints or faces. Although liveness detection has come a long way, it is still far from perfect. Until spoof detection becomes more sophisticated, this risk will remain.
4. Biometric changes
The possibility of changes to users’ biometrics (injury to or loss of a fingerprint for instance, or a disfiguring injury to the face) is another potential issue, especially in the case where biometric authentication is the only authentication method in use and there is no fallback available.
If a breach happens through biometric authentication, once a cybercriminal gains access, they can change the logins for the affected accounts and lock the legitimate user out. This puts the onus on organizations to alert users to take immediate action to mitigate the risk. If there is a breach, both enterprises and users should immediately turn off biometrics on their devices and revert to the default, usually passwords or passcodes.
Adopting a layered approach to authentication
Rather than searching for a magic bullet for authentication, organizations need to embrace a layered approach to security. In the physical world, you would never rely solely on one solution and in the digital world, you should adopt the same philosophy. In addition to this layered approach, organizations should focus on hardening every element to shore up their digital defenses.
The simplicity and convenience of biometrics will ensure that it continues to be an appealing option for both enterprises and users. However, relying solely on biometric authentication is a high-risk strategy due to the limitations outlined above. Instead, organizations should deploy biometrics selectively as part of the overall identity management strategy, but they must include other security elements to mitigate the potential risks. It’s clear that, despite the buzz, 2021 will not be the year that biometrics replace passwords.
Love them or loathe them, passwords will remain a fixture in our digital lives.
As we near 2021, it seems that the changes to our working life that came about in 2020 are set to remain. Businesses are transforming as companies continue to embrace remote working practices to adhere to government guidelines. What does the next year hold for organizations as they continue to adapt in the age of the Everywhere Enterprise?
We will see the rush to the cloud continue
The pandemic saw more companies than ever move to the cloud as they sought collaboration and productivity tools for employee bases working from home. We expect that surge to continue as more companies realize the importance of the cloud in 2021. Businesses are prepared to preserve these new working models in the long term, some perhaps permanently: Google urged employees to continue working from home until at least next July and Twitter stated employees can work from home forever if they prefer.
Workforces around the world need to keep using alternatives to physical face-to-face meetings, and remote collaboration tools will help. Cloud-based tools are perfect for that kind of functionality, which is partly why many customers that are not yet in the cloud want to be. Customers who have already started the cloud migration journey are also moving more resources to public cloud infrastructure.
People will be the new perimeter
While people will eventually return to the office, they won’t do so full-time, and they won’t return in droves. This shift will close the circle on a long trend that has been building since the mid-2000s: the dissolution of the network perimeter. The network and the devices that defined its perimeter will become even less special from a cybersecurity standpoint.
Instead, people will become the new perimeter. Their identity will define what they’re allowed to access, both inside and outside the corporate network. Even when they are logged into the network, they will have minimal access to resources until they and the device they are using have been authenticated and authorized. This approach, known as zero trust networking, will pervade everything, covering not just employees, but customers, contractors, and other business partners.
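The zero-trust rule described above, where every request is authorized on identity plus device posture rather than network location, can be sketched as a toy policy check (all names and resources here are invented):

```python
from dataclasses import dataclass

@dataclass
class Request:
    user_authenticated: bool   # identity verified for this session
    device_compliant: bool     # device posture checked and healthy
    resource: str              # what is being accessed

# Invented per-user entitlements; real deployments pull these from IAM
ALLOWED = {"alice": {"crm", "wiki"}}

def authorize(user: str, req: Request) -> bool:
    """Grant nothing by default: both identity and device must pass,
    and the user must be explicitly entitled to the resource."""
    if not (req.user_authenticated and req.device_compliant):
        return False
    return req.resource in ALLOWED.get(user, set())

print(authorize("alice", Request(True, True, "crm")))   # True
print(authorize("alice", Request(True, False, "crm")))  # False
```

The key design point is the default-deny shape: being "inside the network" appears nowhere in the decision.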
User experience will be increasingly important in remote working
Happy, productive workers are even more important during a pandemic, especially as employees are, on average, working three hours longer since it started, disrupting their work-life balance. It's up to employers to focus on the user experience and make workers' lives as easy as possible.
When the COVID-19 lockdown began, companies coped by expanding their remote VPN usage. That got them through the immediate crisis, but it was far from ideal. On-premises VPN appliances suffered a capacity crunch as they struggled to scale, creating performance issues, and users found themselves dealing with cumbersome VPN clients and log-ins. It worked for a few months, but as employees settle in to continue working from home in 2021, IT departments must concentrate on building a better remote user experience.
Old-school remote access mechanisms will fade away
This focus on the user experience will change the way that people access computing resources. In the old model, companies used a full VPN to tunnel all traffic via the enterprise network. This introduced latency issues, especially when accessing applications in the cloud because it meant routing all traffic back through the enterprise data center.
It's time to stop routing cloud sessions through the enterprise network. Instead, companies should let remote workers access cloud applications directly, which means sanitizing traffic either on the device itself or in the cloud.
User authentication improvements
Part of that new approach to authentication involves better user verification. That will come in two parts. First, it’s time to ditch the password. The cybersecurity community has advocated this for a long time, but the work-from-home trend will accelerate it. Employees accessing from mobile devices are increasingly using biometric authentication, which is more secure and convenient.
The second improvement to user verification will see people logging into applications less often. Sessions will persist for longer, based on deep agent-based device knowledge that will form a big part of the remote access experience.
Changing customer interactions will require better mobile security
It isn’t just employees who will need better mobile security. Businesses will change the way that they interact with customers too. We can expect fewer person-to-person interactions in retail as social distancing rules continue. Instead, contact-free transactions will become more important and businesses will move to self-checkout options. Retailers must focus more on mobile devices for everything from browsing products, to ordering and payment.
The increase in QR codes presents a great threat
Retailers and other companies are already using QR codes more and more, and will continue to, as a contact-free bridge to things like menus and payment systems and a way to comply with social distancing rules. Users can scan them from two meters away, making them well suited to payments and product information.
The problem is that QR codes were never designed for these applications or for digital authentication, and they can easily be swapped for malicious ones that manipulate smartphones in unexpected and damaging ways. We can expect QR code fraud to increase as usage of these codes expands in 2021.
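One practical mitigation on the client side is to treat a scanned QR payload as untrusted input and check it before opening. A minimal sketch, with an invented allowlist of hosts:

```python
from urllib.parse import urlparse

# Hypothetical hosts a scanning app chooses to trust
TRUSTED_HOSTS = {"menu.example-restaurant.com", "pay.example-bank.com"}

def is_safe_qr_target(url: str) -> bool:
    """Accept only HTTPS links to explicitly allowlisted hosts;
    a tampered QR sticker pointing elsewhere is rejected."""
    parsed = urlparse(url)
    return parsed.scheme == "https" and parsed.hostname in TRUSTED_HOSTS

print(is_safe_qr_target("https://menu.example-restaurant.com/table/4"))  # True
print(is_safe_qr_target("http://evil.test/phish"))                       # False
```

Allowlisting is restrictive by design; a consumer app would more realistically show the decoded URL and warn before navigating, but the untrusted-input framing is the same.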
The age of the Everywhere Enterprise
One overarching message came through clearly in our conversations with customers: the enterprise changed for the longer term in 2020, and this will have profound effects in 2021. What began as a rushed reaction during a crisis this year will evolve during the next as the IT department joins HR in rethinking employee relationships in the age of the everywhere enterprise.
If 2020 was the year that businesses fell back on the ropes, 2021 will be the one where they bounce forward, moving from a rushed reaction into a thoughtful, measured response.
For many employees, the COVID-19 pandemic brought about something they dreamed of for years: the possibility to eschew long commutes, business attire and (finally!) work from their home.
Companies were forced to embrace the work-from-home switch and many are now starting to like the cost savings and the possibility to hire employees from a wider, non-localized pool of applicants.
But for IT security teams, the switch meant even more work and a struggle to find new ways to keep their organization and employees secure against cyber threats that are growing in number and frequency.
The pressure to deliver security is on
A recent LogMeIn report has also revealed that the transition to remote work for the majority of businesses has impacted the day-to-day work of IT professionals.
Aside from the expected technical tasks and an increased number of web meetings, over half of them have been forced to spend more time managing IT security threats and developing new security protocols. In fact, the percentage of IT professionals who now spend 5 to 8 hours per day on IT security rose from 35% in 2019 to 47% in 2020.
“In terms of defensive tactics, the first two months of the pandemic shifted the previous network-centric thinking to endpoint and remote access. Many firms lacking endpoint detection and response or endpoint protection (next-gen AV) sought to roll out these services across their distributed organization. They also focused on IAM and VPN or SDP services,” Mark Sangster, VP and Industry Security Strategist at eSentire, told Help Net Security.
“The other shift moved thinking from BYOD to BYOH: Bring Your Office Home. Firms were faced with the challenge of securing connections from home offices made through consumer-grade networking gear provided by employee ISPs. These systems are not as hardened as commercial-grade internet devices and were often misconfigured or left in factory settings with default administrative credentials and wide-open Wi-Fi services. This effort required IT teams to help non-technical employees harden their home routers, better understand password security and embrace the necessity for multi-factor authentication and VPNs.”
Solving the security puzzle
Companies’ tech priorities have shifted as well, with many increasing spending for security.
But the need to implement new technology, the widening attack surface, and the onslaught of ransomware-wielding gangs have forced some companies to accept the limits of what they can do with in-house IT security staff and technology, and to seek additional assistance from outside detection and response experts.
The threat of ransomware is insidious and can be particularly destructive, delivering a potentially fatal blow to some (often smaller) organizations.
“Firms need to understand the risks and prepare with proactive defenses (threat hunting), hot-swappable back-ups and fail-over colocation systems. The real trick is catching unauthorized activity quickly, before criminal groups are able to plant ransomware throughout the organization, steal data and then launch a synchronized attack to cripple the organization. This means being able to monitor VPN traffic (connections) and remote administrative activities to detect unauthorized movement,” Sangster explained.
“Criminal groups steal credentials to then access the business using remote tools. This MO is detectable, but it requires proactive hunting and constant monitoring of these services. We have stopped multiple attacks of this nature. In those cases, the ransom attack was either isolated to a single device (and quickly recovered in less than an hour), or it required coordinated defenses to block remote attacks through remote admin tools like Microsoft RDP or PowerShell. In these cases, machine learning flagged suspicious activity for further investigation by security analysts. This quick response meant dwell time was only minutes and prevented the criminal gang’s ransomware from metastasizing throughout the organization.”
A rise in consumer digital traffic has corresponded with a rise in fraud attacks, Arkose Labs reveals. As the year progresses and more people than ever are online, historically ‘normal’ online behavioral patterns are no longer applicable and holiday levels of digital traffic continue to occur on a near daily basis.
Fraudsters are exploiting old fraud modeling frameworks that fail to take today’s realities into account, attempting to blend in with trusted traffic and carry out attacks undetected.
“As the world becomes increasingly digital as a result of COVID-19, fraudsters are deploying an alarming volume of attacks, and continually devising new and more sophisticated ways of carrying out their attacks,” said Vanita Pandey, VP of Marketing and Strategy at Arkose Labs.
“The high fraud levels that accompany high traffic volumes are likely here to stay, even after the pandemic ends. It’s crucial that businesses are aware of the top attack trends so that they can be more vigilant than ever to successfully identify and stop fraud over the long-term.”
Bot attacks and credential stuffing skyrocket
Q3 2020 saw the highest levels of bot attacks on record: 1.3 billion attacks were detected in total, with 64% occurring on logins and 85% emanating from desktop computers.
Due to the widespread availability of usernames, email addresses and passwords from years of data breaches, as well as easy access to automated tools to carry out attacks at scale, credential stuffing emerged as a main driver of attack traffic. 770 million automated credential stuffing attacks were detected and stopped by Arkose Labs in Q3.
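Credential stuffing at this scale leaves a blunt statistical trace: many failed logins from the same source across distinct accounts. A toy per-source failure counter (events and IPs below are invented, using documentation address ranges):

```python
from collections import Counter

def flag_stuffing_sources(login_events, threshold=5):
    """login_events: iterable of (source_ip, success) pairs.
    Flag sources whose failed-login count reaches the threshold."""
    failures = Counter(ip for ip, ok in login_events if not ok)
    return {ip for ip, n in failures.items() if n >= threshold}

# Invented traffic: one noisy source, one legitimate user
events = [("203.0.113.9", False)] * 7 + [("198.51.100.2", True)]
print(flag_stuffing_sources(events))  # {'203.0.113.9'}
```

Real defenses layer this with per-account velocity checks, device fingerprinting, and breached-credential lookups, since attackers rotate IPs to stay under any single-source threshold.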
For ecommerce, every day is Black Friday
The rise in digital traffic for most of 2020 means businesses have been dealing with holiday season levels of traffic since March. With every day now resembling Black Friday, some retailers are better equipped to handle the onslaught of holiday season traffic and fraud.
However, it remains to be seen if a holiday sales bump will occur this year, given already record high traffic levels for many ecommerce businesses.
While much of 2019 saw a marked shift from automated attacks to human sweatshop-driven attacks, automated attacks dominated much of 2020, with Q3 seeing a particularly high spike. This trend is likely to revert back to more targeted attacks in Q4, as during the holiday shopping season fraudsters typically employ low-cost attackers to commit attacks that require human nuance and intelligence.
Europe emerges as the top attacking region
Nearly half of all attacks in Q3 of 2020 originated from Europe, with over 10 million sweatshop attacks coming from Russia and 7 million coming from the United Kingdom.
Many European countries, such as the United Kingdom, France, Italy and Germany, are among those whose GDP has shrunk the most since the global pandemic began. A surge in attacks from nations suffering the biggest dips in economic output highlights the economic drivers that spur fraud.
Pandey said, “COVID-19 has sent the world into turmoil, upending digital traffic patterns and introducing long-lasting consequences. Habits formed during 2020 – namely conducting commerce, school, work and even socializing entirely online – will be difficult to let go of, so fraud teams must be capable of quickly cutting through digital traffic noise and spotting even the most subtle signs of attacks. In particular, using targeted friction to deter malicious activity will be key in the months and years ahead.”
A majority of audit and risk professionals believe the risk environment will continue to be dynamic and unpredictable in 2021, rather than returning to more stable pre-pandemic conditions, an AuditBoard survey finds.
The top risk they cited for the coming year was of “economic conditions impacting growth,” followed closely by “cybersecurity threats.”
The responses also illustrate the long-term changes audit and risk professionals will experience in their roles as a result of the pandemic, and how crucial those individuals will be in helping organizations overcome risk challenges despite gaps in enterprise risk management (ERM) programs.
A permanent shift to remote work
One of the biggest challenges the COVID-19 pandemic has created for audit, risk, and compliance professionals is the sudden shift to remote work. Performing audit and risk management tasks in a remote environment is a significant challenge without the aid of modern, collaborative technology.
Recent Institute of Internal Auditors (IIA) polls suggest roughly three-quarters of audit teams are without a modern audit technology solution today. However, when asked by AuditBoard about the future of work, 59% of respondents said they expect their team will work remotely for all or part of the workweek once quarantines lift.
7.5% said they expect their team will work 100% remote on a permanent basis. This shift to remote work presents a major operational challenge for audit, risk, and compliance teams.
“Conditions this year have changed drastically due to the pandemic, and audit, risk, and compliance organizations have had to act quickly to adapt to the dynamic risk environment while maintaining operational continuity,” said John Reese, SVP of Marketing, AuditBoard.
“AuditBoard survey responses overwhelmingly showcase how quickly the workplace mindset is shifting, and how important modern audit, risk, and compliance technology has become to support a more remote and connected future.”
Businesses face dynamic risk environment
Respondents were asked questions about the risks their businesses face as a result of the pandemic and looking forward. Responses reveal an evolving risk landscape with a variety of different business priorities.
- 81% of respondents said “risk will continue to be dynamic and unpredictable” in 2021 and beyond.
- When asked what they see as the most pressing risk facing their businesses in 2021, 27.6% of respondents said, “economic conditions impacting growth,” more than one-quarter (27%) said, “cybersecurity threats,” and 12.8% said “business continuity and crisis response.”
“Audit and risk professionals expect the 2021 business risk environment to be unpredictable,” continued Reese.
“Specifically, they are most concerned with the potential risk of economic conditions, cybersecurity threats, and business continuity as their organizations are faced with a fast-changing external environment. Technology like AuditBoard will be a crucial enabler as organizations strive to understand and manage these risks at scale, and stay a step ahead.”
Amid changing strategies, risk management programs often lacking
The pandemic has shifted risk management strategies for most organizations, but many organizations still lack a mature ERM program.
- 79.5% have either made moderate changes (43.1%), redirected strategy in certain areas (29.3%), or made significant broad-ranging changes (7.1%) to their risk management program since the start of the pandemic.
- Despite these measures for managing the changing risk landscape, just 16.1% reported having a “robust ERM program” that impacts daily decision making and internal audit planning.
Audit teams becoming a core part of business response to risk
Responses from survey questions directed specifically at audit attendees show how auditors are becoming an increasingly relied-upon asset for organizations as they navigate these risks.
- 55% replied that they agree or strongly agree that internal audit teams are involved with discussions of risk and potential responses to the crisis.
- The same sample of respondents was also asked how COVID-19 will change communications between audit teams and the rest of the organization. 44% said that communications with audit committees will increase moving forward.
- In a separate conference session, 84.3% of respondents replied that they are somewhat or very likely to expand risk assessment to new areas or processes and add new controls to mitigate additional risks as a result of the pandemic.
On Blockchain Voting
Blockchain voting is a spectacularly dumb idea for a whole bunch of reasons. I have generally quoted Matt Blaze:
Why is blockchain voting a dumb idea? Glad you asked.
- It doesn’t solve any problems civil elections actually have.
- It’s basically incompatible with “software independence”, considered an essential property.
- It can make ballot secrecy difficult or impossible.
I’ve also quoted this XKCD cartoon.
But now I have this excellent paper from MIT researchers:
“Going from Bad to Worse: From Internet Voting to Blockchain Voting”
Sunoo Park, Michael Specter, Neha Narula, and Ronald L. Rivest
Abstract: Voters are understandably concerned about election security. News reports of possible election interference by foreign powers, of unauthorized voting, of voter disenfranchisement, and of technological failures call into question the integrity of elections worldwide. This article examines the suggestions that “voting over the Internet” or “voting on the blockchain” would increase election security, and finds such claims to be wanting and misleading. While current election systems are far from perfect, Internet- and blockchain-based voting would greatly increase the risk of undetectable, nation-scale election failures. Online voting may seem appealing: voting from a computer or smart phone may seem convenient and accessible. However, studies have been inconclusive, showing that online voting may have little to no effect on turnout in practice, and it may even increase disenfranchisement. More importantly: given the current state of computer security, any turnout increase derived from Internet- or blockchain-based voting would come at the cost of losing meaningful assurance that votes have been counted as they were cast, and not undetectably altered or discarded. This state of affairs will continue as long as standard tactics such as malware, zero days, and denial-of-service attacks continue to be effective. This article analyzes and systematizes prior research on the security risks of online and electronic voting, and shows that not only do these risks persist in blockchain-based voting systems, but blockchains may introduce additional problems for voting systems. Finally, we suggest questions for critically assessing security risks of new voting system proposals.
COVID-19 and the subsequent global recession have thrown a wrench into IT spending. Many enterprises have placed new purchases on hold. Gartner recently projected that global spending on IT would drop 8% overall this year — and yet dollars allocated to cloud-based services are still expected to rise by approximately 19%, bucking that downward trend.
Underscoring the relative health of the cloud market, IDC reported that all growth in traditional tech spending will be driven by four platforms over the next five years: cloud, mobile, social and big data/analytics. Their 2020-2023 forecast states that traditional software continues to represent a major contribution to productivity, while investments in mobile and cloud hardware have created new platforms which will enable the rapid deployment of new software tools and applications.
With entire workforces suddenly going remote all over the world, there certainly are a number of specific business problems that need to be addressed, and many of the big issues involve VPNs.
Assault on VPNs
Millions of employees are working from home, and they all have to securely access their corporate networks. The vast majority of enterprises still rely on on-premises servers to some degree (estimates range from 60% to 98%), so VPNs play a vital role in connecting those employees to the network. This comes at a cost, though: bandwidth is gobbled up, slowing network performance, sometimes to a crippling level, and this has repercussions.
Maintenance of the thousands of machines and devices connected to the network gets sacrificed. The deployment of software, updates and patches simply doesn’t happen with the same regularity as when everyone works on-site. One reason for this is that content distribution (patches, applications and other updates) can take up much-needed bandwidth, and as a result, system hygiene gets sacrificed for the sake of keeping employees productive.
Putting off endpoint management, however, exposes corporate networks to enormous risks. Bad actors are well aware that endpoints are not being maintained at the same level as pre-pandemic, and they are more than willing to take advantage. Recent stats show that the volume of cyberattacks today is pretty staggering — much higher than prior to COVID-19.
Get thee to the cloud: Acceleration of modern device management
Because of bandwidth concerns, the pressure to trim costs, and the need to maintain machines in new ways, many enterprises are accelerating their move to the cloud. The cloud offers a lot of advantages for distributed workforces while also reducing costs. But digital transformation and the move to modern device management can’t happen overnight.
Enterprises have invested too much time, money, physical space and human resources to just walk away. Not to mention, on-premises environments have been highly reliable. Physical servers are one of the few things IT teams can count on to just work as intended these days.
Hybrid environments offer a happy medium. With the latest technology, enterprises can begin migrating to the cloud and adapt to changing conditions, meeting the needs of distributed teams. They can also save some money in the process. At the same time, they don’t have to completely abandon their tried-and-true servers.
Solving specific business problems: Content distribution to keep systems running
But what about those “specific business problems,” such as endpoint management and content distribution? Prior to COVID-19, this had been one of the biggest hurdles to digital transformation. It was not possible to distribute software and updates at scale without negatively impacting business processes and without excessive cost.
The issue escalated with the shift to remote work. Fortunately, technology providers have responded, developing solutions that leverage secure and efficient delivery mechanisms, such as peer-to-peer content distribution, that can work in the cloud. Even in legacy environments, vast improvements have been made to reduce bandwidth consumption.
These solutions allow enterprises to transition from a traditional on-premises infrastructure to the cloud and modern device management at their own speed, making their company more agile and resilient to the numerous risks they encounter today. Breakthrough technologies also support multiple system management platforms and help guarantee endpoints stay secure and updated even if corporate networks go down – something that, given the world we live in today, is a very real possibility.
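The peer-to-peer delivery mechanism these solutions rely on can be sketched in a few lines: content is split into hashed chunks, and any endpoint can fetch and verify a chunk from any peer that already holds it, instead of every machine pulling the full payload from a central server. This is a toy illustration, not any vendor's actual protocol; the chunk size and the dict-based peer interface are invented for the example.

```python
import hashlib

CHUNK_SIZE = 4  # tiny for illustration; real systems use KB- or MB-sized chunks

def make_manifest(payload: bytes):
    """Split a payload into chunks and hash each one, so any peer
    can verify a chunk received from any other peer."""
    chunks = [payload[i:i + CHUNK_SIZE] for i in range(0, len(payload), CHUNK_SIZE)]
    return [(hashlib.sha256(c).hexdigest(), c) for c in chunks]

def fetch_from_peers(manifest, peers):
    """Rebuild the payload by asking peers for chunks by hash and
    verifying each chunk against the manifest before accepting it."""
    out = b""
    for digest, _ in manifest:
        for peer in peers:  # any peer holding a valid chunk will do
            chunk = peer.get(digest)
            if chunk is not None and hashlib.sha256(chunk).hexdigest() == digest:
                out += chunk
                break
    return out
```

Because every chunk is verified against its hash, a corrupted or malicious peer cannot silently alter the content, which is what makes this delivery model workable over untrusted home networks.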
Companies like Garmin and organizations such as the University of California San Francisco joined the unwitting victims of ransomware attacks in recent months. Their systems were seized, only to be released upon payment of millions of dollars.
While there is the obvious hard cost involved, there are severe operational costs as well: employees can’t get on the network to do their jobs, and systems must be scanned, updated and remediated to ensure the network isn’t further compromised. A lot has to happen within a short period of time in the wake of a cyberattack to get people back to work as quickly and safely as possible.
Fortunately, with modern cloud-based content distribution solutions, all that is needed for systems to stay up is electricity and an internet connection. Massive redundancy is being built into the design of products to provide extreme resilience and help ensure business continuity in case part or all of the corporate network goes down.
The newest highly scalable, cloud-enabled content distribution options enable integration with products like Azure CDN and Azure Storage and also provide a single agent for migration to modern device management. With features like cloud integration, internet P2P, and predictive bandwidth harvesting, enterprises can leverage a massive amount of bandwidth from the internet to manage endpoints and ensure they always stay updated and secure.
Given these new developments precipitated and accelerated by COVID-19, as well as the clear, essential business problem these solutions address, expect to see movement and growth in the cloud sector. Expect to see an acceleration of modern device management, and despite IT spending cuts, expect to see a better, more secure and reliable, cost efficient, operationally efficient enterprise in the days to come.
Seventy-three percent of health system, hospital and physician organizations report their infrastructures are unprepared to respond to attacks. The survey estimates that 1,500 healthcare providers are vulnerable to data breaches of 500 or more records, a 300 percent increase over this year.
Black Book Market Research surveyed 2,464 security professionals from 705 provider organizations to identify gaps, vulnerabilities and deficiencies that persist in keeping hospitals and physicians proverbial sitting ducks for data breaches and cyberattacks.
Ninety-six percent of IT professionals agreed that data attackers are outpacing their medical enterprises, leaving providers at a disadvantage in responding to vulnerabilities.
With the healthcare industry estimated to spend $134 billion on cybersecurity from 2021 to 2026 ($18 billion in 2021, increasing 20% each year to nearly $37 billion in 2026), 82% of health system CIOs and CISOs surveyed in Q3 2020 agreed that the dollars spent so far, much of them allocated before their tenure, have not been used effectively: the money is often spent only after breaches, and without a full gap assessment of capabilities led by senior management outside of IT.
Talent shortage for cybersecurity pros continues
Additionally, 291 healthcare industry human resources executives were surveyed to determine the organizational supply and demand of experienced cybersecurity candidates. On average, cybersecurity roles in health systems take 70% longer to fill than other IT jobs.
Health systems are struggling to fill positions that require cybersecurity-related skills: HR respondents report that these vacancies stay open for about 118 days on average, nearly three times the national average for other industries.
“The talent shortage for cybersecurity experts with healthcare expertise is nearing a very perilous position,” said Brian Locastro, lead researcher on the 2020 State of the Healthcare Cybersecurity Industry study by Black Book Research.
Seventy-five percent of the sixty-six health system CISOs responding agreed that experienced cybersecurity professionals are unlikely to choose a healthcare industry career path for one main reason: more than in other industries, healthcare CISOs are ultimately held responsible for a data breach and its financial and reputational impacts on the provider organization, despite having extremely limited technology or policy decision-making authority.
COVID-19 has greatly increased risk of data breaches
Healthcare cybersecurity has become more complicated as providers are forced to deal with the COVID-19 pandemic. Understaffed and underfunded IT security departments are scrambling to accommodate the surge in demand of remote services from patients and physicians while simultaneously responding to the surge in security risks.
The survey found that 90% of health system and hospital employees who shifted to working from home due to the pandemic did not receive any updated guidelines or training on the increased risk that accessing sensitive patient data remotely could compromise systems.
“Despite the rising threat, the vast majority of hospitals and physicians are unprepared to handle cybersecurity threats, even though they pose a major public health problem,” said Locastro.
Even in 2020, forty percent of all clinical hospital employees receive little or no cybersecurity awareness training beyond their initial education on login access.
Fifty-nine percent of health system CIOs surveyed are shifting security strategies to address user authentication and access, which malicious actors made their go-to entry point into health systems in 2020.
Stolen and compromised credentials were ongoing issues for 53% of health systems surveyed, and hackers are increasingly using cloud misconfigurations to breach networks.
Cybersecurity consulting and advisory services are in high demand
Sixty-nine percent of 219 C-Suite respondents state their health system’s budget for cybersecurity consulting is increasing in 2021 to assess gaps, secure network operations, and user security on-premises and in the cloud.
“In today’s highly competitive cybersecurity market there isn’t enough talent to staff hospitals and health systems,” said Locastro.
“As provider organizations struggle to recruit, hire and retain in-house staff, the plausible choice is to retain an experienced advisory firm capable of identifying and remediating hidden security vulnerabilities, which appeals to the strategic and economic sense of boards and CEOs.”
Healthcare cybersecurity challenges find resolutions from outsourced services
“The dilemma with cybersecurity budgeting and forecasting is the lack of reliable historical data,” said Locastro. “Cybersecurity is a newer line item for hospitals and physician enterprises and budgets have not evolved to cover the true scope of human capital and technology requirements yet.”
That shortage of healthcare cybersecurity professionals, along with a lack of appropriate technology solutions in place, is forcing a rush to acquire services and outsourcing at five times the pace of acquiring cybersecurity products and software solutions.
Cybersecurity companies are responding to the labor crunch by offering healthcare providers and hospitals a growing portfolio of managed services.
“The key place to start when choosing a cybersecurity services vendor is to understand your threat landscape and the types of services vendors offer, and to compare that against your organization’s risk framework to select the best-suited vendor,” said Locastro.
“Healthcare organizations are also more prone to attacks than other industries because they persist at managing through breaches reactively.”
Fifty-one percent of in-house IT management respondents with purchasing authority report their group is not aware of the full variety of cybersecurity solution sets that exist, particularly for mobile security environments, intrusion detection, attack prevention, forensics and testing in various healthcare settings.
Cybersecurity in healthcare provider organizations remains underfunded
Actual spending on healthcare industry cybersecurity products and services is increasing, averaging 21% growth year over year since 2017. Extended estimates project that nearly $140 billion will be spent by health systems and health insurers by 2026.
However, 82% of hospital CIOs in inpatient facilities under 150 staffed beds and 90% of practice administrators collectively state they are not even close to spending an adequate amount on protecting patient records from a data breach.
“Outdated IT systems, fewer cybersecurity protocols, untrained IT staff on evolving security skills, and data-rich patient files are making healthcare the current target of hacker attacks,” said Locastro. “And the willingness of hospitals and physician practices to pay high ransoms to regain their data quickly motivates hackers to focus on patient records.”
“Threats are now four times more likely to be centered on healthcare than any other industry, and ransomware attacks are increasing in popularity because of the amount of privileged information the hacker can obtain,” said Locastro.
“Providers at the point-of-care haven’t kept pace with the cybersecurity progress and tools that manufacturers, IT software vendors, and the FDA have made either.”
Healthcare consumers willing to change providers if patient privacy is compromised
Eighty percent of healthcare organizations have not held a cybersecurity drill with an incident response process, despite the skyrocketing number of data breaches in the healthcare industry in 2020.
Only 14 percent of hospitals and six percent of physician organizations believe that a 2021 assessment of their cybersecurity will show improvement from 2020. Twenty-six percent of provider organizations believe their cybersecurity position has worsened, as compared to three percent in other industries, year-to-year.
“Medical and financial leaders have wielded more influence over organizational budgets and made it difficult for IT management to implement needed cybersecurity practices despite the existing environment, but now consumers are beginning to react negatively to the provider’s lack of protection solutions.”
A poll of 3,500 healthcare consumers who used medical or hospital services in the last eighteen months revealed 93% would leave their provider if their patient privacy were compromised in an attack that could have been prevented.
Researchers at the University of Birmingham have managed to break Intel SGX, a set of security functions used by Intel processors, by creating a $30 device to control CPU voltage.
Break Intel SGX
The work follows a 2019 project, in which an international team of researchers demonstrated how to break Intel’s security guarantees using software undervolting. This attack, called Plundervolt, used undervolting to induce faults and recover secrets from Intel’s secure enclaves.
Intel fixed this vulnerability in late 2019 by removing the ability to undervolt from software with microcode and BIOS updates.
Taking advantage of a separate voltage regulator chip
But now, a team in the University’s School of Computer Science has created a $30 device, called VoltPillager, to control the CPU’s voltage – thus side-stepping Intel’s fix. The attack requires physical access to the computer hardware – which is a relevant threat for SGX enclaves that are often assumed to protect against a malicious cloud operator.
The bill of materials for building VoltPillager is:
- Teensy 4.0 Development Board: $22
- Bus Driver/ Buffer * 2: $1
- SOT IC Adapter * 2: $13 for 6
How to build the VoltPillager board
This research takes advantage of the fact that a separate voltage regulator chip controls the CPU voltage. VoltPillager connects to this unprotected interface and precisely controls the voltage. The research shows that this hardware undervolting can achieve the same results as Plundervolt, and more.
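To make the attack idea concrete, here is a purely illustrative, software-only simulation of the fault-search loop that undervolting attacks like Plundervolt and VoltPillager rely on: drop the voltage step by step, repeat a known computation, and watch for a corrupted result. The fault model below (threshold, probability, single bit flip) is entirely invented for the sketch; real faults come from the silicon, not from code like this.

```python
import random

def multiply_under_voltage(a, b, undervolt_mv, fault_threshold_mv=250):
    """Toy model: below some undervolting threshold the CPU's multiplier
    occasionally returns a bit-flipped product. Entirely invented behavior."""
    product = a * b
    if undervolt_mv >= fault_threshold_mv and random.random() < 0.3:
        product ^= 1 << random.randrange(32)  # inject a single bit flip
    return product

def find_faulting_offset(a=0x1234, b=0x5678, max_offset_mv=400):
    """Sweep increasingly aggressive undervolting until a faulty
    multiplication is observed, as fault-injection attacks do."""
    expected = a * b
    for offset in range(0, max_offset_mv, 10):
        for _ in range(100):  # repeat: real faults are probabilistic
            if multiply_under_voltage(a, b, offset) != expected:
                return offset  # first offset at which computation breaks
    return None
```

Once such a faulting offset is found, the real attacks time the glitch to land during a sensitive SGX computation; here the loop only demonstrates the search strategy.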
Zitai Chen, a PhD student in Computer Security at the University of Birmingham, says: “This weakness allows an attacker, if they have control of the hardware, to breach SGX security. Perhaps it might now be time to rethink the threat model of SGX. Can it really protect against malicious insiders or cloud providers?”
While COVID-19 has created new concerns and deepened traditional challenges for IT, organizations with complete insight and governance of their technology ecosystem are better positioned to achieve their priorities, a Snow Software survey of 1,000 IT leaders and 3,000 workers in the United States, United Kingdom, Germany and Australia reveals.
The challenge of managing risk
In fact, mature technology intelligence – defined as the ability to understand and manage all technology resources – correlated to resilience and growth. Of the IT leaders classified as having mature technology intelligence, 79% were confident in their organization’s ability to weather current events and 100% indicated that innovation continues to be a strategic focus for their organization.
“The complexities, risks and budget concerns IT departments traditionally face have been exacerbated, and a rapid acceleration of digital transformation and cloud adoption has brought new issues to the forefront. Now more than ever, IT leaders need to be in a position to quickly adapt to these macro trends as they define their top technology priorities in 2021.”
Technology management has become increasingly difficult
Many IT leaders indicated increases in technology spend across the board – on software, hardware, SaaS and cloud – over the past 12 months. Faced with more complex ecosystems, it is no surprise that 63% also reported technology management had become more difficult.
As anticipated budget restrictions go into effect for 2021, IT leaders will need to demonstrate the value of their investments and ensure proper governance over their entire technology stack.
Improved employee perception of IT
Employee perception of IT has improved, but differing perceptions on technology management and procurement hint at potential issues. While 41% of workers believe that access to technology has improved, there remains a 22-point gap between IT leaders and employees on how easy it is to purchase software, applications or cloud services.
This is not the only area where IT leaders and workers have varying views. Though they agree that security is the number one issue caused by unmanaged and unaccounted for technology, awareness of additional issues drops dramatically after that, with 16% of workers believing it causes no business issues whatsoever.
The data suggests continued challenges ahead for organizations as they try to reduce risk across the board.
Vendor audits a looming but potentially underestimated risk in 2021
87% of IT leaders said they had been audited by a software vendor over the last 12 months.
The vendors that audited the most were Microsoft, IBM, Oracle, Adobe and SAP. Yet only 51% said they were concerned about audits over the next 12 months, an answer that varied wildly based on geography – 81% of US leaders said they were concerned compared to just 30% in Germany and 42% in the UK.
Based on 2020 trends as well as vendor behavior following the 2008 recession, it appears European IT leaders are significantly underestimating this risk.
Organizations’ top IT priorities
Organizations’ top IT priorities are inherently at odds with each other and often align with the IT department’s biggest challenges. IT leaders reported that their organization’s top priorities in 2020 were adopting new technologies (38%), reducing security risks (38%), and reducing IT spend (38%).
These paralleled the biggest challenges IT leaders faced over the past 12 months: managing cybersecurity threats (43%), implementing new technologies (40%) and supporting remote work (39%). Juggling these conflicting and difficult priorities became even more complicated in light of COVID-19.
Few meeting the bar for mature technology intelligence
Strong technology intelligence enabled IT leaders to more effectively tackle their top priorities and challenges. Just 14% of IT leaders met the bar for mature technology intelligence. This elite group outpaced other respondents in their ability to support digital transformation, reduce risk, enable employees and control spend.
“As we collectively look ahead to 2021, it’s more important than ever that CIOs and IT leaders strike the right balance between managing risk and remaining agile in the face of continued unpredictability,” said Pooley.
“It is clear from the data that a comprehensive understanding of technology resources and the ability to manage them is a key differentiator. IT leaders can use the insights to endure challenging periods like the pandemic, as well as embrace innovation to drive future growth and resilience.”
Although only 33% of organizations are currently using a dedicated digital experience monitoring solution today, nearly half of IT leaders are now likely to invest in these solutions as a result of the events of 2020, a NetMotion survey reveals.
Digital experience monitoring
In addition, the research revealed that tech leaders tend to overestimate the positive experience of remote workers – with IT estimating the quality of the remote working experience to be 21% higher than actual remote workers rated it.
“The past eight months have revealed fundamental blind spots in the way many IT teams have traditionally monitored the digital experiences of remote workers,” said Christopher Kenessey, CEO of NetMotion.
“Digital experience monitoring is emerging as the next crucial addition to IT’s toolbox in today’s remote working world, where IT no longer owns the networks that employees are using. Simply put, our research confirms that IT teams can’t fix what they can’t see.”
Remote work causing more technology issues, IT is hard-pressed to solve them
Since the beginning of COVID-19, nearly 75% of organizations have seen an increase in support tickets from remote workers, with 46% reporting a moderate increase and 29% reporting a large increase in workload, according to the survey. This extra burden is straining already stretched IT teams.
Further, from an IT, tools and technology perspective, 48% of workers prefer the experience of working in the office. That may be because IT has a harder time diagnosing employee tools and technology challenges outside of controlled office settings.
According to the survey, over 25% of IT teams admit struggling to diagnose the root cause of remote worker issues, and ensuring reliable network performance was cited as the top challenge for IT leaders surveyed, with 46% reporting the problem.
Joining these issues, IT leaders listed the following challenges encountered this year:
- Software and application issues (43%)
- Remote worker cybersecurity (43%)
- Hardware performance and configuration (38%)
Strained IT-employee relationship
The survey also revealed that the new remote work dynamic may be straining the IT-employee relationship, with remote workers not fully trusting IT to provide the help they need.
While 45% of remote workers say their IT department values employee feedback, 26% of employees said they didn’t feel that their feedback would change anything, and 29% were undecided.
Furthermore, while 66% of remote workers reported encountering an IT issue while working remotely, many are not sharing their issues with IT. In fact, 58% of remote workers said that they had encountered IT issues while working remotely but did not share them with their IT team, and of the issues they reported to IT, only 46% were actually resolved.
“As everyone has gravitated towards a ‘work from anywhere’ status, IT teams have struggled to support employees. Workers are accessing a wider variety of resources from countless unknown networks, reducing visibility and making it exponentially more difficult for IT to diagnose the root cause of technology failures,” Kenessey said.
“Sadly, our research showed that nearly a quarter of remote workers would rather suffer in silence than engage tech teams. Without dedicated tools to monitor the experience of remote and mobile workers, IT teams are at a disadvantage when diagnosing and resolving technology challenges, and that’s putting greater strain on the IT-business relationship.”
It was an accomplishment for the ages: within just a couple of days, IT departments hurriedly provided millions of newly homebound employees online access to the data and apps they needed to remain productive.
Some employees were handed laptops as they left the building, while others made do with their own machines. Most connected to their corporate services via VPNs. Other companies harnessed the cloud through software and infrastructure services (SaaS, IaaS).
Bravo, IT! Not only did it all work, but businesses and employees alike saw the very real benefits of remote life, and that egg is not going back into the shell. Many won’t return to those offices and will continue working from home.
But while immediate access challenges were answered, this was not a long-term solution.
Let’s face it, because of the pandemic a lot of companies were caught off guard with insufficient plans for data protection and disaster recovery (DR). That isn’t easy in the best of times, never mind during a pandemic. Even those with effective strategies now must revisit and update them. Employees have insufficient home security. VPNs are difficult to manage and provision, perform poorly and are hard to scale. And, IT’s domain is now stretched across the corporate data center, cloud (often more than one), user endpoints and multiple SaaS providers.
There’s a lot to do. A plan that fully covers DR, data protection and availability is a must.
There are several strategies for protecting endpoints. First off, if employees are using company-issued machines, there are many good mobile device management products on the market. Sure, setting up clients for a volume of these will be a laborious task, but you’ll have peace of mind knowing data won’t go unprotected.
Another strategy is to create group policies that map the Desktop and My Documents folders directly to the cloud file storage of your choice, whether that’s Google Drive, OneDrive, Dropbox or some other solution. That can simplify file data protection, but its success hinges on the employee storing documents in the right place; anything saved outside those mapped folders won’t be protected.
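As a rough illustration of the folder-mapping idea, the sketch below mirrors new or modified files from a user's working folders into a directory that a cloud sync client would pick up. The paths and the one-way copy policy are assumptions for the example; a real deployment would use group policy folder redirection or the sync client's own known-folder feature rather than a script like this.

```python
from pathlib import Path
import shutil

def mirror_to_cloud(user_dirs, cloud_root):
    """Copy any new or modified files from the user's working folders
    into a folder that a cloud client (OneDrive, Google Drive, etc.)
    keeps synced. Illustrative only, not a real GPO mechanism."""
    cloud_root = Path(cloud_root)
    copied = []
    for src_dir in map(Path, user_dirs):
        for src in src_dir.rglob("*"):
            if not src.is_file():
                continue
            dest = cloud_root / src_dir.name / src.relative_to(src_dir)
            # Copy only if missing or older than the source copy
            if not dest.exists() or dest.stat().st_mtime < src.stat().st_mtime:
                dest.parent.mkdir(parents=True, exist_ok=True)
                shutil.copy2(src, dest)  # preserves timestamps, so reruns skip
                copied.append(dest)
    return copied
```

Note that this has exactly the weakness described above: a file the user keeps anywhere other than the listed folders never reaches the cloud copy.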
And right there is the rub with protecting employee machines – employees are going to store data on these devices. Often, insecure home Internet connections make these devices and data vulnerable. Further, if you add backup clients and/or software to employee-owned machines, you could encounter some privacy resistance.
Remote desktops can provide an elegant solution. We’ve heard “this is the year of virtual desktop infrastructure (VDI)” for over a decade. It’s something of a running joke in IT circles, but you know what? The current scenario could very well make this the year of remote desktops after all.
VDI performance in more sophisticated remote desktop solutions has greatly improved. With a robust platform configured properly, end-users can’t store data on their local machines – it’ll be safely kept behind a firewall with on-premises backup systems to protect and secure it.
Further, IT can set up virtual desktops to prevent cut and paste to the device. And because many solutions don’t require a client, it doesn’t matter what machine an employee uses – just make sure proper credentials are needed for access and include multi-factor authentication.
Pain in the SaaS
As if IT doesn’t have enough to worry about, there’s a potential SaaS issue that can cause a lot of pain. Most providers operate under the shared responsibility model. They secure infrastructure, ensure apps are available and data is safe in case of a large-scale disaster. But long-term, responsibility for granular protection of data rests on the shoulders of the customer.
Unfortunately, many organizations are unprepared. A January 2020 survey from OwnBackup of 2,000 Salesforce users found that 52% are not backing up their Salesforce data.
What happens if someone mistakenly deletes a Microsoft Office 365 document vital for a quarterly sales report and it’s not noticed for a while? Microsoft automatically empties recycle bin data after 30 days, so unless there’s a backup in place, it’s gone for good.
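The arithmetic behind that risk is trivial but worth spelling out; the retention constant below reflects the 30-day default mentioned above, while the function name is just for illustration:

```python
from datetime import datetime, timedelta

# Default Microsoft 365 recycle-bin purge window described in the text
RECYCLE_BIN_RETENTION = timedelta(days=30)

def is_recoverable(deleted_at: datetime, now: datetime) -> bool:
    """True if a deleted document can still be restored from the recycle
    bin; past the retention window it is gone unless backed up elsewhere."""
    return now - deleted_at <= RECYCLE_BIN_RETENTION
```

A document deleted at the start of a quarter and only missed during quarter-end reporting falls well outside that window, which is exactly the scenario a separate SaaS backup covers.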
Backup vendors provide products to protect data in most of the more common SaaS services, but if there’s no data protection solution for one your organization is using, make data protection part of the service provider’s contract and insist they regularly send you copies of your data.
When it comes to a significant disaster, highly distributed environments can make recovery difficult. The cloud seems like a clear choice for storing DR and backup data, but while the commodity cloud providers make it easy and cheap to upload data, costs for retrieval are much higher. Also, remember that cloud recovery is different from on-prem, requiring expertise in areas like virtual machines and user access. And, if IT is handling cloud directly and has issues, keep in mind that it could be very difficult getting support.
During a disaster, you want to recover fast; you don’t want to be creating a backup and DR strategy as the leadership grits their teeth due to downtime. So, set your data protection strategy now, be sure each app is included, follow all dependencies and test over and over again. Employees and data may be in varied locations, so be sure you’re completely covered so your company can get back in the game faster.
While IT pulled off an amazing feat handling a rapid remote migration, to ensure your company’s future, you need to be certain it can protect data, even outside of the corporate firewall. With a backup and DR strategy for dispersed data in place, you’ll continue to be in a position to make history, instead of fading away.
Businesses are increasingly moving multiple applications to the cloud using containers, with Kubernetes for orchestration, according to Zettaset.
However, findings also confirm that organizations are inadequately securing the data stored in these new cloud-native environments and continue to leverage existing legacy security technology as a solution.
Businesses are faced with significant IT-related challenges as they strive to keep up with the demands of digital transformation. Now more than ever to maintain a competitive edge, companies are rapidly developing and deploying new applications.
Companies must invest in high performance data protection
The adoption of containers, microservices and Kubernetes for orchestration play a significant role in these digital acceleration efforts. And yet, while many companies are eager to adopt these new cloud-native technologies, research shows that companies are not accurately weighing the benefits of enterprise IT innovation against the inherent security risks.
“Our goal with this research was to determine whether enterprise organizations who are actively transitioning from DevOps to DevSecOps are investing in proper security and data protection technology. And while findings confirm that companies are in fact making the strategic decision to shift towards cloud-native environments, they are currently ill-equipped to secure their company’s most critical asset: data.
“Companies must invest in high performance data protection so as to secure critical information in real-time across any architecture.”
- Organizations are embracing the cloud and cloud-native technologies: 39% of respondents have multiple production applications deployed on Kubernetes. But, companies are still struggling with the complexities associated with these environments and how to secure deployments.
- Cloud providers wield considerable influence with regard to Kubernetes distribution: A little over half of those surveyed are using open source Kubernetes available through the Cloud Native Computing Foundation (CNCF). And 34.7% of respondents are using a Kubernetes offering managed by an existing cloud provider such as AWS, Google, Azure, and IBM.
- Kubernetes security best practices have yet to be identified: 60.1% of respondents believe there is a lack of education and awareness about the proper ways to mitigate risk associated with storing data in cloud-native environments. And 43.2% are confident that the introduction of Kubernetes creates multiple vulnerable attack surfaces.
- Companies have yet to evolve their existing security strategies: Almost half of respondents (46.5%) are using traditional data encryption tools to protect their data stored in Kubernetes clusters. Over 20% are finding that these traditional tools are not performing as desired.
“The results of our research substantiate the notion that enterprise organizations are moving forward with cloud-native technologies such as containers and Kubernetes. What we were most interested in discovering was how these companies are approaching security,” said Charles Kolodgy, security strategist and author of the report.
“Companies overall are concerned about the wide range of potential attack surfaces. They are applying legacy solutions but those are not designed to handle today’s ever-evolving threat landscape, especially as data is being moved off-premise to cloud-based environments.
“To stay ahead of what’s to come, companies must look to solutions purposely built to operate in a Kubernetes environment.”
As the Internet of Things becomes more and more part of our lives, the security of these devices is imperative, especially because attackers have wasted no time and are continuously targeting them.
Chen Ku-Chieh, an IoT cyber security analyst with the Panasonic Cyber Security Lab, is set to talk about the company’s physical honeypot and about the types of malware they managed to discover through it at HITB CyberWeek on Wednesday (October 18).
In the meantime, we had some questions for him:
Global organizations are increasingly experiencing IoT-focused cyberattacks. What is the realistic worst-case scenario when it comes to such attacks?
The use of IoT is increasingly widespread, from home and office IoT to factory IoT, and the use of automation equipment is growing. The most realistic worst-case scenario, therefore, is an attack on IIoT devices that affects critical infrastructure equipment, such as industrial control systems (ICS).
By attacking IIoT, hackers can disrupt the operation of ICSes, resulting in large-scale damage. Furthermore, protecting medical IoT devices is also important: hacked pacemakers, insulin pumps, etc. can affect human lives directly.
What are the main challenges when it comes to vulnerability research of IoT devices?
The main challenge is expanding from IoT devices to IoT systems: IoT systems consist of various components, most with different software/firmware, hardware, etc. The discovery of vulnerabilities in IoT devices requires expertise in many fields – researchers need to know a lot about chips, applications, communication protocols, network protocols, operating systems, cloud services, and so on.
What advice would you give to an enterprise CISO that wants to make sure the connected devices in use in the organization are as secure as possible?
To start, CISOs should check whether the vendors of the products they plan to use care about product security. How do they deal with vulnerabilities? Do they have a PSIRT? Do they have a point of contact for vulnerability reports? And so on.
Once they settle on a product to use, they should make sure that best practices – e.g., safely configuring the device, applying security updates in a timely manner – are part of the internal processes. They should also check the security of the services the devices use, e.g., network services used by an IP camera. Finally, network defenses should be structured to effectively control the access rights of the various networked devices in the environment.
How do you expect the security of IoT devices to evolve in the near future?
As we move forward, governments will attempt to create security baselines with regulations and certifications (labelling schemes). New security standards for various sectors (automotive, aviation – to name a few) will also be created.
As IoT products use similar network security protocols or hardware components, IoT security will no longer be a unilateral effort by the manufacturers. In the future, manufacturers, suppliers of parts, security organizations and governments will cooperate more closely, and even achieve mutual defense alliances to ensure effective and immediate protection.
Nuspire released a report, outlining new cybercriminal activity and tactics, techniques and procedures (TTPs) throughout Q3 2020, with additional insight from Recorded Future.
Threat actors becoming even more ruthless
The report demonstrates threat actors becoming even more ruthless. Throughout Q3, hackers shifted focus from home networks to overburdened public entities, including the education sector and the Election Assistance Commission (EAC). Malware campaigns, like Emotet, utilized these events as phishing lure themes to assist in delivery.
“We continue to see attackers use newsjacking and typosquatting techniques to attack organizations with ransomware, especially this quarter with the Presidential election and schools moving to a virtual learning model,” said John Ayers, Nuspire Chief Strategy Product Officer.
“It’s important for organizations to understand the latest threat landscape is changing so they can better prepare for current themes and better understand their risk.”
Increase in malware activity
There has been a significant increase in malware activity over the course of Q3 2020; the 128% increase from Q2 represents more than 43,000 malware variants detected a day.
As Emotet made a significant appearance, new features in Emotet modules were discovered, implying the group will likely continue operations throughout the remainder of the next quarter to successfully gauge the viability of these new features.
“Keeping a vigilant eye on how threats evolve, grow and adapt over time helps us understand how threat actors have been retooling their tactics. It’s more important than ever to consistently have visibility into the threat landscape.”
- The ZeroAccess botnet made another big appearance in Q3. It resurged in Q2, ranking second among the most-used botnets, then went quiet toward the end of the quarter before returning in Q3.
- Office document phishing skyrocketed during the second half of Q3, which could be due to the upcoming election, or because attackers have just finished retooling.
- Ransomware attacks on the automotive industry are on the rise. At the end of Q3 2020, references had already surpassed the 2019 total at 18,307, an increase of 79.15% with Q4 still remaining.
- The H-Worm botnet, also known as Houdini, Dunihi, njRAT, NJw0rm, Wshrat, and Kognito, surged to the top of observed botnet traffic in Q3, as the actors behind it deployed remote access trojans (RATs) using COVID-19 phishing lures and executable names.
Academics at UCL and other institutions have collaborated to develop a machine learning tool that identifies new domains created to promote false information so that they can be stopped before fake news can be spread through social media and online channels.
To counter the proliferation of false information it is important to move fast, before the creators of the information begin to post and broadcast false information across multiple channels.
How does it work?
Anil R. Doshi, Assistant Professor at the UCL School of Management, and his fellow academics set out to develop an early detection system to highlight domains that were most likely to be bad actors. The sites are identified using details contained in the registration information, such as whether the registering party’s identity is kept private.
Doshi commented: “Many models that predict false information use the content of articles or behaviours on social media channels to make their predictions. By the time that data is available, it may be too late. These producers are nimble and we need a way to identify them early.
“By using domain registration data, we can provide an early warning system using data that is arguably difficult for the actors to manipulate. Actors who produce false information tend to prefer remaining hidden and we use that in our model.”
By applying a machine-learning model to domain registration data, the tool was able to correctly identify 92 percent of the false information domains and 96.2 percent of the non-false information domains set up in relation to the 2016 US election before they started operations.
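The exact feature set and model used by the researchers are not public here, so the following is only a minimal sketch of the general approach: train a classifier on registration-time features and flag suspicious domains before they publish anything. The feature names below are hypothetical stand-ins.

```python
import math

# Minimal logistic-regression sketch of classifying domains from registration
# data. The features (WHOIS privacy flag, registration close to the event,
# a registrar risk score) are illustrative, not the study's real features.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logreg(X, y, lr=0.5, epochs=2000):
    """Plain stochastic gradient descent on log loss."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            g = p - yi  # gradient of log loss w.r.t. the logit
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b

def predict(w, b, x):
    """True = flagged as a likely false-information domain."""
    return sigmoid(sum(wj * xj for wj, xj in zip(w, x)) + b) >= 0.5

# Toy training data: [whois_private, registered_close_to_event, registrar_risk]
X = [[1, 1, 0.9], [1, 1, 0.7], [0, 0, 0.1], [0, 1, 0.2], [1, 0, 0.8], [0, 0, 0.3]]
y = [1, 1, 0, 0, 1, 0]
w, b = train_logreg(X, y)
```

The appeal of registration-time features, as Doshi notes, is that they exist before any content does, and hiding ownership, which bad actors prefer, is itself a signal the model can use.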
Why should it be used?
The researchers propose that their tool be used to help regulators, platforms, and policy makers apply an escalated process: increase monitoring of flagged domains, send warnings or impose sanctions, and ultimately decide whether they should be shut down.
The academics behind the research also call for social media companies to invest more effort and money into addressing this problem which is largely facilitated by their platforms.
Doshi continued: “Fake news promoted via social media is common in elections, and it continues to proliferate in spite of the somewhat limited efforts of social media companies and governments to stem the tide and defend against it. Our concern is that this is just the start of the journey.
“We need to recognise that it is only a matter of time before these tools are redeployed on a more widespread basis to target companies; indeed, there is evidence of this already happening.
“Social media companies and regulators need to be more engaged in dealing with this very real issue and corporates need to have a plan in place to quickly identify when they become the target of this type of campaign.”
The research is ongoing in recognition that the environment is constantly evolving and while the tool works well now, the bad actors will respond to it. This underscores the need for constant and ongoing innovation and research in this area.
Multi-factor authentication (MFA) that depends on one of the authentication factors being delivered via SMS and voice calls should be avoided, Alex Weinert, Director of Identity Security at Microsoft, opined.
That’s not to say that MFA should be avoided, though, just that there are safer and more reliable ways to get additional authentication factors.
Why SMS- and voice-based MFA is the least secure option
Last year, Weinert noted that using any form of MFA is better than relying just on a password for security, as it “significantly increases the costs for attackers, which is why the rate of compromise of accounts using any type of MFA is less than 0.1% of the general population.”
But the delivery of authentication factors via publicly switched telephone networks (PSTN) is the least secure of the MFA methods available, he thinks, because:
- The SMS and voice formats aren’t adaptable to user experience expectations, technical advances, and attacker behavior in real-time
- PSTN systems are not 100% reliable, meaning the message or call may not come when needed
- Changing regulations may get in the way of SMS delivery and phone calls
- SMSes and phone calls were designed without encryption and can be intercepted (e.g., via software-defined radios, femtocells, SS7 intercept services, mobile malware, phishing tools)
- Support agents at companies operating publicly switched telephone networks can be tricked, bribed or coerced by attackers into providing access to the victims’ SMS or voice channel (e.g., via SIM swapping)
MFA is a must
The value of multi-factor authentication is not in question, but as more and more users adopt it, attackers will try to come up with new ways to grab the needed OTP authentication codes.
Weinert advised users to, if possible, switch from SMS- and voice-based MFA to app-based authentication. Naturally, he endorsed the Microsoft Authenticator app, but there are other apps that serve the same function (such as Google Authenticator and Cisco’s Duo Mobile) and offer the same protections (encrypted communication, more control, etc.).
There are other MFA options available, and some offer an even greater degree of safety against remote attacks, such as smart cards or security keys – actual physical devices attackers would have to get their hands on in order to gain access to secured accounts.
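The reason app-based authenticators sidestep the PSTN weaknesses above is that they generate time-based one-time passwords (TOTP, RFC 6238) locally on the device, so no code ever travels over an interceptable channel. A minimal stdlib-only sketch of the algorithm:

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, timestep=30, digits=6, now=None):
    """RFC 6238 TOTP: HMAC-SHA1 over the current time-step counter."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if now is None else now) // timestep)
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: the ASCII key "12345678901234567890" (base32-encoded
# below) at T=59 seconds yields the 8-digit code 94287082.
seed = "GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ"
```

Because the shared secret never leaves the device after enrollment, intercepting the delivery channel (the main weakness of SMS and voice) is no longer possible; the remaining risks are phishing of the code itself and theft of the seed.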
ESET researchers have discovered ModPipe, a modular backdoor that gives its operators access to sensitive information stored in devices running ORACLE MICROS Restaurant Enterprise Series (RES) 3700 POS (point-of-sale) – a management software suite used by hundreds of thousands of bars, restaurants, hotels and other hospitality establishments worldwide.
The majority of the identified targets were from the United States.
Containing a custom algorithm
What makes the backdoor distinctive are its downloadable modules and their capabilities, as it contains a custom algorithm designed to gather RES 3700 POS database passwords by decrypting them from Windows registry values.
This shows that the backdoor’s authors have deep knowledge of the targeted software and opted for this sophisticated method instead of collecting the data via a simpler yet “louder” approach, such as keylogging.
Exfiltrated credentials allow ModPipe’s operators access to database contents, including various definitions and configurations, status tables, and information about POS transactions.
“However, based on the documentation of RES 3700 POS, the attackers should not be able to access some of the most sensitive information – such as credit card numbers and expiration dates – which is protected by encryption. The only customer data stored in the clear and thus available to the attackers should be cardholder names,” cautions ESET researcher Martin Smolár, who discovered ModPipe.
“Probably the most intriguing parts of ModPipe are its downloadable modules. We’ve been aware of their existence since the end of 2019, when we first found and analyzed its basic components,” explains Smolár.
- GetMicInfo targets data related to the MICROS POS, including passwords tied to two database usernames predefined by the manufacturer. This module can intercept and decrypt these database passwords, using a specifically designed algorithm.
- ModScan 2.20 collects additional information about the installed MICROS POS environment on the machines by scanning selected IP addresses.
- ProcList’s main purpose is to collect information about the processes currently running on the machine.
“ModPipe’s architecture, modules and their capabilities also indicate that its writers have extensive knowledge of the targeted RES 3700 POS software. The proficiency of the operators could stem from multiple scenarios, including stealing and reverse engineering the proprietary software product, misusing its leaked parts or buying code from an underground market,” adds Smolár.
What can you do?
To keep the operators behind ModPipe at bay, potential victims in the hospitality sector as well as any other businesses using the RES 3700 POS are advised to:
- Use the latest version of the software.
- Use it on devices that run updated operating systems and software.
- Use reliable multilayered security software that can detect ModPipe and similar threats.
For the first time, there’s a year-over-year reduction in the cybersecurity workforce gap, due in part to increased talent entry into the field and uncertain demand due to the economic impact of COVID-19, (ISC)² finds.
The research, conducted from mid-April through June 2020, also provides insights from cybersecurity professionals about their organizations’ COVID-19 pandemic response, and the massive effort required to quickly and securely transition their staffs to remote working environments.
Decrease in the global cybersecurity workforce shortage
The study reveals that the cybersecurity profession experienced substantial growth in its global ranks, increasing to 3.5 million individuals currently working in the field, an addition of 700,000 professionals or 25% more than last year’s workforce estimate.
The research also indicates a corresponding decrease in the global workforce shortage, now down to 3.12 million from the 4.07 million shortage reported last year. Data suggests that employment in the field now needs to grow by approximately 41% in the U.S. and 89% worldwide in order to fill the talent gap, which remains a top concern of professionals.
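The worldwide growth figure follows directly from the two estimates: closing a 3.12 million shortage on top of a 3.5 million workforce requires roughly 89% growth. A quick sanity check of the reported numbers:

```python
# Required growth = remaining shortage / current workforce, using the
# (ISC)2 study's figures: 3.5M currently in the field, 3.12M still needed.
workforce_m = 3.5
shortage_m = 3.12
growth_pct = round(shortage_m / workforce_m * 100, 1)  # roughly 89%
```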
In a historically unprecedented year, the study also focused on how security teams and professionals were impacted by COVID-19. The data shows that 30% of cybersecurity professionals faced a deadline of one day or less to transition their organizations’ staff to remote work and to secure their newly transformed IT environments.
92% of respondents indicated that their organization was “somewhat” or “very” prepared to respond, and just 18% saw security incidents increase during this time.
“The response to COVID-19 by the community and their ability to help securely migrate entire organizational systems to remote work, almost overnight, has been an unprecedented success and a best-case scenario in a lot of ways. Cybersecurity professionals rose to the challenge and solidified their value to their organizations.”
- Job satisfaction rates increased year-over-year, with 75% of respondents saying they are either “somewhat” or “very” satisfied
- The average annual cybersecurity salary is highest in North America at $112,000
- 56% of respondents say their organizations are at risk due to cybersecurity staff shortages
- Cybersecurity practitioners are concerned that security budgets will be impacted by revenue losses related to COVID-19. 54% are concerned about personnel spending while 51% are concerned about technology spending
- 23% said that they or a peer had been laid off as a result of the pandemic
- 78% of cybersecurity professionals who still need to work from an office say they are either “somewhat” or “very” concerned about their personal safety in relation to COVID-19
- Cloud computing security is far and away the most in-demand skillset, with 40% of respondents indicating they plan to develop it over the next two years
- Just 49% of those in the field hold degrees in computer and information sciences, highlighting the fact that many of the professionals responsible for cybersecurity come from other areas of expertise
Organizations underwent an unprecedented IT change this year amid a massive shift to remote work, accelerating adoption of cloud technology, Duo Security reveals.
The security implications of this transition will reverberate for years to come, as the hybrid workplace demands the workforce to be secure, connected and productive from anywhere.
The report details how organizations, with a mandate to rapidly transition their entire workforce to remote, turned to remote access technologies such as VPN and RDP, among numerous other efforts.
As a result, authentication activity to these technologies swelled 60%. A complementary survey recently found that 96% of organizations made cybersecurity policy changes during the COVID-19 pandemic, with more than half implementing MFA.
Cloud adoption also accelerated
Daily authentications to cloud applications surged 40% during the first few months of the pandemic, the bulk of which came from enterprise and mid-sized organizations looking to ensure secure access to various cloud services.
As organizations scrambled to acquire the requisite equipment to support remote work, employees relied on personal or unmanaged devices in the interim. Consequently, blocked access attempts due to out-of-date devices skyrocketed 90% in March. That figure fell precipitously in April, indicating healthier devices and decreased risk of breach due to malware.
“As the pandemic began, the priority for many organizations was keeping the lights on and accepting risk in order to accomplish this end,” said Dave Lewis, Global Advisory CISO, Duo Security at Cisco. “Attention has now turned towards lessening risk by implementing a more mature and modern security approach that accounts for a traditional corporate perimeter that has been completely upended.”
Additional report findings
So long, SMS – The prevalence of SIM-swapping attacks has driven organizations to strengthen their authentication schemes. Year-over-year, the percentage of organizations that enforce a policy to disallow SMS authentication nearly doubled from 8.7% to 16.1%.
Biometrics booming – Biometrics are nearly ubiquitous across enterprise users, paving the way for a passwordless future. Eighty percent of mobile devices used for work have biometrics configured, up 12% over the past five years.
Cloud apps on pace to pass on-premises apps – Use of cloud apps is on pace to surpass use of on-premises apps by next year, accelerated by the shift to remote work. Cloud applications make up 13.2% of total authentications, a 5.4% increase year-over-year, while on-premises applications encompass 18.5% of total authentications, down 1.5% since last year.
Apple devices 3.5 times more likely to update quickly vs. Android – Ecosystem differences have security consequences. On June 1, Apple iOS and Android both issued software updates to patch critical vulnerabilities in their respective operating systems.
iOS devices were 3.5 times more likely to be updated within 30 days of a security update or patch, compared to Android.
Windows 7 lingers in healthcare despite security risks – More than 30% of Windows devices in healthcare organizations still run Windows 7, despite end-of-life status, compared with 10% of organizations across Duo’s customer base.
Healthcare providers are often unable to update deprecated operating systems due to compliance requirements and restrictive terms and conditions of third-party software vendors.
Windows devices, Chrome browser dominate business IT – Windows continues its dominance in the enterprise, accounting for 59% of devices used to access protected applications, followed by macOS at 23%. Overall, mobile devices account for 15% of corporate access (iOS: 11.4%, Android: 3.7%).
On the browser side, Chrome is king with 44% of total browser authentications, resulting in stronger security hygiene overall for organizations.
UK and EU trail US in securing cloud – United Kingdom and European Union-based organizations trail US-based enterprises in user authentications to cloud applications, signaling less cloud use overall or a larger share of applications not protected by MFA.