Misconfigured or unsecured databases exposed on the open web are a fact of life. We hear about some of them because security researchers tell us how they discovered them, pinpointed their owners and alerted them, but many others are found by attackers first.
It used to take months to scan the Internet looking for open systems, but attackers now have access to free and easy-to-use scanning tools that can find them in less than an hour.
“There’s no way to leave unsecured data online without opening the data up to attack. This is why it’s crucial to always enable security and authentication features when setting up databases, so that your organization avoids this risk altogether,” says Josh Bressers of Elastic.
What do attackers do with exposed databases?
Bressers has been involved in the security of products and projects – especially open-source – for a very long time. In the past two decades, he created the product security division at Progeny Linux Systems and worked as a manager of the Red Hat product security team and headed the security strategy in Red Hat’s Platform Business Unit.
He now manages bug bounties, penetration testing and security vulnerability programs for Elastic’s products, as well as the company’s efforts to improve application security, add new and improve existing security features as needed or requested by customers.
The problem with exposed Elasticsearch (or MariaDB, MongoDB, etc.) databases, he says, is that developers often leave them unsecured by mistake and companies don’t discover the exposure quickly.
“The scanning tools do most of the work, so it’s up to the attacker to decide if the database has any data worth stealing,” he noted, and pointed out that this isn’t hacking, exactly – it’s mining of open services.
Attackers can quickly exfiltrate the accessible data, hold it for ransom, sell it to the highest bidder, modify it or simply delete it all.
“Sometimes there’s no clear advantage or motive. For example, this summer saw a string of cyberattacks called the Meow Bot attacks that have affected at least 25,000 databases so far. The attacker replaced the contents of every afflicted database with the word ‘meow’ but has not been identified, nor has the purpose of the attacks been revealed,” he explained.
Advice for organizations that use clustered databases
Open-source database platforms such as Elasticsearch have built-in security to prevent attacks of this nature, but developers often disable those features in haste or due to a lack of understanding that their actions can put customer data at risk, Bressers says.
“The most important thing to keep in mind when trying to secure data is having a clear understanding of what you are securing and what it means to your organization. How sensitive is the data? What level of security needs to be applied? Who should have access?” he explained.
“Sometimes working with a partner who is an expert at running a modern database is a more secure alternative than doing it yourself. Sometimes it’s not. Modern data management is a new problem for many organizations; make sure your people understand the opportunities and challenges. And most importantly, make sure they have the tools and training.”
Secondly, he says, companies should set up external scanning systems that continuously check for exposed databases.
“These may be the same tools used by attackers, but they immediately notify security teams when a developer has mistakenly left sensitive data unlocked. For example, a free scanner is available from Shadowserver.”
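Internal teams can run the same kind of check themselves. Below is a minimal sketch, assuming an Elasticsearch-style HTTP endpoint on its default port and using only the Python standard library; the hostnames are placeholders, and such probes should only be run against systems you are authorized to test:

```python
import json
import urllib.request
from urllib.error import URLError

def check_elasticsearch_exposure(host, port=9200, timeout=5):
    """Return True if the host answers an unauthenticated
    Elasticsearch root request (i.e., security is likely disabled)."""
    url = f"http://{host}:{port}/"
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            body = json.loads(resp.read().decode("utf-8", "replace"))
            # An open cluster returns its name and version with no auth.
            return "cluster_name" in body
    except URLError:
        # Connection refused, timed out, or an HTTP 401: not openly exposed.
        return False
    except (json.JSONDecodeError, OSError):
        return False

# Placeholder hostnames — probe only hosts you own or may test.
for h in ["db1.internal.example.com", "db2.internal.example.com"]:
    if check_elasticsearch_exposure(h):
        print(f"ALERT: {h} serves Elasticsearch without authentication")
```

Run on a schedule against an inventory of your own address space, a check like this turns an accidental exposure into an alert instead of a breach.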
Elastic offers information and documentation on how to enable the security features of Elasticsearch databases and prevent exposure, he adds, pointing out that security is enabled by default in the company’s Elasticsearch Service on Elastic Cloud and cannot be disabled.
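For self-managed clusters, those features are controlled from `elasticsearch.yml`. As a rough illustration (the setting names are real Elasticsearch options, but the values below are placeholders; consult Elastic’s documentation for your version before applying anything):

```yaml
# elasticsearch.yml — illustrative fragment for a self-managed cluster.

# Require authentication and TLS instead of accepting anonymous requests.
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.http.ssl.enabled: true

# Bind to an internal interface rather than all interfaces (0.0.0.0).
network.host: 10.0.0.5
```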
Defense in depth
No organization will ever be 100% safe, but steps can be taken to decrease a company’s attack surface. “Defense in depth” is the name of the game, Bressers says, and in this case, it should include the following security layers:
- Discovery of data exposure (using the previously mentioned external scanning systems)
- Strong authentication (SSO or usernames/passwords)
- Prioritization of data access (e.g., HR may only need access to employee information and the accounting department may only need access to budget and tax data)
- Deployment of monitoring infrastructures and automated solutions that can quickly identify potential problems before they become emergencies, isolate infected databases, and flag to support and IT teams for next steps
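The data-access layer in that list can start as something as simple as a deny-by-default policy table. A minimal Python sketch (the team names and datasets are invented for illustration):

```python
# Minimal sketch of prioritized data access: each team is granted only
# the datasets it needs. Team names and datasets are illustrative.
ACCESS_POLICY = {
    "hr": {"employee_records"},
    "accounting": {"budgets", "tax_filings"},
    "security": {"audit_logs"},
}

def can_access(team: str, dataset: str) -> bool:
    """Deny by default; allow only datasets explicitly granted to a team."""
    return dataset in ACCESS_POLICY.get(team, set())

assert can_access("hr", "employee_records")
assert not can_access("hr", "tax_filings")      # HR cannot read accounting data
assert not can_access("intern", "audit_logs")   # unknown teams get nothing
```

The deny-by-default shape matters more than the mechanism: any team or dataset not explicitly listed gets no access at all.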
He also advises organizations that don’t have the internal expertise to configure security and manage a clustered database to hire service providers that can handle data management and have a strong security portfolio. Finally, organizations should always have a mitigation plan in place and rehearse it with their IT and security teams, so that when something does happen they can execute a swift and intentional response.
The financial services sector is outperforming other industries, both in its adoption of database DevOps, and its use of monitoring to track database performance and deployments, a newly released edition of Redgate’s 2020 State of Database Monitoring Report has revealed.
Respondents were surveyed in April 2020 while most were in lockdown due to COVID-19. Those responses form the foundation of the report and reveal the significant adoption of third-party database monitoring tools in financial services, which may reflect the ongoing situation where many disparate IT teams are working remotely. This has increased the need to monitor database environments, particularly when zero downtime is now expected – often demanded – in the sector.
The report shows that 61% of those in financial services deploy database changes once a week or more, compared to 43% across other sectors, and 52% deploy multiple times per day or week, compared to 35% in other sectors.
Server estates are also larger for financial services, with 36% having between 50 and 500 instances against 26% in other sectors. Notably, the biggest increase has been in estates with over 1,000 instances, which are up eight percentage points year-on-year.
These results have likely contributed to the 66% of companies in financial services reporting that they use a paid-for monitoring tool, compared to only 39% of respondents across other sectors.
To further complicate the picture, the cloud is changing the nature of those estates. 39% of those in financial services already host some or all of their databases in the cloud, and the report shows that migrating to and integrating with the cloud is the biggest challenge facing the sector in the next 12 months.
Yet, despite the far higher rate of database deployments and bigger, more mixed estates to manage, failed deployments are detected earlier and recovered from faster. 49% of failed deployments are detected within 10 minutes and 32% recover from those failed deployments in 10 minutes or under. In other sectors this falls to 39% and 24%, respectively.
For Grant Fritchey, Microsoft Data Platform MVP and Redgate Advocate, this is where the real value of advanced, third-party monitoring tools lies. “With faster deployments and large, hybrid estates, it’s no longer enough to monitor the usual suspects like CPU, disk space, memory and I/O capacity,” says Fritchey.
“Sectors like financial services – and Healthcare and IT – have recognized they need customizable alerts for the operational and performance issues they face, and every deployment displayed on a timeline alongside key SQL Server metrics. That way, when a bad deployment occurs, they can dive into the details, investigate the cause and remedy it immediately. If you can’t do that, frankly, you’ll have a hard time doing DevOps.”
On average, an exposed Mongo database is breached within 13 hours of being connected to the internet. The fastest breach recorded was carried out 9 minutes after the database was set up, according to Intruder.
MongoDB is a general purpose, document-based, distributed database that consistently ranks in the top 5 most-used databases worldwide. It is used by a wide range of organizations all over the globe to store and secure sensitive application and customer data.
There are around 80,000 MongoDB services exposed on the internet, of which 20,000 are unsecured. Of those unsecured databases, 15,000 are already infected with ransomware.
How MongoDB attacks are carried out
After seeing how consistently database breaches were occurring, Intruder planted honeypots to find out how these attacks happen, where the threats come from, and how quickly they take place. Intruder set up a number of unsecured MongoDB honeypots across the web, each filled with fake data. The network traffic was monitored for malicious activity; if password hashes were exfiltrated and seen crossing the wire, this would indicate that a database had been breached.
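Intruder has not published its detection code, but the idea can be illustrated simply: plant known fake password hashes in the honeypot, then flag any captured outbound payload that contains one of them. A simplified sketch (the hash values below are made up):

```python
import re

# Simplified illustration of honeypot breach detection: the honeypot is
# seeded with known fake hashes, so seeing one of them in captured
# outbound traffic means the database contents were exfiltrated.
PLANTED_HASHES = {
    "5f4dcc3b5aa765d61d8327deb882cf99",   # MD5-style marker (made up)
    "$2b$12$C6UzMDM.H6dfI/f/IKcEeO",      # bcrypt-style marker (made up)
}

# Matches 32-char hex digests and bcrypt-style strings in raw payloads.
HASH_PATTERN = re.compile(r"[0-9a-f]{32}|\$2[aby]\$\d{2}\$[./A-Za-z0-9]{22}")

def payload_indicates_breach(payload: str) -> bool:
    """Return True if a planted hash appears in captured network traffic."""
    return any(token in PLANTED_HASHES
               for token in HASH_PATTERN.findall(payload))

print(payload_indicates_breach("GET /dump 5f4dcc3b5aa765d61d8327deb882cf99"))
```

Because the planted values exist nowhere else, a single match is a high-confidence signal with essentially no false positives.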
The research shows that MongoDB is subject to continual attacks when exposed to the internet. Attacks are carried out automatically and indiscriminately and on average an unsecured database is compromised less than 24 hours after going online.
At least one of the honeypots was held to ransom within a minute of connecting. The attacker erased the database’s tables and replaced them with a ransom note, requesting payment in Bitcoin for recovery of the data.
Where do attacks come from?
Attacks originated from locations all over the globe, though attackers routinely hide their true location, so there’s often no way to tell where attacks are really coming from. The fastest breach came from an attacker from Russian ISP Skynet and over half of the breaches originated from IP addresses owned by a Romanian VPS provider.
“It’s quite possible that some of the activity recorded was from security researchers looking for their next headline or data for their breach database. However, when it comes to a company’s security reputation, it often doesn’t matter whether the data is breached by a malicious attacker or a well-meaning researcher,” said Chris Wallis, CEO, Intruder.
“Even if security teams can detect an unsecured database and recognise its potential severity, responding to and containing such a misconfiguration in less than 13 hours may be a tall order, let alone in under 9 minutes. Prevention is a much stronger defence than cure.”
Reposify unveiled research findings of critical asset exposures and vulnerabilities in attack surfaces of the world’s leading multinational banks.
Researchers measured the prevalence of exposed sensitive assets including exposed databases, remote login services, development tools and additional assets for 25 multinational banks and their 350+ subsidiaries.
Banks deal with exposed database threat
- 23% of banks had at least one misconfigured database exposed to the internet resulting in potential data leakage issues
- 54% of the banks had at least one RDP exposed to the internet
- 31% of banks had at least one vulnerability to Remote Code Execution
- Multiple unsecured FTP servers with anonymous authentication were discovered
Exposures such as RDP, unsecured FTP and misconfigured development tools can be leveraged by attackers to gain unauthorized access to banks’ internal networks and lead to data breaches. The exposed databases that were discovered place customer and other sensitive data at direct and imminent risk.
Banking industry DX challenges
In recent years, the banking industry has gone through a massive digital transformation. Alongside the many benefits, the increase in digitization and connectivity have created great security challenges and made the banking industry even more susceptible to cyber-attacks.
“The interconnectedness of IT systems and growth in third-party partners have expanded the external attack surface and potential weak points.” said Yaron Tal, CEO, Reposify.
“Banks’ IT ecosystems are in a constant state of flux and network perimeters are extending well beyond firewalls and control systems. Banks’ actual attack surfaces are simply much bigger than most realize.”
Visibility of internet facing assets inventory
Banks typically have well-established security programs, which are heavily regulated by various institutions, yet 84% of the exposed assets are likely to fly under IT and security teams’ radar and fall outside the scope of traditional asset management and security tools.
Gaining visibility of the complete internet facing assets inventory is critical. External and continuous view allows teams to know at any given moment which of their known or unknown devices and services are exposed to the internet and to take steps to proactively manage and mitigate the risks.
A misconfigured database containing 7 terabytes of sensitive user and company information related to adult live streaming site CAM4 has been found leaking data.
The database apparently contains 10.88 billion records, which contain different combinations of sensitive information such as: names, email addresses, usernames, gender preference and sexual orientation, payment information, IP addresses, as well as user and inter-user conversations, chat transcripts between users and CAM4, fraud and spam detection logs, and hashed passwords.
CAM4 leaking data
Luckily for the users and Irish company Granity Entertainment, which owns CAM4.com, the discovery was made by security researchers with Safety Detectives, not malicious actors.
Once the researchers tied the leaking database to the source, they notified Granity Entertainment and the database was pulled offline.
The researchers’ analysis of the leaked data revealed around 11 million records containing emails, 26+ million entries with password hashes, and a few hundred entries containing full names, credit card types and payment amounts.
As the researchers noted, the various data could be used to identify some users.
“User emails could be targeted with leaked data then used maliciously to trigger clicks with phishing and malware scams deployed against unsuspecting targets,” they pointed out.
“The fact that a large amount of email content came from popular domains such as Gmail, Hotmail and iCloud — domains that offer supplementary services such as cloud-storage and business tools — means that compromised CAM4 users could potentially see huge volumes of personal data including photographs, videos and related business information leaked to hackers — assuming their accounts were eventually hacked via phishing as one example. This information could then be weaponized to compromise other individuals and groups such as family members, colleagues, employees and clients of other businesses.”
In addition to this, some of the data could be used to extort money from CAM4 users. While there is nothing to prevent cyber extortionists from targeting random users/email addresses with threatening emails, the probability of success is higher if they can demonstrate that they know something about the victim.
Compromised fraud detection logs, on the other hand, can enable hackers to understand how cybersecurity systems have been set up, and website backend data could be harnessed to exploit the website and create threats including ransomware attacks, the researchers pointed out.
There is no indication at the moment that the database has been accessed by anyone other than authorized users and the researchers. Still, if it was exposed long enough, chances are good that someone else also had a peek.
The company will hopefully investigate the matter further and notify affected users if necessary.
An IT startup has developed a novel blockchain-based approach for secure linking of databases, called ChainifyDB.
“Our software resembles keyhole surgery. With a barely noticeable procedure we enhance existing database infrastructures with blockchain-based security features. Our software is seamlessly compatible with the most common database management systems, which drastically reduces the barrier to entry for secure digital transactions,” explains Jens Dittrich, Professor of Computer Science at Saarland University at Saarbrücken, Germany.
How does ChainifyDB work?
The system offers various mechanisms for a trustworthy data exchange between several parties. The following example shows one of its use cases.
Assume some doctors are treating the same patient and want to maintain his or her patient file together. To do this, the doctors would have to install the Saarbrücken researchers’ software on their existing database management systems. Then, they could jointly create a data network.
In this network, the doctors set up a shared table in which they enter the patient file for the shared patient. “If a doctor changes something in his table, it affects all other tables in the network. Subsequent changes to older table states are only possible if all doctors in the network agree,” explains Jens Dittrich.
Another special feature: If something about the table is changed, the focus is not on the change itself, but on its result. If the result is identical in all tables in the network, the changes can be accepted. If not, the consensus process starts again.
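The result-based check described above can be sketched as follows: each participant applies the proposed change to its own copy and commits only if every replica hashes to the same resulting state. This is an assumption-laden Python sketch of the idea, not ChainifyDB’s actual implementation:

```python
import hashlib
import json

def state_digest(table: dict) -> str:
    """Canonical hash of a table's contents (order-independent)."""
    return hashlib.sha256(
        json.dumps(table, sort_keys=True).encode()
    ).hexdigest()

def apply_change(table: dict, key: str, value) -> dict:
    """Apply a proposed change to a copy of the table."""
    updated = dict(table)
    updated[key] = value
    return updated

# Three doctors hold replicas of the same (fictional) patient file.
replicas = [{"patient": "p-17", "allergy": "none"} for _ in range(3)]

# Each participant applies the change locally and hashes the *result*.
proposed = [apply_change(t, "allergy", "penicillin") for t in replicas]
digests = {state_digest(t) for t in proposed}

# Commit only if every replica computed an identical resulting state;
# otherwise the consensus round would start again.
if len(digests) == 1:
    replicas = proposed
```

Comparing result hashes rather than the changes themselves means a tampered replica stands out immediately: its digest simply won’t match the others.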
“This makes the system tamper-proof and guarantees that all network participants’ tables always have the same status. Furthermore, only the shared data in the connected tables is visible to other network participants; all other contents of the home database remain private”, emphasizes Dr. Felix Martin Schuhknecht, Principal Investigator of the project.
Advantages for security-critical situations
The new software offers advantages especially for security-critical situations, such as hacker attacks or when business partners cannot completely trust each other. Malicious participants can be excluded from a network without impairing its functionality.
If a former participant is to be reinstated, the remaining network participants only have to agree on a “correct” table state. The previously suspended partner can then be set to this state. “As far as we know, this function is not yet offered by any comparable software,” adds Dittrich.
In order to bring ChainifyDB to market, the German Federal Ministry of Education and Research is supporting the Saarbrücken researchers’ start-up, which is currently being founded, with 840,000 euros.
UK ISP and telecom provider Virgin Media confirmed on Thursday that one of its unsecured marketing databases had been accessed on at least one occasion without permission (though the extent of the access is still unknown).
The database, containing contact and service details of approximately 900,000 customers, was not technically breached.
“The incident did not occur due to a hack but as a result of the database being incorrectly configured,” Virgin Media said. Access to it was not secured, and the database was accessible online for 10 months.
There were no financial details or passwords in it, but the potentially compromised information is enough for skilled phishers to mount attacks via email or phone, trying to get the affected customers to give out additional sensitive information that could be used to steal their identity.
Also on Wednesday, Comparitech revealed that, in January, its security research team discovered a similarly unsecured and exposed database with 200 million records containing a wide range of property-related data on US residents.
“The largest portion of the data is a mix of personal, demographic, and property information,” shared Comparitech’s Paul Bischoff.
The records are pretty thorough – they contain individuals’ name, address, email address, age, gender, ethnicity, employment info, credit rating, investment preferences, income, net worth, as well as information on their habits (e.g., whether they travel, donate to charity, have pets, etc.) and about their property (market value, mortgage amount, tax assessment info, etc.).
“The detailed personal, demographic, and property information contained in this dataset is a gold mine for spammers, scammers, and cybercriminals who run phishing campaigns. The data allows criminals not only to target specific people, but craft a more convincing message,” Bischoff pointed out.
Interestingly enough, they were unable to discover who owns the database. As it was hosted on an exposed Google Cloud server, they alerted Google, and the database was taken offline on March 4.
The problem with data in the cloud
Eldad Chai, CEO and co-founder of data protection and governance firm Satori Cyber, says this happens because today’s model for data security is completely inadequate for the cloud.
“For years, data has been couched in layers of security, from network security to application security, end-point security to anomaly detection. This approach ensured that gaps were more or less covered and significantly limited the real threat of a data leak. Unfortunately, this layered security approach has failed to be implemented as companies migrate to the cloud—and nothing else has taken its place,” he noted.
“Relying on cloud configuration management alone cannot keep companies safe from data leaks and is many steps short of keeping big data stores safe. It is enough for one employee to replicate a VM housing sensitive data to an environment that is not configured to hold it to bring the whole [thing] down.”
While necessary, cloud configuration management shouldn’t be the last line of data security defense, he says, because it falls short of what that last line requires: it is not isolated from environment changes, not simple to configure and enforce, and not transparent and universal (able to run in any environment).
The healthcare industry has significantly more exposed attack surfaces than any other industry surveyed, according to Censys’s research findings of cloud risks and cloud maturity by industry, revealed at RSA Conference 2020.
Leveraging the Censys SaaS Platform, company researchers measured the occurrence of exposed databases and exposed remote login services – two key indicators of modern security risks – for the ten largest companies by revenue in seven major industries (Automotive, Energy, Hotels, Insurance, Manufacturing, Healthcare and Financials).
The healthcare industry showed significantly more exposed databases and more exposed remote login services.
Exposed databases by industry
Composed of pharmacies, healthcare providers, insurance providers and pharmaceutical manufacturers, the healthcare industry had an average of 13 exposed databases per company. The energy industry proved the least at-risk with only one exposed database per company.
Exposed Remote Desktop Protocol (RDP)
Healthcare also had the most exposed RDP servers per company, with an average of eight. However, this average is skewed by one outlier with ten times as many exposed RDP servers as the next highest company.
While cloud databases and remote working solutions provide a great deal of convenience and enable modern web applications, both give attackers a common entry point and drive data breaches. Internet-exposed databases put customer data at risk, and exposed RDP services invite credential stuffing, reuse of stolen credentials, and specific software exploits.
“Along with enormous agility for the modern enterprise, the rise of cloud infrastructure in high-tech industries has created an incredible security challenge that only continues to grow,” said Jose Nazario, Ph.D., Principal R&D Engineer at Censys. “While all industries have guilty parties, healthcare’s attack surface is simply much bigger than they realize.”
In order to protect against breaches, companies must first gain visibility using a continuous attack surface monitoring platform. This enables businesses to be alerted to risks when they occur. Companies can then remediate the issue by reconfiguring an application to listen on a private network, employing VPN software, or simply ensuring a firewall ruleset is properly configured.
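Listening on a private network, for example, often comes down to nothing more than the bind address. A small Python sketch of the difference (the service is illustrative; real databases set this in their config files):

```python
import socket

def make_listener(bind_addr: str, port: int = 0) -> socket.socket:
    """Open a TCP listener on the given address.

    Port 0 lets the OS pick a free port, so the sketch runs anywhere.
    """
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind((bind_addr, port))
    s.listen()
    return s

# Bound to all interfaces: reachable from the internet unless a
# firewall blocks it — the classic "exposed database" mistake.
exposed = make_listener("0.0.0.0")

# Bound to loopback (or a private address): unreachable from outside.
private = make_listener("127.0.0.1")

print(exposed.getsockname(), private.getsockname())
exposed.close()
private.close()
```

The same distinction applies to database settings such as MongoDB’s `bindIp` or Elasticsearch’s `network.host`: a service that never listens on a public interface cannot be found by internet-wide scans in the first place.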
Palo Alto Networks released research showing how vulnerabilities in the development of cloud infrastructure are creating significant security risks.
Alerts and events for organizations operating in the cloud
The Unit 42 Cloud Threat Report: Spring 2020 investigates why cloud misconfigurations happen so frequently. It finds that as organizations move to automate more of their cloud infrastructure build processes, they are adopting and creating new infrastructure as code (IaC) templates. Without the help of the right security tools and processes, these infrastructure building blocks are being crafted with rampant vulnerabilities.
- 199,000+ insecure templates in use: Unit 42 researchers identified high- and medium-severity vulnerabilities throughout their investigation. Previous research by Unit 42 shows 65% of cloud incidents were due to simple misconfigurations. These new report findings shed light on why cloud misconfigurations are so common.
- 43% of cloud databases are not encrypted: Keeping data encrypted not only prevents attackers from reading stored information, but is also a requirement of compliance standards such as HIPAA.
- 60% of cloud storage services have logging disabled: Storage logging is critical when attempting to determine the scale of the damage in cloud incidents, such as the U.S. voter records leak in 2017 or the National Credit Federation data leak that same year.
- Cybercrime groups are using the cloud for cryptojacking: Adversary groups likely associated with China, including Rocke, 8220 Mining Group and Pacha, are stealing cloud resources. They are mining for Monero, likely through public mining pools or their own pools.
While IaC offers organizations the benefit of enforcing security standards in a systematic way, this research shows that this capability is not yet being harnessed.
“It only takes one misconfiguration to compromise an entire cloud environment. We found 199,000 of them. The good news is infrastructure as code can offer security teams many benefits, such as enabling security to be injected early into the software development process and embedding it into the very building blocks of an organization’s cloud infrastructure,” said Matthew Chiodi, CSO of public cloud for Palo Alto Networks.