Digital privacy is paramount to the global community, but it must be balanced against the proliferation of digital-first crimes, including child sexual abuse, human trafficking, hate crimes, government suppression, and identity theft. The more the world connects with each other, the greater the tension between maintaining privacy and protecting those who could be victimized.
Global digital privacy
Online communication can connect and enrich people’s lives, but it is also being leveraged for malicious purposes. Bad actors can now reach a broader audience of potential victims, coordinate with others, share the most effective practices, and expand their illegal activities while being protected by a shield of online anonymity. The ability to scale harmful activities is as efficient as scaling community-building practices. The Internet has provided an environment for predators to thrive.
The challenge is to respect the rights of individuals while still allowing systematic controls to protect, dissuade and, when necessary, investigate for prosecution those who are purposefully undermining the safety of global citizens. Just as in the physical world, law enforcement is tasked with protecting people from criminals.
They require the ability to investigate crimes in a timely manner and identify suspects for prosecution. The right to privacy and the risk of being victimized are in conflict. Users, companies, and governments are intertwined and struggling to effectively understand and deal with legacy and evolving threats.
As this landscape evolves, we want to start a conversation about what the right balance of privacy and safety online should be.
A zero-sum game of privacy and safety
Currently, there is a perception of a zero-sum game between privacy and safety in the digital world. Expectations, regulations, and enforcement are fragmented, confusing, and inadequate. In 2009, the Child Online Protection Act (COPA) was definitively struck down after federal courts found that it violated First Amendment rights and the Supreme Court declined to hear the government’s appeal.
The practical implication of this change, coupled with Section 230 of the Communications Decency Act (CDA) of 1996, which holds that platforms are not responsible for what third-party publishers post on them, is that websites no longer shield children from adult content – that responsibility was transferred to their parents.
The Children’s Online Privacy Protection Act of 1998 (COPPA) is the current law protecting children’s data privacy online. It mandates that any company with users under the age of 13 on its platform must prove that the parents gave their permission (often accomplished by entering credit card information to prove identity) and may not retain data from children under 13.
Many platforms sidestep these restrictions by stating that no one under the age of 13 is allowed on their platforms, but they have no identity-verification practices in place to enforce the rule in a meaningful way. They usually rely on a “check the box if you are over 13” honor system, so many children online end up without the privacy or safety protections that COPPA was meant to provide them.
Parents who are raising this generation of digital natives are digital immigrants themselves. They were young enough to adjust to the trends of social, mobile, and cloud, but they were mostly in their 20s when they gained access to them. This has left a significant knowledge gap about the cyberbullying, grooming, and sextortion that tweens and teens experience.
This generation of teens is exhibiting the highest rates of mental health issues and suicides we have seen to date. This teen suicide trend is even more alarming when you factor in that, according to the Centers for Disease Control and Prevention, deaths from youth suicide are only part of the problem, because more young people survive suicide attempts than actually die.
Contributing author: Matthew Rosenquist, CISO, Eclipz.io.
A global research report by Lenovo highlights the triumphs, challenges and the consequences of the sudden shift to work-from-home (WFH) during the COVID-19 pandemic and how companies and their IT departments can power the new era of working remotely that will follow.
The study looks at how employees worldwide are responding to the “new normal” after 72 percent of those surveyed confirmed a shift in their daily work dynamic in the last three months. Employees feel more connected and more productive than ever before as they WFH, but the data shows financial, physical and emotional downsides for the global workforce.
“This data gave us valuable insights on the complex relationship employees have with technology as work and personal are becoming more intertwined with the increase in working from home,” commented Dilip Bhatia, Vice President of Global User and Customer Experience at Lenovo.
“Respondents globally feel more reliant on their work computers and more productive but have concerns about data security and want their companies to invest in more tech training. We’re using these takeaways to improve the development of our smart technology and better empower remote workers of tomorrow.”
Productivity, connectivity, and IT independence increase
Survey respondents around the world are embracing working away from the office – yet feel more connected to their devices than ever as the ‘office’ becomes wherever their technology is.
- 85 percent of those surveyed feel more reliant on their work PCs (laptops and/or desktop computers) than they did working from the office.
- 63 percent of the global workforce surveyed feel they are more productive working from home than when they were in the office.
- 52 percent of respondents believe they will continue to WFH more than they did pre-COVID-19 – even after social distancing measures lift.
This new confidence in working remotely has increased organizations’ need for customizable, modern IT solutions to be deployed at scale. Seventy-nine percent of participants agree that they have had to be their own IT person while working from home, and a majority of those surveyed believe employers should invest in more tech training to power WFH in the future.
WFH during the pandemic: Productivity can come with downsides
In the quick, dramatic shift to WFH that the pandemic brought on, workers say they have had to make personal investments in tech when their employers have not.
- Seven in ten employees surveyed globally said they purchased new technology to navigate working remotely
- Nearly 40 percent of those surveyed have had to partially or fully fund their own tech upgrades
- US respondents say they have personally spent an average of $348 to upgrade or improve technology while working at home due to COVID-19 – roughly $70 higher than the global average ($273), and the second-highest among 10 markets surveyed
New ways of working have also brought on a set of literal aches and pains. Seventy-one percent of workers surveyed complain of new or worsening conditions, including headaches, back and neck pains, difficulty sleeping and more.
Having a proper WFH setup is important to minimizing discomfort, including proper furniture and a larger external monitor that can be ergonomically adjusted to natural eye level.
Making time for breaks is also important since many built-in workday breaks for office workers (stretching, getting up to get coffee, going out for lunch, etc.) occur in different rhythms while working remotely.
Along with physical ailments, workers around the world identified other top challenges to the WFH experience: reduced personal connections with coworkers, an inability to separate work life from home life, and finding it hard to concentrate during work hours due to distractions at home.
Training and implementation of high-quality video conferencing capabilities, such as noise-cancelling headphones and webcams on the work PC, tablet or phone, can help employees feel more connected with colleagues and less distracted at home.
Naturally, as technology has powered WFH around the world, surveyed workers also expressed overall concerns around security and their heavy reliance on tech connectivity to get the job done.
Employees of all ages agree their top tech-specific concern is how it makes their companies more vulnerable to data breaches. As a result, enhanced security will need to be built into employees’ hardware, software and services (including deployment, set-up and maintenance) from the get-go and is especially critical within today’s remote work environment.
The study also offers important guidance to employers around the world to embrace the new technology normal beyond the pandemic and into the future.
Flexibility isn’t just expected, it’s required
Overall, surveyed employees globally expressed mixed feelings about work in a post-COVID world – while some employees expressed being happy (27 percent) and excited (21 percent) about working from home forever, others feel neutral (22 percent) and conflicted (17 percent).
In light of this, it is more important than ever to give employees flexibility and the required tech to WFH so they don’t have to spend their own money on tech upgrades for work.
Tech should facilitate balance, collaboration, multi-tasking
Although most respondents say tech makes them efficient and more productive, employees identified other ways that tech could improve to help them gain an advantage at work:
- Help them better maintain work life balance
- Make it easier for employees to collaborate with others at outside companies and organizations
- Assist with multi-tasking and switching gears between projects more frequently
- Automate some of their daily tasks
More 5G, please!
Although emerging technologies may have been unfamiliar territory in the past, employees are now expressing excitement about the role they play in improving the WFH experience.
When asked which emerging technologies would have the most positive impact on their job within the next few years, employees ranked 5G wireless network technology and AI/ML as their top choices.
When implementing these technologies, companies should seek employee input on where they can make the most impact within their jobs. 5G provides a stronger and more secure connection while giving employees the ability to move around, while AI can help automate routine responsibilities.
A majority of employees have also expressed they are hopeful that emerging technologies can help improve work/life balance.
The global pandemic has seen the web take center stage. Banking, retail and other industries have seen large spikes in web traffic, and this trend is expected to become permanent.
Global brands fail to implement security controls
As attackers ramp up efforts to exploit this crisis, a slew of high-profile attacks on global brands and record-breaking fines for GDPR breaches have had little impact on client-side security and data protection deployments.
In many cases, this data leakage is taking place via whitelisted, legitimate applications, without the website owner’s knowledge. What this report indicates is that data risk is everywhere and effective controls are rarely applied.
Key findings highlight the scale of vulnerability and that the majority of global brands fail to deploy adequate security controls to guard against client-side attacks.
This website supply chain leverages client-side connections that operate outside the span of effective control in 98% of sampled websites. The client-side is a primary attack vector for website attacks today.
Websites expose data to an average of 17 domains
Despite increasing numbers of high-profile breaches, forms, found on 92% of websites, expose data to an average of 17 domains. This data includes PII, credentials, card transactions, and medical records.
While most users would reasonably expect this data to be accessible to the website owner’s servers and perhaps a payment clearing house, the analysis shows that this data is exposed to nearly 10X more domains than intended.
Nearly one-third of websites studied expose data to more than 20 domains. This provides some insight into how and why attacks like Magecart, formjacking and card skimming continue largely unabated.
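A rough first-pass audit of this kind of exposure is to enumerate the external domains a page references from scripts, frames, and form actions. The sketch below (stdlib only, static HTML parsing; real skimmers are often injected dynamically at runtime, so this understates exposure) illustrates the idea – the domain names are invented for the example:

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class ThirdPartyAuditor(HTMLParser):
    """Collects external domains referenced by scripts, iframes, images,
    links, and form actions in a page's static HTML."""
    def __init__(self, first_party: str):
        super().__init__()
        self.first_party = first_party
        self.domains = set()

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        url = None
        if tag in ("script", "iframe", "img"):
            url = attrs.get("src")
        elif tag == "form":
            url = attrs.get("action")
        elif tag == "link":
            url = attrs.get("href")
        if url:
            host = urlparse(url).netloc
            # Relative URLs have no netloc and stay first-party.
            if host and host != self.first_party:
                self.domains.add(host)

def third_party_domains(html: str, first_party: str) -> set:
    auditor = ThirdPartyAuditor(first_party)
    auditor.feed(html)
    return auditor.domains
```

Running this against a checkout page and finding a dozen-plus unfamiliar hosts is exactly the pattern the report describes.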
No attack is more widespread than XSS
Standards-based security controls exist that can prevent these attacks. They are infrequently applied.
Unfortunately, despite high-profile risks and the availability of controls, there has been no significant increase in the adoption of security capable of preventing client-side attacks:
- Over 99% of websites are at risk from trusted, whitelisted domains like Google Analytics. These can be leveraged to exfiltrate data, underscoring the need for continuous PII leakage monitoring and prevention. This has significant implications for data privacy, and by extension, GDPR and CCPA.
- 30% of the websites analyzed had implemented security policies – an encouraging 10% increase over 2019. However…
- Only 1.1% of websites were found to have effective security in place – an 11% decline from 2019. It indicates that while deployment volume went up, effectiveness declined more steeply. The attackers have the upper hand largely because we are not playing effective defense.
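The report does not name the controls, but Content-Security-Policy is the primary standards-based mechanism for restricting where a page may load scripts from and where forms and fetch requests may send data. A minimal helper for assembling such a header (directive names are real CSP; the idea of building it from a dict is just an illustrative sketch):

```python
def build_csp(directives: dict) -> str:
    """Assemble a Content-Security-Policy header value from a mapping of
    directive name -> list of additionally allowed hosts. Every directive
    permits the site's own origin ('self') plus the listed hosts."""
    parts = []
    for name, hosts in directives.items():
        parts.append(" ".join([name, "'self'", *hosts]))
    return "; ".join(parts)

csp = build_csp({
    "default-src": [],
    "script-src": ["https://www.google-analytics.com"],  # whitelisted third party
    "form-action": [],  # forms may only submit to our own origin
})
```

Note the caveat implied by the first bullet above: once a domain like Google Analytics is whitelisted in `script-src`, CSP alone cannot stop data from being exfiltrated *to* that domain, which is why the report pairs policy deployment with continuous PII leakage monitoring.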
Online voting is likely to shape future election cycles, according to a study from OneLogin. 59% of respondents expect online voting will become a reality within five years.
Online voting demographics
Though various demographics differ in their opinions about online voting, respondents shared concerns about the possibility of fraud and compromised data security.
49% of millennial and 55% of Gen Z voters believe that online options would make them more likely to vote while only 35% of those ages 74+ felt the same. Digital voting might also assist during the pandemic as 26% of respondents indicated COVID-19 could impact their likelihood of voting in the general election this fall.
An online voting option could also boost voter turnout among minority groups: 55% of black and 54% of Hispanic voters said an online voting option would make them more likely to vote this fall, compared to 42% of whites.
By party lines, 37% of Republicans do not want online voting compared to 12% of Democrats. Additionally, 43% of self-identified Trump supporters do not want online voting, compared to 12% among non-supporters.
Online voting and cybersecurity
Regardless of these divisions, respondents came together around two issues: convenience and security. Among those in favor of moving to online voting, 68% liked the potential convenience and 61% believed it would increase voter turnout. For those against it, the opportunity for fraud (77%) and lack of security (75%) were major concerns.
“We were curious to understand the opinions around online voting and cybersecurity. The results speak to the demand and call for safe and secure identity management, today, in the 2020 election, and beyond.”
Most security experts agree that the process to cast a secure online vote would require multiple steps of authentication. Although 61% of respondents were willing to take up to three steps, 13% weren’t willing to take any security steps at all if voting online.
Similarly, 48% of voters would spend no more than five minutes logging in to vote, with only 5% willing to take more than 30 minutes, even though there are often long waits for in-person voting.
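As an illustration of what a single authentication "step" could look like, here is the standard one-time-password scheme (HOTP per RFC 4226, with the TOTP time-based variant per RFC 6238) that underlies most authenticator apps. This is a generic sketch of the published algorithm, not a description of any actual voting system:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HMAC-based one-time password: HMAC-SHA1 of the counter,
    dynamically truncated to a short decimal code."""
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # low nibble of last byte picks the window
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 time-based variant: the counter is the current 30-second window,
    so client and server derive the same short-lived code independently."""
    return hotp(secret, int(time.time()) // step, digits)
```

A login flow combining a password, a TOTP code, and perhaps one identity-proofing step would land at the "up to three steps" most respondents said they would tolerate.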
Who is the most trustworthy?
Trust will be another hurdle, as voters are uncertain which group is the most trustworthy to manage and administer online voting. Only 25% felt the government was best equipped, while 21% believed a private company could do it best and 20% would rely on a big tech company. Over 35% stated they wouldn’t trust any of the choices listed.
Other findings from the study include:
- Pandemic politics: 31% of those who disapprove of President Trump say the pandemic is influencing them towards not voting compared to only 17% among Trump supporters.
- Online turnout: 45% say that if they could vote online, they would be more likely to vote in the general election this fall while only 6% say they would be less likely to vote. 49% were the same either way.
- Disenfranchisement: Out of those who are not in favor of moving to online voting, 44% believe it would disenfranchise people who are computer illiterate. 61% of those ages 74+ have this concern.
- Voting by mail: 1 in 3 rural voters have security concerns with voting by mail, compared to 1 in 4 from urban/suburban areas. 46% of Trump supporters are worried about security and fraud with voting by mail, compared to just 16% among those who don’t support Trump.
Since GDPR came into force in May 2018, European data protection authorities have issued 340 fines. Every one of the 28 EU nations, plus the United Kingdom, has issued at least one GDPR fine, Privacy Affairs finds.
Whilst GDPR sets out the regulatory framework that all EU countries must follow, each member state legislates independently and is permitted to interpret the regulations differently and impose its own penalties on organizations that break the law.
Nations with the highest fines
- France: €51,100,000
- Italy: €39,452,000
- Germany: €26,492,925
- Austria: €18,070,100
- Sweden: €7,085,430
- Netherlands: €3,490,000
- Spain: €3,306,771
- Bulgaria: €3,238,850
- Poland: €1,162,648
- Norway: €985,400
Nations with the most fines
- Spain: 99
- Hungary: 32
- Romania: 29
- Germany: 28
- Bulgaria: 21
- Czech Republic: 13
- Belgium: 12
- Italy: 11
- Norway: 9
- Cyprus: 8
The second-highest number of fines comes from Hungary. The National Authority for Data Protection and Freedom of Information has issued 32 fines to date, the largest being €288,000, issued to an ISP for improper and non-secure storage of customers’ personal data.
UK organizations have been issued just seven fines, totalling over €640,000, by the Information Commissioner. The average penalty within the UK is €160,000. This does not include the potentially massive fines for Marriott International and British Airways that are still under review.
British Airways could face a fine of €204,600,000 for a 2018 data breach that resulted in the loss of personal data of 500,000 customers.
Similarly, Marriott International suffered a breach that exposed 339 million people’s data. The hotel group faces a fine of €110,390,200.
The largest and highest GDPR fines
The largest GDPR fine to date was issued by French authorities to Google in January 2019. The €50 million fine was issued on the basis of “lack of transparency, inadequate information and lack of valid consent regarding ads personalization.”
Highest fines issued to private individuals:
- €20,000 issued to an individual in Spain for unlawful video surveillance of employees.
- €11,000 issued to a soccer coach in Austria who was found to be secretly filming female players while they were taking showers.
- €9,000 issued to another individual in Spain for unlawful video surveillance of employees.
- €2,500 issued to a person in Germany who sent emails to several recipients, where each could see the other recipients’ email addresses. Over 130 email addresses were visible.
- €2,200 issued to a person in Austria for having unlawfully filmed public areas using a private CCTV system. The system filmed parking lots, sidewalks, a garden area of a nearby property, and it also filmed the neighbors going in and out of their homes.
The media industry suffered 17 billion credential stuffing attacks between January 2018 and December 2019, according to a report from Akamai.
The apparent fourfold increase in attacks is partly attributable to enhanced visibility into the threat landscape.
The report found that 20% of the 88 billion total credential stuffing attacks observed during the reporting period targeted media companies.
Media companies present an attractive target
Media companies present an attractive target for criminals according to the report, which reveals a 63% year-over-year increase in attacks against the video media sector.
The report also shows 630% and 208% year-over-year increases in attacks against broadcast TV and video sites, respectively. At the same time, attacks targeting video services are up 98%, while those against video platforms dropped by 5%.
The marked uptick in attacks aimed at broadcast TV and video sites appears to coincide with an explosion of on-demand media content in 2019. In addition, two major video services launched last year with heavy support from consumer promotions. These types of sites and services are well aligned to the observed goals of the criminals who target them.
Much of the value in media industry accounts lies in the potential access both to compromised assets, like premium content, and to personal data, according to Steve Ragan, author of the report.
“We’ve observed a trend in which criminals are combining credentials from a media account with access to stolen rewards points from local restaurants and marketing the nefarious offering as ‘date night’ packages. Once the criminals get a hold of the geographic location information in the compromised accounts, they can match them up to be sold as dinner and a movie,” Ragan explained in the report.
Attacks targeting published content
Video sites are not the sole focus of credential stuffing attacks within the media industry, however. The report notes a 7,000% increase in attacks targeting published content.
Newspapers, books and magazines sit squarely within the sights of cybercriminals, indicating that media of all types appear to be fair game when it comes to these types of attacks.
The United States was by far the top source of credential stuffing attacks against media companies with 1.1 billion in 2019, an increase of 162% over 2018. France and Russia were a distant second and third with 3.9 million and 2.4 million attacks, respectively.
India was the most targeted country in 2019, enduring 2.4 billion credential stuffing attacks. It was followed by the United States at 1.4 billion and the United Kingdom at 124 million.
“As long as we have usernames and passwords, we’re going to have criminals trying to compromise them and exploit valuable information,” Ragan explained.
“Password sharing and recycling are easily the two largest contributing factors in credential stuffing attacks. While educating consumers on good credential hygiene is critical to combating these attacks, it’s up to businesses to deploy stronger authentication methods and identify the right mix of technology, policies and expertise that can help protect customers without adversely impacting the user experience.”
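One building block of the defenses Ragan describes is throttling: flagging a source once its failed logins exceed a threshold within a sliding time window, then forcing a block or a step-up challenge. A minimal sketch (class name and thresholds are illustrative, not from the report):

```python
import time
from collections import defaultdict, deque

class LoginThrottle:
    """Flags a source IP once its failed logins exceed a threshold inside a
    sliding time window -- a coarse but useful credential stuffing signal."""
    def __init__(self, max_failures: int = 10, window_seconds: float = 60.0):
        self.max_failures = max_failures
        self.window = window_seconds
        self.failures = defaultdict(deque)  # ip -> timestamps of failures

    def record_failure(self, ip: str, now: float = None) -> bool:
        """Record a failed login; return True if this IP should now be
        blocked or sent to a CAPTCHA/MFA challenge."""
        now = time.time() if now is None else now
        q = self.failures[ip]
        q.append(now)
        # Drop failures that have aged out of the window.
        while q and now - q[0] > self.window:
            q.popleft()
        return len(q) > self.max_failures
```

Real deployments layer this with device fingerprinting, IP reputation, and MFA, since stuffing botnets spread attempts across many addresses, but per-source velocity remains a standard first signal.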
Some of the shuffling of top target areas in Q1 2020 correlates with effects of the pandemic lockdowns in various parts of the world.
Spike in malicious login attempts against European broadcasters
There was a large spike in malicious login attempts against European video service providers and broadcasters during the first quarter of 2020. One attack in late March, after many isolation protocols had been instituted, directed nearly 350,000,000 attempts against a single service provider over a 24-hour period.
Separately, one broadcaster well known across the region was hit with a barrage of attacks over the course of the quarter, with peaks ranging into the billions.
Another noteworthy trend during the first quarter was the number of criminals sharing free access to newspaper accounts. Though often offered as self-promotional vehicles, these giveaways still require credential stuffing campaigns to obtain the working username and password combinations being handed out.
Researchers also observed a decline in the cost of stolen account credentials over the course of the quarter: individual accounts traded for approximately $1 to $5 at the start, with package offers of multiple services going for $10 to $45. Those prices fell as new accounts and lists of recycled credentials populated the market.
Video conference users should not post screen images of Zoom and other video conference sessions on social media, according to Ben-Gurion University of the Negev researchers, who easily identified people from public screenshots of video meetings on Zoom, Microsoft Teams and Google Meet.
Zoom image collage with detected information, along with extracted features of gender, age, face, and username
With the worldwide pandemic, millions of people of all ages have replaced face-to-face contact with video conferencing platforms to collaborate, educate and celebrate with co-workers, family and friends. In April 2020, nearly 500 million people were using these online systems. While there have been many privacy issues associated with video conferencing, the BGU researchers looked at what types of information they could extract from video collage images that were posted online or via social media.
“The findings in our paper indicate that it is relatively easy to collect thousands of publicly available images of video conference meetings and extract personal information about the participants, including their face images, age, gender, and full names,” says Dr. Michael Fire, BGU Department of Software and Information Systems Engineering (SISE). “This type of extracted data can vastly and easily jeopardize people’s security and privacy, affecting adults as well as young children and the elderly.”
The researchers report that it is possible to extract private information from collage images of meeting participants posted on Instagram and Twitter. They used image processing and text recognition tools, as well as social network analysis, to explore a dataset of more than 15,700 collage images and more than 142,000 face images of meeting participants.
Artificial intelligence-based image-processing algorithms helped identify the same individual’s participation at different meetings by simply using either face recognition or other extracted user features like the image background.
The researchers were able to spot faces 80% of the time as well as detect gender and estimate age. Free web-based text recognition libraries allowed the BGU researchers to correctly determine nearly two-thirds of usernames from screenshots.
The researchers identified 1,153 people who likely appeared in more than one meeting, as well as networks of Zoom users in which all the participants were coworkers. “This proves that the privacy and security of individuals and companies are at risk from data exposed on video conference meetings,” according to the research team, which also includes BGU SISE researchers Dima Kagan and Dr. Galit Fuhrmann Alpert.
Cross-referencing facial image data with social network data may cause greater privacy risk as it is possible to identify a user that appears in several video conference meetings and maliciously aggregate different information sources about the targeted individual.
Data extraction process
The research team offers a number of recommendations to prevent privacy and security intrusions. These include not posting video conference images online, or sharing videos; using generic pseudonyms like “iZoom” or “iPhone” rather than a unique username or real name; and using a virtual background rather than a real one, since real backgrounds can help fingerprint a user account across several meetings.
Additionally, the team advises video conferencing operators to augment their platforms with a privacy mode, such as filters or Gaussian noise added to an image, which can disrupt automated facial recognition while keeping the face recognizable to humans.
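To make the Gaussian-noise idea concrete, here is a toy sketch of the general technique (not the researchers' actual method): zero-mean Gaussian perturbation of pixel values, clamped to the valid range. Mild noise can degrade automated face matching while the image stays readable to a human viewer.

```python
import random

def add_gaussian_noise(pixels, sigma: float = 25.0, seed: int = None):
    """Add zero-mean Gaussian noise to a flat sequence of 0-255 pixel values,
    clamping results back into the valid byte range. A fixed seed makes the
    perturbation reproducible for testing."""
    rng = random.Random(seed)
    return [min(255, max(0, round(p + rng.gauss(0.0, sigma)))) for p in pixels]
```

In practice such a filter would be applied only to face regions, and stronger adversarial perturbations exist, but the trade-off is the same: enough distortion to confuse a model, little enough to preserve the image for people.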
“Since organizations are relying on video conferencing to enable their employees to work from home and conduct meetings, they need to better educate and monitor a new set of security and privacy threats,” Fire says. “Parents and children of the elderly also need to be vigilant, as video conferencing is no different than other online activity.”
As the shift to remote work has increased, most businesses are embracing BYOD in the workplace.
In a survey by Bitglass, 69% of respondents said that employees at their companies are allowed to use personal devices to perform their work, while some enable BYOD for contractors, partners, customers, and suppliers.
While the use of personal devices in the work environment is growing rapidly, many are unprepared to balance security with productivity. When asked for their main BYOD security concerns, 63% of respondents said data leakage, 53% said unauthorized access to data and systems, and 52% said malware infections.
Lack of proper steps to protect corporate data
Despite the concerns, the research shows that organizations are allowing BYOD without taking the proper steps to protect corporate data. 51% of the surveyed organizations lack any visibility into file sharing apps, 30% have no visibility or control over mobile enterprise messaging tools, and only 9% have cloud-based anti-malware solutions in place.
Compounding these problems are results that demonstrated that organizations need physical access to devices and even device PINs to secure them. This may be acceptable for managed endpoints, but it is a clear invasion of privacy where BYOD is enabled.
“However, the reality is that today’s work environment requires the flexibility and remote access that the use of personal devices enables. To remedy this standoff, companies need comprehensive cloud security platforms that are designed to secure any interaction between users, devices, apps, or web destinations.”
We have all seen the carefully prepared statement: a cyber incident has occurred, we are investigating, but please do not worry since no data has left our network. Perhaps we will also see the obligatory mention of a ‘sophisticated’ threat actor by way of explanation as to how the company protecting our data was able to be compromised.
This assertion matters because it can be critical in light of regulatory fines, and for some time it was often made in public admissions of ransomware incidents.
Not any more.
Since late 2019, an evolving tactic has emerged to publicly demonstrate not only that criminals were inside a company’s network, but that their unfettered access gave them the opportunity to leave with data (which is regulated): the threat to leak sensitive content if the ransom wasn’t paid. Indeed, such was the ferocity of the claims by victims that the tactic came to be seen as a way to extort more money.
This has sadly proven to be very successful and has led multiple ransomware groups to build similar capabilities and leak sites. According to Coveware, for example, “nearly 9% of all cases it worked on involved ransomware attackers stealing and threatening to leak data.”
This represents a significant problem with the defence that data was not accessed.
Indeed, the very concept of a ransomware attack, or even any other type of cyber incident, needs to be considered not in isolation but potentially as part of a wider campaign. For example, a recent investigation into the use of Hermes ransomware drew the conclusion that it was a vehicle to make evidence gathering more difficult rather than extort money (since the financial systems themselves were already compromised).
This concept, which we originally cited as pseudo ransomware, began to emerge circa WannaCry, but particularly with NotPetya, when ransomware payments did not result in the provision of a working decryption key. This, of course, is conscious intent, as opposed to poor development on the criminals’ part.
What this emergence represents is a level of innovation designed to extort larger payments; moreover, terminology such as “a ransomware attack” is no longer accurate. These are breaches (and indeed the initial entry vector often points to this), and with data exfiltration now the modus operandi for many of the more capable criminal groups, we must reframe our initial assertions.
This equally will extend beyond ransomware, to the DDoS attack which may have been a smokescreen while the ultimate purpose was to extort money from victims, or indeed any variety of threats.
As we consider how the threat landscape has changed, how we address and define each attack will become more critical to articulating the importance of cybersecurity. Simply reducing an incident to a technical description fails to communicate the impact such campaigns have on wider society. For example, the use of trolls to spread false information is more likely a deliberate attempt by a capable adversary to influence the democratic process. A ransomware attack may be a direct attempt to cause a shutdown within an organization, forcing a company or academic institution to pay seven-figure sums in order to continue operations.
Cybersecurity (or infosec) is a critical function within our society and ensuring it is articulated as such is one of our biggest challenges.
In one second, the human eye can only scan through a few photographs. Computers, on the other hand, are capable of performing billions of calculations in the same amount of time. With the explosion of social media, images have become the new social currency on the internet.
(Image caption: An AI algorithm will identify a cat in the picture on the left but will not detect a cat in the picture on the right.)
Today, Facebook and Instagram can automatically tag a user in photos, while Google Photos can group one’s photos together via the people present in those photos using Google’s own image recognition technology.
Dealing with threats against digital privacy today, therefore, extends beyond just stopping humans from seeing the photos, but also preventing machines from harvesting personal data from images. The frontiers of privacy protection need to be extended now to include machines.
Safeguarding sensitive information in photos
Led by Professor Mohan Kankanhalli, Dean of the School of Computing at the National University of Singapore (NUS), the research team from the School’s Department of Computer Science has developed a technique that safeguards sensitive information in photos by making subtle changes that are almost imperceptible to humans but render selected features undetectable by known algorithms.
Visual distortion using currently available technologies will ruin the aesthetics of the photograph as the image needs to be heavily altered to fool the machines. To overcome this limitation, the research team developed a “human sensitivity map” that quantifies how humans react to visual distortion in different parts of an image across a wide variety of scenes.
The development process started with a study involving 234 participants and a set of 860 images. Participants were shown two copies of the same image and they had to pick out the copy that was visually distorted.
After analysing the results, the research team discovered that human sensitivity is influenced by multiple factors, including illumination, texture, object sentiment and semantics.
Applying visual distortion with minimal disruption
Using this “human sensitivity map”, the team fine-tuned their technique to apply visual distortion with minimal disruption to the image aesthetics, injecting it into areas with low human sensitivity.
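The approach described above can be sketched in a few lines of code. This is a simplified illustration only, not the NUS team's actual method: the sensitivity values, noise model, and parameters here are invented for demonstration.

```python
import random

def perturb(image, sensitivity, max_noise=12, seed=0):
    """Add distortion preferentially to low-sensitivity regions.

    image       -- 2D list of grayscale values (0-255)
    sensitivity -- 2D list, same shape, 0.0 (insensitive) .. 1.0 (sensitive)
    max_noise   -- largest perturbation applied where sensitivity is 0
    """
    rng = random.Random(seed)
    out = []
    for img_row, sens_row in zip(image, sensitivity):
        row = []
        for pixel, s in zip(img_row, sens_row):
            # Noise amplitude shrinks as human sensitivity grows, so the
            # distortion concentrates where viewers are least likely to notice.
            amp = max_noise * (1.0 - s)
            noisy = pixel + rng.uniform(-amp, amp)
            row.append(min(255, max(0, round(noisy))))
        out.append(row)
    return out

# Toy example: a flat background region (low sensitivity, row 0)
# versus a salient foreground region (high sensitivity, row 1).
image = [[120, 120], [120, 120]]
sens = [[0.1, 0.1], [0.9, 0.9]]
result = perturb(image, sens)
```

A real implementation would additionally optimize the noise so that a target recognition model misfires, which this sketch does not attempt.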
The NUS team took six months of research to develop this novel technique.
“It is too late to stop people from posting photos on social media in the interest of digital privacy. However, the reliance on AI is something we can target as the threat from human stalkers pales in comparison to the might of machines. Our solution enables the best of both worlds as users can still post their photos online safe from the prying eye of an algorithm,” said Prof Kankanhalli.
End users can use this technology to help mask vital attributes on their photos before posting them online and there is also the possibility of social media platforms integrating this into their system by default. This will introduce an additional layer of privacy protection and peace of mind.
The team also plans to extend this technology to videos, which is another prominent type of media frequently shared on social media platforms.
More and more companies, self-employed and private customers are using Boxcryptor to protect sensitive data – primarily in the cloud. Boxcryptor ensures that nobody but authorized persons have access to the data. Cloud providers and their staff, as well as potential hackers are reliably excluded. The audit verified whether this protection is guaranteed.
During the audit, Kudelski was given access to the source code of Boxcryptor for Windows and to the internal documentation.
Kudelski’s report concluded: “All these components were logically correct and did not show any significant weakness under scrutiny. It is important to note that the codebase we audited was not showing any signs of malicious intent.”
The goal of the audit
The goal of the audit was to give all interested parties an indirect insight into the software, so that they can be sure that no backdoors or security holes are present in the code.
Robert Freudenreich, CTO of Boxcryptor, about the benefits of an audit: “For private users, Boxcryptor is a means of digital self-defense against curious third parties, for companies and organizations a way to achieve true GDPR compliance and complete control over business data. With software that is so security relevant, it is understandable that users want to be sure that the software is flawless.”
The audit process started at the beginning of May with short communication lines to the developers and managers in the Boxcryptor team. If Kudelski had found a serious security vulnerability, they would not have held it back until the final report, but would have reported the problem immediately.
A problem rated as “medium”
The problem rated as medium is a part of the code that affects the connection to cloud providers using the WebDAV protocol. Theoretically, the operators of such cloud storage providers could have tried to inject code into Boxcryptor for Windows.
In practice, however, this code was never used by Boxcryptor, so there was no danger for Boxcryptor users at any time. In response to the audit, this redundant part of the code was removed.
Two problems classified as “low” and further observations
One problem classified as low concerns the user password: to protect users with insecure passwords, it was suggested that passwords be hashed with even more iterations and that the minimum password length be increased, suggestions that Boxcryptor implemented immediately.
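Iterated password hashing of the kind the auditors recommended can be sketched with Python's standard library. The iteration count and minimum length below are illustrative defaults, not Boxcryptor's actual parameters.

```python
import hashlib
import hmac
import os

MIN_PASSWORD_LENGTH = 12   # illustrative minimum, not Boxcryptor's value
ITERATIONS = 100_000       # more iterations make offline brute force slower

def hash_password(password, salt=None):
    """Derive a slow, salted hash from a password (PBKDF2-HMAC-SHA256)."""
    if len(password) < MIN_PASSWORD_LENGTH:
        raise ValueError("password too short")
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify(password, salt, digest):
    """Recompute the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
```

Raising `ITERATIONS` directly raises an attacker's cost per guess, which is why "hash more" was a cheap, effective remediation.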
The second problem classified as low was theoretical and concerned the reading of the Boxcryptor configuration.
Over the last few decades, as the information era has matured, it has shaped the world of cryptography and made it a varied landscape. Amongst the myriad of encoding methods and cryptosystems currently available for ensuring secure data transfers and user identification, some have become quite popular because of their safety or practicality.
For example, if you have ever been given the option to log onto a website using your Facebook or Gmail ID and password, you have encountered a single sign-on (SSO) system at work. The same goes for most smartphones, where signing in with a single username and password combination allows access to many different services and applications.
SSO schemes give users the option to access multiple systems by signing in to just one specific system. This specific system is called the “identity provider” and is regarded as a trusted entity that can verify and store the identity of the user. When the user attempts to access a service via the SSO, the “service provider” asks this identity provider to authenticate the user.
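The identity-provider/service-provider handshake can be sketched as follows. This is a minimal toy, not any specific SSO protocol: the shared signing key, class names, and token format are invented for illustration (real systems use standards such as SAML or OpenID Connect, typically with asymmetric signatures).

```python
import hashlib
import hmac
import json
import time

SECRET = b"shared-idp-signing-key"  # toy shared key; real IdPs sign asymmetrically

class IdentityProvider:
    """Trusted entity that verifies and stores the identity of the user."""
    def __init__(self):
        self.users = {"alice": "s3cret-passphrase"}

    def authenticate(self, username, password):
        # On success, return a signed assertion the service provider can check.
        if self.users.get(username) != password:
            return None
        payload = json.dumps({"sub": username, "iat": int(time.time())})
        sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
        return payload, sig

class ServiceProvider:
    """Relies on the identity provider instead of storing passwords itself."""
    def accepts(self, assertion):
        if assertion is None:
            return False
        payload, sig = assertion
        expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
        return hmac.compare_digest(sig, expected)

idp, sp = IdentityProvider(), ServiceProvider()
ok = sp.accepts(idp.authenticate("alice", "s3cret-passphrase"))
bad = sp.accepts(idp.authenticate("alice", "wrong"))
```

The key property is that the service provider never sees the password; it only validates the identity provider's signed assertion.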
SSO advantages and privacy concerns
The advantages of SSO systems are many. For one, users need not remember several username and password combinations for each website or application. This translates into fewer people forgetting their passwords and, in turn, fewer telephone calls to IT support centers.
Moreover, SSO reduces the hassle of logging in, which can, for example, encourage employees to use their company’s security-oriented tools for tasks such as secure file transfer.
But with these advantages come some grave concerns. SSO systems are often run by Big Tech companies, who have, in the past, been reported to gather people’s personal information from apps and websites (service providers) without their consent, for targeted advertising and other marketing purposes.
Some people are also concerned that their ID and password could be stored locally by third parties when they provide them to the SSO mechanism.
A fast, privacy-preserving algorithm
In an effort to address these problems, Associate Professor Satoshi Iriyama from Tokyo University of Science and his colleague Dr Maki Kihara have recently developed a new SSO algorithm that by design prevents such wholesale information exchange. In their paper, they describe the new algorithm in great detail after going over their motivations for developing it.
Dr Iriyama states: “We aimed to develop an SSO algorithm that does not disclose the user’s identity and sensitive personal information to the service provider. In this way, our SSO algorithm uses personal information only for authentication of the user, as originally intended when SSO systems were introduced.”
Because of the way this SSO algorithm is designed, it is essentially impossible for user information to be disclosed without authorization. This is achieved, as explained by Dr Iriyama, by applying the principle of “handling information while it is still encrypted.”
In their SSO algorithm, all parties exchange encrypted messages but never exchange decryption keys, and no one is ever in possession of all the pieces of the puzzle because no one has the keys to all the information.
While the service provider (not the identity provider) gets to know whether a user was successfully authenticated, they do not get access to the user’s identity and any of their sensitive personal information. This in turn breaks the link that allows identity providers to draw specific user information from service providers.
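The privacy property described here, that the service provider learns only a success bit and never the user's identity attributes, can be illustrated with a toy sketch. To be clear, this is NOT the Tokyo University of Science algorithm (which exchanges encrypted messages without ever sharing decryption keys); the construction below is invented purely to show the "one bit out" idea.

```python
import hashlib
import hmac
import os

class IdentityProvider:
    def __init__(self):
        self._verifiers = {}  # user -> (salt, hash); plaintext is never stored

    def register(self, user, credential):
        """Store only a salted hash of the credential (password, biometric, etc.)."""
        salt = os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", credential, salt, 50_000)
        self._verifiers[user] = (salt, digest)

    def check(self, user, credential):
        if user not in self._verifiers:
            return False
        salt, digest = self._verifiers[user]
        candidate = hashlib.pbkdf2_hmac("sha256", credential, salt, 50_000)
        return hmac.compare_digest(candidate, digest)

def service_provider_view(idp, user, credential):
    # Everything the service provider ever receives: a single boolean.
    return {"authenticated": idp.check(user, credential)}

idp = IdentityProvider()
idp.register("bob", b"fingerprint-template-bytes")  # any identity info works
view = service_provider_view(idp, "bob", b"fingerprint-template-bytes")
```

Because the service provider's interface is one boolean wide, it structurally cannot harvest identity attributes, which is the link-breaking effect the researchers describe.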
The proposed scheme offers many other advantages. In terms of security, it is impervious by design to all typical forms of attack by which information or passwords are stolen. For instance, as Dr Iriyama explains, “Our algorithm can be used not only with an ID and a password, but also with any other type of identity information, such as biometrics, credit card data, and unique numbers known by the user.”
This also means that users can only provide identity information that they wish to disclose, reducing the risk of Big Tech companies or other third parties siphoning off personal information. In addition, the algorithm runs remarkably fast, an essential quality to ensure that the computational burden does not hinder its implementation.
Even before lockdowns, there was a steady migration toward more flexible workforce arrangements. Given the new normal of so many more people working from home—on top of a pile of evidence showing that productivity and quality of life typically go up with remote work—it is inevitable that many more companies will continue to offer those arrangements even as stay-at-home orders are lifted.
Unfortunately, a boom in remote access goes hand-in-hand with an increased risk to sensitive information. Verizon reports that 30 percent of recent data breaches were a direct result of the move to web applications and services.
Data is much harder to track, govern, and protect when it lives inside a cloud. In large part, these threats are associated with internet-exposed storage.
Emerging threat matrix
Traditionally, system administrators rely on perimeter security to stop outside intruders, yet even the most conscientious are exposed after a single missed or delayed update. Beyond that, insiders are widely considered the biggest threat to data security.
Misconfiguration accounts for the vast majority of insider errors. It is usually the result of failure to properly secure cloud storage or firewall settings, and largely relates to unsecured databases or file storage that are directly exposed on a cloud service.
In many cases, employees mislabel private documents by setting storage privileges to public. According to the Verizon report, among financial services and insurance firms, this is now the second most common type of misconfiguration error.
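A misconfiguration audit of this kind can be sketched in a few lines. The configuration format, bucket names, and ACL labels below are invented for illustration; a real check would query the cloud provider's own API.

```python
# Hypothetical inventory of storage buckets and their settings.
buckets = [
    {"name": "quarterly-reports", "acl": "private",           "encrypted": True},
    {"name": "customer-exports",  "acl": "public-read",       "encrypted": True},
    {"name": "tmp-migration",     "acl": "public-read-write", "encrypted": False},
]

def find_misconfigurations(buckets):
    """Flag buckets whose privileges were set to public or that lack encryption."""
    findings = []
    for b in buckets:
        if b["acl"].startswith("public"):
            findings.append((b["name"], "storage privileges set to public"))
        if not b["encrypted"]:
            findings.append((b["name"], "encryption at rest disabled"))
    return findings

issues = find_misconfigurations(buckets)
```

Running such a check continuously, rather than once at deployment, is what catches the "mislabeled private document" before an outsider does.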
Addressing this usually means getting open sharing under control, figuring out where sensitive data resides and who owns it, and running a certificate program to align data access with organizational needs.
Optimistically, companies hope that a combination of technological safeguards and diligence on the part of users—whether employees, partners, or customers—will eliminate, or at least minimize, costly mistakes.
Other internal threats come as a part of a cloud migration or backup process, where a system admin or DBA will often stand up an instance of data on a cloud platform but fail to put inconvenient but necessary access controls in place.
Consider the example of cloud data warehouses. Providers such as Amazon, Google, and Snowflake now make it simple to store vast quantities of data cheaply, to migrate data easily, and to scale up or down at will. Little wonder that these services are growing so quickly.
Yet even the best services need some help when it comes to tracking data access. Some tools make it easy to authenticate remote users before letting them inside the gate of the cloud data warehouse. After that, though, things often get murky. Who is accessing which data, how much of it, when, and from where?
These are issues that every company must confront. That data is ripe for exploitation by dishonest insiders, or by careless employees, with serious consequences. In more fortunate circumstances, it is discovered by security teams, or by management who make an irate call to the CISO.
Born in the cloud
More approaches to data security that are born in the cloud are now appearing, and the new normal means the enterprise is motivated to adapt. As most organizations turn to the cloud for what used to be on-premises IT deployments, the responsibility and techniques to secure the infrastructure and applications that hold data are also being moved to the cloud.
For instance, infrastructure-as-a-service (IaaS) provides virtualized computing resources like virtual firewalls and network security hardware, and virtual intrusion detection and prevention, but these are an intermediate step at best.
The idea is that IaaS can offer a set of defenses at scale for all of a cloud provider’s customers, built into the platform itself, which will relieve an individual cloud customer from having to do many of the things that used to be on-premises data-protection requirements.
But what has really changed? A top certification may be enough to be called “above average” data security, but in reality that security still remains totally contingent on perimeter defenses, hardware appliances, and proper configuration by system administrators and DBAs. And it’s still only as good as the data hygiene of end users. There are a lot of “ifs” and “buts,” which is nothing new.
Data Security-as-a-Service (DSaaS) complements IaaS as it integrates data protection at the application layer. This places data access services in the path between users who want data and the data itself. It is also portable because it goes where the application goes.
Developers can embed data access governance and protection into applications through a thin layer of technology wrapped around database drivers or APIs, which all applications use to connect to their databases. An obvious advantage is that this is more easily maintained over time.
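A thin governance layer wrapped around a database driver might look like the sketch below, here using SQLite for self-containment. The policy format, role names, and the naive table-matching logic are invented for illustration; a production wrapper would parse SQL properly rather than scan for substrings.

```python
import sqlite3

# Hypothetical per-role policy: which tables each role may touch.
POLICY = {"analyst": {"orders"}, "admin": {"orders", "customers"}}

class GovernedConnection:
    """Intercepts queries on the way to the driver and enforces role policy."""
    def __init__(self, conn, role):
        self._conn, self._role = conn, role

    def execute(self, sql, params=()):
        allowed = POLICY.get(self._role, set())
        for table in ("orders", "customers"):  # naive substring scan, demo only
            if table in sql.lower() and table not in allowed:
                raise PermissionError(f"role {self._role!r} may not access {table!r}")
        return self._conn.execute(sql, params)

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE customers (name TEXT)")
db.execute("INSERT INTO customers VALUES ('Ada')")

admin = GovernedConnection(db, "admin")
rows = admin.execute("SELECT name FROM customers").fetchall()

analyst = GovernedConnection(db, "analyst")
try:
    analyst.execute("SELECT name FROM customers")
    blocked = False
except PermissionError:
    blocked = True
```

Because the wrapper sits at the driver layer that every application already uses, the governance travels with the application wherever it is deployed, which is the portability point made above.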
Data security is a shared responsibility among security pros, end users, and cloud providers. As the new normal becomes reality, shared responsibility means that a cloud provider handles the underlying network security such that the cloud infrastructure ensures basic, customer-level network isolation and secure physical routers and switches.
From here, under the DSaaS model the cloud service provider offers DSaaS—or else the customer provisions it through a third party—as a set of automated data security components that complete a secure cloud environment.
This makes it possible to govern each user at a granular level so that they access only the types of data they should, and perform only those actions with the data for which they are authorized. CISOs can implement and adapt rulesets to govern the flow of data by type and role. In terms of data protection, application-layer data security makes it possible to isolate and block bad traffic, including excessive data volumes, down to an individual user.
From this perspective, DSaaS can act as both an intrusion detection system (IDS) and intrusion prevention system (IPS). It can inspect data access and analyze it for intrusion attempts or vulnerabilities in workload components that could potentially exploit a cloud environment, and then automatically stop data access in progress until system admins can look into the situation.
At this level it is also feasible to log data activity such as what each user does with the data they access, satisfying both security and compliance—a notable accomplishment, considering that the two functions are often at odds with one another.
Incorporating security at the application layer also offers data protection capabilities that are similar to network intrusion appliances, or security agents that reside at the OS level on a virtual machine or at the hypervisor level.
Moreover, DSaaS governance and protection is so fine-grained that it does not inhibit traffic flow, data availability, and uptime even in the face of multiple sustained attacks.
Everyone is talking about how the “new normal” is impacting data security, but the enterprise was well on this path before the pandemic. It is tempting for vigilance to give rise to pessimism since data security has too often been a laggard, and an inventory of the cloud data-security bona fides of most companies is not encouraging.
However, data protection and governance can be assured should we adopt shared models for responsibility and finely tuned, application-level controls. It’s a new world and we can be ready for it.
Working remotely from home has become a reality for millions of people around the world, putting pressure on IT and security teams to ensure that remote employees not only remain as productive as possible, but also that they keep themselves and corporate data as secure as possible.
Achieving a balance between productivity and security is even harder, given that most organizations do not have adequate visibility or control over what their employees are doing on corporate owned smartphones and laptops while outside the office. Even less so in the case of BYOD.
Remote workers attempting to access risky content
NetMotion recently aggregated a sample of anonymized network traffic data, searching specifically for evidence of users attempting to access flagged (or blocked) URLs, otherwise known as risky content. The analysis, which is derived from data gathered between May 30th – June 24th, 2020, revealed that employees clicked on 76,440 links that took them to potentially dangerous websites.
All of these sites were visited on work-assigned devices while using either home or public Wi-Fi or a cellular network connection. The data also revealed several primary risk categories, which were identified using machine learning and based on the reputation scores of over 750 million known domains, more than 4 billion IP addresses and in excess of 32 billion URLs.
The assumption is that a large number of employees connected to protected internal (non-public) networks would have been prevented from accessing this risky content.
- Employees, on average, encounter 8.5 risky URLs per day, or 59 per week
- Remote workers also access around 31 malware sites per month, and 10 phishing domains. That equates to one malware site every day, and one phishing domain every 3 days
- The most common types of high-risk URLs encountered, in order of prevalence, were botnets, malware sites, spam and adware, and phishing and fraud sites
- Over a quarter of the high risk URLs visited by employees were related to botnets
- Almost 1 in 5 risky links led to sites containing spam, adware or malware
- Phishing and fraud, which garner an outsized proportion of news, account for only 4% of the URLs visited
- The ‘other’ category, representing 51% of the data in the chart above, is made up of ‘low-severity’ risky content, such as websites that use proxies, translations and other methods that circumvent URL filtering or monitoring.
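Category tallies like the ones above come from matching clicked URLs against a domain-reputation list. The sketch below shows the shape of that lookup; the domains and their labels are invented for illustration, and real reputation services score billions of domains and URLs as described earlier.

```python
# Hypothetical reputation list: flagged domain -> risk category.
REPUTATION = {
    "botnet-c2.example":   "botnet",
    "free-codecs.example": "malware",
    "win-a-prize.example": "phishing",
    "ads-galore.example":  "adware",
}

def classify(urls):
    """Tally clicked URLs by the risk category of their domain."""
    counts = {}
    for url in urls:
        domain = url.split("/")[2]  # scheme://domain/path -> index 2
        category = REPUTATION.get(domain, "unflagged")
        counts[category] = counts.get(category, 0) + 1
    return counts

clicked = [
    "http://botnet-c2.example/beacon",
    "http://botnet-c2.example/update",
    "http://win-a-prize.example/login",
    "http://intranet.example/wiki",
]
tally = classify(clicked)
```

On a corporate network a firewall performs this lookup inline and blocks the request; the finding above is that remote workers' traffic often bypasses that step entirely.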
2020, a wake-up call for the enterprise and the IT and security teams
IT and security organizations invest heavily to protect their perimeter. Workers located behind desks that are connected to corporate networks are generally safe, secure and productive. They are often unaware that several layers of technology, such as firewalls, are in place to protect them.
With the world continuing to shift to a more mobile and remote environment, 2020 has been a wake-up call for the enterprise and the IT and security teams that support it.
“As this research highlights, remote workers are frequently accessing risky content that would normally be blocked by firewalls and other security tools that monitor internal network traffic. Naturally, this poses an enormous threat to the enterprise,” said Achi Lewis, EMEA Director, NetMotion Software.
“Added to this, many organizations have no visibility into the activity taking place on external networks, let alone any means to prevent it. With such a rapid shift to remote work, enterprise security teams have been left flat-footed, unable to adequately protect users in the face of increasingly sophisticated cyberattacks.”
As a result, security leaders need to look to SDP and other edge-to-edge security technologies that can provide web filtering on any network as they seek to evolve outdated network security strategies.
Security applications are subject to the age-old computing axiom of “garbage in, garbage out.” To work effectively, they need the right data. Too much irrelevant data may overwhelm the processing and analytics of solutions and the results they deliver. Too little, and they may miss something crucial. It’s mainly a question of relevance, volume and velocity.
How much data?
One of the most central questions, then, is how much data is enough? What is the correct balance? This is not a question to settle merely through compromise; the answer should represent the optimal level of what is needed. Understanding this level is not merely a question of the amount of data but also of how the data will be used, the level or granularity of the data, the processing performed by the solution, and the quality of the algorithms or machine learning capabilities, if employed.
At a minimum, there needs to be full coverage or representation for the particular security task. For instance, if the object is to find an attacker who may have gained access to a network and is quietly conducting reconnaissance and lateral movements to gain access to assets, one may be primarily concerned with data center traffic and activity within the corporate network.
Inbound or outbound internet traffic may be less of a concern, unless one is specifically focused on command-and-control or exfiltration activities, which tend to surface late in an attack and are therefore less useful for finding attack activity early.
If one is more focused on access control and remote users, access to VPN traffic is essential to the security solution. Other solutions may be more focused on traffic from the internet to detect a malware attack. The object is to look in the right place and ensure that there is not a blind spot limiting a solution’s ability to make the right assessments or produce the proper actions. If a security solution cannot see, or lacks access to, relevant areas of network traffic, it will not be able to properly serve its function.
The next consideration is what data is actually needed. Many solutions need only packet header information, while others need to fully inspect each packet. Obviously, working with metadata is far less burdensome than deep packet inspection. Metadata is “data on data” information that may provide the proper level of expected flow behavior information or telemetry. IPFIX or NetFlow offers a wealth of critical details and may be sufficient for the needs of a security solution. Full packet ingestion should only be performed when necessary and header or flow details will not tell the entire story.
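The point that flow metadata is often sufficient can be illustrated with a sketch: detecting the quiet lateral movement described earlier using only NetFlow/IPFIX-style records, with no packet payloads at all. The record fields, addresses, and threshold below are invented for demonstration.

```python
# Hypothetical flow records, loosely mirroring NetFlow/IPFIX fields.
flows = [
    {"src": "10.0.1.5", "dst": "10.0.2.9",  "dst_port": 445, "bytes": 4200},
    {"src": "10.0.1.5", "dst": "10.0.2.10", "dst_port": 445, "bytes": 3900},
    {"src": "10.0.1.5", "dst": "10.0.2.11", "dst_port": 445, "bytes": 4100},
    {"src": "10.0.3.7", "dst": "10.0.2.9",  "dst_port": 443, "bytes": 1800},
]

def lateral_movement_suspects(flows, min_distinct_hosts=3):
    """Flag internal sources touching many distinct hosts on SMB (port 445),
    a fan-out pattern consistent with reconnaissance or lateral movement."""
    fanout = {}
    for f in flows:
        if f["dst_port"] == 445:
            fanout.setdefault(f["src"], set()).add(f["dst"])
    return [src for src, dsts in fanout.items() if len(dsts) >= min_distinct_hosts]

suspects = lateral_movement_suspects(flows)
```

Nothing here required deep packet inspection; the who-talked-to-whom telemetry alone carried the signal, which is why metadata is often the right level of detail.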
Network packet brokers have provided security solutions with network traffic from the right locations and the right level of detail for some time now. Some of these have been able to host security solutions in virtual environments or simply pass traffic to standalone appliances. Often the solutions have firm limits on the applications they support, typically from an approved list of technology partners.
Requiring certified or approved vendors may help ensure compatibility, or even some kind of optimization, but it also imposes a lack of flexibility and openness. Compatibility is likely not a real issue, and more vendors should favor at least somewhat greater openness.
Ideally, the network packet broker would support any or all security solutions, provide traffic from the relevant portions of the network and perform all of the processing necessary for the solution to be able to do its particular job. Security groups would benefit from being able to easily deploy the latest technologies and more appropriate solutions for their needs. In addition, unburdening each security solution from utility processing, such as TLS decryption, metadata extraction, IPFIX generation and other chores can help ensure not only that solutions get the right traffic in the right way but also that they are not slowed down with performance-draining tasks.
This kind of cooperation between a network packet broker and a security solution can help each with the network traffic they need to operate effectively without producing a burden on each solution. Each solution then needs to work efficiently and effectively on the traffic using algorithms that are actually tuned to the data. It’s one thing to have algorithms that logically meet the requirements of a security task, but it is another to hone those algorithms to more effectively fit the data. These streamlined algorithms make best use of data to achieve better fidelity and accuracy.
Security solutions need to keep evolving, but the way they are deployed also needs to evolve. Fixing these two issues will advance an organization’s ability to meet ever-increasing challenges.
Cooperation is required
Organizations have been plagued by security solutions that either completely miss the types of threats they are intended to find or overwhelm security teams with so many alerts that finding a real indication of trouble is like finding a needle in a haystack. Data breaches continue to be a persistent threat. More company intellectual property and secrets are stolen at alarming rates to the extent that a country’s GDP actually suffers. Ransomware is still ramping up. Can network security achieve a new level of effectiveness?
The deficiency is not entirely the fault of the security solutions themselves. Network infrastructure should take some of the responsibility. Oftentimes, missed threats or a flood of alerts comprised chiefly of false positives are the result of a garbage-in, garbage-out data issue: not having the right traffic, being overloaded with irrelevant traffic, drowning in too much detail, or not having enough.
Sometimes traffic-processing utility functions bog down a security solution’s performance and require compromises in how much data can be ingested. Such conditions can and should be remedied by new generations of network packet brokers that better serve security solutions. The two need to work more cooperatively—and effectively.
With more cooperation and partnership between security solutions and network infrastructure, network security can advance and better meet the growing challenges and needs of organizations. Perhaps the garbage in, garbage out issues can finally be put to rest and discarded in the trash.
The rapid increase in cyberattacks and pressures escalating from changes prompted by COVID-19 have shifted consumer behavior. The findings of a report by the World Economic Forum outline core cybersecurity principles and point to how companies and investors must significantly reduce cyber risk to remain competitive.
“There is a serious imbalance between the ‘time to market’ pressures and the ‘time to security’ requirements for shiny new products and gadgets,” said Algirde Pipikaite, Cybersecurity Lead, World Economic Forum.
“With the rapid increase of cyberattacks, companies need to prove to consumers the security of their data. As the market shifts due to the rapid technology adoption in all spheres of our lives, we expect to see more investment in companies prioritizing security and their longer-term success.”
The cyber essentials
The cyber essentials include explicit core principles and requirements for new companies and products. They represent the most important requirements that, if implemented, will provide a robust cybersecurity framework encompassing organizational, product and infrastructure security.
“We see two types of early-stage companies: the ones that treat cybersecurity as a checkbox compliance issue, and the ones that understand that it is fundamental to maintain the trust of clients”, said Craig Froelich, CISO at Bank of America.
“If an emerging company fully commits to cybersecurity, then its commitment will be rewarded by market confidence and consumer trust.”
The cyber essentials need to be tailored to an organization’s size, nature and type of product. The report details each, followed by practical steps for their implementation and guidance for investors on how to validate them.
“As digitalization spreads across the global economy and within businesses, we typically focus on incidents of significant impact, on larger organizations covered by the media, or businesses where regulators require certain levels of disclosure,” said Martina Cheung, President of S&P Global Market Intelligence.
“We should not forget, however, that entrepreneurs are typically small and medium-sized enterprises (SMEs), and that SMEs represent about 90% of businesses and more than 50% of employment worldwide. Cyber-related incidents could have a dramatic impact on their survival.”
“An overwhelming majority of executives continue to be largely dissatisfied with the effectiveness of their cybersecurity spending, often all too myopically focused on the newest technologies,” said Benjamin Haddad, Director, Accenture Ventures and a contributor to the report.
“A strategic trade-off needs careful consideration to benefit fully from the combined power of cyber innovation, while minimizing the threat and enabling people to perform effectively.”
With the economy and society growing ever more dependent on technology and particularly so in the COVID-19 pandemic, the security and privacy of our digital tools are more important than ever.
31% of Americans are concerned about their data security while working from home during the global health crisis, according to a Unisys Security survey.
Consumer security concerns
The survey found that overall concerns around internet security (including computer viruses and hacking) have plunged in the last year, falling 13 points from 2019 and ranking the lowest among the four primary areas of security surveyed for the first time since 2010.
According to the FBI, online crimes reported to the IC3 have increased by 400% as a result of the pandemic, with as many as 4,000 incidents per day.
The survey also found that most Americans (70%) are not concerned about the risk of being scammed during or about the health crisis. This lack of concern was even more stark compared to the rest of the world, as Americans were 24% less likely to report concern about a data breach during the pandemic as compared to the global average.
Americans were much more likely to be concerned about their country’s economic stability, with 60% registering serious concern (extremely or very concerned), and the stability of the country’s health infrastructure, with 55% extremely or very concerned.
Personal safety concerns rise to the top
The survey also asked U.S. respondents about their concerns related to personal security, national security and financial security.
Not surprisingly, concerns around personal safety and natural disasters and epidemics increased by 17% and 6% from 2019, respectively; however, that was met with a stark drop in concerns over national security, which saw a 19% decrease from 2019.
“It’s not surprising to see people’s level of concern for their personal safety jump in light of the global health crisis. However, the fact that it is not only matched by, but exceeded by, a drop in concerns around hacking, scamming or online fraud reflects a false sense of consumer security,” said Unisys CISO Mat Newfield.
“Hackers target healthcare and essential services organizations looking to steal intellectual property and intelligence, such as details on national health policies and COVID-19 research.
“And hackers are relying on tricks like ‘password spraying,’ which involves an attacker repeatedly using common passwords on many accounts to gain access, putting our most critical infrastructures at risk potentially from the click of a single working-from-home employee.
“This underscores the need for businesses to ensure they are placing a clear and concerted emphasis on proper training for their employees working from home and adopting a zero trust security architecture that leverages best practices like encryption and microsegmentation.”
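Password spraying leaves a distinctive trace in authentication logs: the same password candidate failing across many distinct accounts. As a rough illustration of the idea (the log format, field names, and threshold below are all hypothetical), a detector might group failed logins by a password fingerprint:

```python
from collections import defaultdict

# Hypothetical log records: (timestamp, username, pw_fingerprint, success).
# We assume the attempted password can be reduced to a keyed fingerprint
# server-side, so plaintext passwords are never stored.
def detect_spraying(events, account_threshold=10):
    """Return password fingerprints that failed against many distinct accounts."""
    accounts_per_fp = defaultdict(set)
    for ts, user, fp, success in events:
        if not success:
            accounts_per_fp[fp].add(user)
    return {fp for fp, users in accounts_per_fp.items()
            if len(users) >= account_threshold}
```

In practice the same logic usually lives in a SIEM rule with a sliding time window, paired with lockout policies keyed on source IP rather than per account, so the attacker's slow, distributed guesses still trip an alert.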
Demographic differences take shape
More than 15,000 consumers in 15 countries were surveyed, including more than 1,000 in the U.S., in March and April 2020. On a scale of zero to 300, with 300 representing the highest level of concern, the U.S. index is now at 159, a serious level of concern and the second-highest among the nine developed countries surveyed.
Notably, the survey found that security concerns in all countries are higher among women, younger people and those with lower incomes. In the U.S., the survey found concern was 12 points higher among women than men and 13 points higher among 18-to-24-year-olds than respondents aged 55 to 65.
The level of concern for U.S. respondents with lower incomes was 14 points higher than higher-income respondents.
“The survey shines a spotlight on the significant ways that COVID-19 has impacted everyone, especially women, young adults and those with lower incomes,” said Unisys CMO Ann Sung Ruckstuhl.
“According to the U.S. Census Bureau, nearly half of adults 18 and over have either lost employment income or another adult in their household has lost employment income since the beginning of the pandemic.
“For many women, particularly those with children at home, the pandemic has only magnified the challenges they have long been facing as they juggle career and family.”
The Cloud Security Alliance has released a report examining privacy and security of patient data in the cloud.
In the wake of COVID-19, health delivery organizations (HDOs) have quickly increased their utilization of telehealth capabilities (i.e., remote patient monitoring (RPM) and telemedicine) to treat patients in their homes. These technology solutions allow for the delivery of patient treatment, comply with COVID-19 mitigation best practices, and reduce the risk of exposure for healthcare providers.
Remote healthcare comes with security challenges
Going forward, telehealth solutions — which send large volumes of patient data over the internet and into the cloud — can be used to remotely monitor and treat patients who have mild cases of the virus, as well as other health issues. However, this remote environment also comes with an array of privacy and security challenges.
“For health care systems, telehealth has emerged as a critical technology for safe and efficient communications between healthcare providers and patients, and accordingly, it’s vital to review the end-to-end architecture of a telehealth delivery system,” said Dr. Jim Angle, co-chair of CSA’s Health Information Management Working Group.
“A full analysis can help determine whether privacy and security vulnerabilities exist, what security controls are required for proper cybersecurity of the telehealth ecosystem, and if patient privacy protections are adequate.”
The HDO must understand regulations and technologies
With the increased use of telehealth in the cloud, HDOs must adequately and proactively address data, privacy, and security issues. The HDO cannot leave this up to the cloud service provider, as it is a shared responsibility. The HDO must understand regulatory requirements, as well as the technologies that support the system.
Regulatory mandates may span multiple jurisdictions, and requirements may include both the GDPR and HIPAA. Armed with the right information, the HDO can implement and maintain a secure and robust telehealth program.
TrustArc announced the results of its survey on how organizations are protecting and leveraging data, their most valuable asset. The survey polled more than 1,500 respondents from around the world at all levels of the organization.
“The TrustArc survey highlights just how difficult it can be to comply with even a single new regulation, such as CCPA, let alone the entire list of existing laws. The results also show how the COVID-19 pandemic and its attendant technologies, such as video conferencing, have exacerbated an already difficult privacy challenge and forced respondents to rethink their approaches.”
CCPA compliance readiness mostly lacking, prior GDPR preparedness a boost
- 29% of respondents say they have just started planning for CCPA.
- More than 20% of respondents report they are somewhat or very unlikely to be fully compliant with CCPA on July 1, or don’t know whether they will be.
- Just 14% of respondents are done with CCPA compliance. Nine percent have not started with CCPA compliance, and 15% have a plan but have not started implementation.
- Of respondents who reported as being slightly or very knowledgeable about CCPA and GDPR regulations, 82% are leveraging at least some of the work they did for GDPR in implementing CCPA requirements.
Privacy professionals still use inefficient technologies for compliance programs
Though 90% of respondents agree or strongly agree that they are “mindful of privacy as a business,” many privacy professionals are left building privacy programs without automation.
- 19% of respondents report they are most deficient in automating privacy processes.
- Just 17% of all respondents have implemented privacy management software, which matches the 17% who are still using spreadsheets and word processors.
- In addition, 19% are using open source/free software and 9% are doing nothing.
- Even in the U.S., which boasts the highest rate of privacy management software adoption, just 22% of respondents use privacy management software as their primary compliance software.
Respondents understand the importance of data privacy and continue to invest in ongoing privacy programs. However, many are still attempting to implement these programs using manual processes and technologies that do not offer automation.
Moving forward, the companies that can leverage automation to simplify data privacy can protect their most valuable asset—data—and use it to drive business growth.
New technologies present additional challenges to compliance
With the move to all-remote workforces, companies are increasingly turning to technologies, such as video conferencing and collaboration tools. These tools present new avenues for data creation that privacy professionals must consider in their company-wide plans.
- Twenty-two percent of respondents said personal device security during the pandemic has added a great deal of risk to their businesses. “Personal device security” received the highest proportion of “a great deal of risk” responses, compared to the other four response options.
- A majority of respondents said that third-party data, supply chain, personal-device security, unintentional data sharing, and required or voluntary data sharing for public health purposes all added at least a moderate amount of risk to their businesses.
- Seventy percent of respondents say video conferencing tools have required a moderate or great change to their privacy approach, and 65% of respondents say collaboration tools have required a moderate or great change to privacy approaches.
Despite financial impact of pandemic, privacy compliance remains a high priority
Though many respondents expect a significant decrease in their company’s revenues as a result of the COVID-19 pandemic, they are still prioritizing privacy-related investments.
- Forty-four percent of companies expect a decrease or steep decrease in overall company revenues for the balance of 2020 as a result of COVID-19.
- Just 15% of respondents report they plan to spend less or a great deal less on privacy efforts in 2020 as a result of the pandemic.
- 42% of respondents plan to spend $500,000 or more in 2020 on CCPA efforts alone.
Boards of directors actively involved in privacy management
The mandate for increased privacy investments is coming from the very top of organizations.
- Eighty-three percent of respondents indicate their board of directors regularly reviews privacy approaches.
- An impressive 86% of respondents say that everyone from the board of directors to the front-line staff knows their role in protecting privacy.
- Four out of five respondents view privacy as a key differentiator for their company.
The PCI Security Standards Council has updated the standard for payment devices to enable stronger protections for cardholder data.
Meeting the accelerating changes of payment device technology
The PCI PIN Transaction Security (PTS) Point-of-Interaction (POI) Modular Security Requirements 6.0 enhances security controls to defend against physical tampering and the insertion of malware that can compromise card data during payment transactions.
Updates are designed to meet the accelerating changes of payment device technology, while providing protections against criminals who continue to develop new ways to steal payment card data.
“Payment technology is advancing at a rapid pace,” says Emma Sutcliffe, SVP, Standards Officer at PCI SSC. “The changes to this standard will facilitate design flexibility for payment devices while advancing the standard to help mitigate the evolving threat environment.”
Established to protect PINs and the cardholder data stored on the card (on magnetic stripe or the chip of an EMV card) or used in conjunction with a mobile device, PTS POI Version 6.0 reorganizes the requirements and introduces changes that include:
- Restructuring modules into Physical and Logical, Integration, Communications and Interfaces, and Life Cycle to reflect the diversity of devices supported under the standard and the application of requirements based upon their individual characteristics and functionalities.
- Limiting firmware approval timeframes to three years to help ensure ongoing protection against evolving vulnerabilities.
- Requiring devices that accept EMV-enabled cards to support Elliptic Curve Cryptography (ECC) to help facilitate the EMV migration to a more robust level of cryptography.
- Enhancing support for the acceptance of magnetic stripe cards in mobile payments using solutions that follow the Software-Based PIN Entry on COTS (SPoC) Standard.
“Feedback from our global stakeholders, along with changes in payments, technology and security is driving the changes to this standard,” said Troy Leach, SVP at PCI SSC. “It’s with participation from the payments industry that the Council is able to produce standards that are relevant and enhance global payment card security.”
The vast majority of SMBs both expect the unexpected and feel that they’re ready for disaster – though they may not be, Infrascale reveals.
Ninety-two percent of SMB executives said they believe their businesses are prepared to recover from a disaster. However, as previously reported, more than a fifth of SMB leaders said they don’t have a data backup or disaster recovery solution in place.
The research also indicates that 16% of SMB executives admitted they do not know their own Recovery Time Objectives (RTOs), although 24% expect to recover their data in less than 10 minutes after a disaster and 29% expect to do so in under one hour. An RTO is the time from the start of recovery to the point at which all of an organization’s infrastructure and services are available.
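By that definition, checking an RTO is simple arithmetic: measure the gap between the start of recovery and the moment the last service comes back, then compare it to the objective. A minimal sketch (the function and variable names are hypothetical):

```python
from datetime import datetime, timedelta

def rto_met(recovery_start, restored_at, rto=timedelta(hours=1)):
    """True if every service was restored within the RTO window."""
    # Recovery is complete only when the *last* service is back online.
    return max(restored_at) - recovery_start <= rto

# Example: recovery starts at 09:00; two services come back at 09:20 and 09:45.
start = datetime(2020, 6, 1, 9, 0)
restored = [datetime(2020, 6, 1, 9, 20), datetime(2020, 6, 1, 9, 45)]
```

With these timestamps, a one-hour RTO is met, but a 30-minute RTO is not — which is exactly the gap between expectation and reality the survey keeps surfacing.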
Survey results also highlight that there’s no common understanding of disaster recovery and that expectations around disaster recovery solution results and recovery times differ by industry. There is also sector variation in why businesses that feel unprepared for disaster remain unprepared.
“The latest results from our survey are quite surprising, as they suggest that most SMBs think they are prepared to recover their data and be back up and running after a disaster. Yet more than one in five of those same respondents said they do not have a disaster recovery or backup solution in place,” said Russell P. Reeder, CEO of Infrascale.
“That data suggests that there are either varying definitions of what it means to be able to recover from a disaster or, quite simply, a lack of understanding of what it truly means to be able to recover from a disaster. Make no mistake, if a business does not have a disaster recovery solution in place, or at the very least a solution to back up its data, there is no way it can get the data back from a data loss event.”
The research is based on a survey of more than 500 C-level executives at SMBs. CEOs represented 87% of the group. Almost all of the remainder was split between CIOs and CTOs.
A gap between expectation and reality
While 84% of the total SMB survey group said they are aware of their organizations’ RTO, the rest revealed that they are not. More business-to-consumer (B2C) company leaders are in the dark about their organizations’ RTOs than business-to-business (B2B) C-level executives: 22% of B2C leaders admitted they do not know their RTOs, while only 10% of B2B leaders said the same.
Of those who were able to state their RTOs, 9% said they have an RTO of one minute or less. 30% said they have an RTO of under an hour. And 17% said they have an RTO of one day. But without redundancy, automation, and a substantial budget to pay for them, those expectations are unlikely to match reality.
The research also analyzed RTO from an industry vertical perspective. It found that 26% of telecommunications leaders said their RTO was 10 minutes. This was the No. 1 answer for this sector.
Meanwhile, the top answer of executives in the accounting/finance/banking and retail/e-commerce sectors said their RTO was under an hour, with this answer getting 36% and 29% of the votes, respectively. The No. 1 answer for healthcare, garnering a 35% share from this sector, was an RTO of one day.
“Having a low RTO can be achieved one of two ways: you either have redundant, highly automated infrastructure or an expensive disaster recovery solution. If you’re willing to trade just a little amount of time for cost, you can achieve a reasonable RTO with an affordable disaster recovery solution,” said Reeder.
“Every industry uses technology differently to achieve their business goals, which in turn will have a different requirement around the redundancy and availability of their systems. While it may be possible to have an RTO of less than one minute if you implement redundant systems, those costs usually outweigh the benefits.”
When business leaders were asked how long they expect it will take to recover their data after a disaster, 24% of the total group and 33% of telecommunications executives said under 10 minutes. Thirty-eight percent of the accounting/finance/banking group and 31% of retail/e-commerce leaders said under one hour.
Disaster recovery has a range of definitions and industry vertical viewpoints
The one thing that everyone can agree on is that disaster recovery is needed in multiple scenarios. Fifty-eight percent of the total survey group said disaster recovery means recovering data after data loss and 55% said it involves recovery from a malware attack. 54% said disaster recovery provides the ability to become operational quickly after a disaster.
“The fact that 58% of the survey group said disaster recovery means getting data back after a loss, yet one in five say they don’t have a solution in place to do this, and most SMBs still believe they are prepared to recover from a disaster does not add up,” said Reeder.
“It highlights the need for SMBs to do detailed assessments on their true disaster recovery readiness or face the very real risk of being totally unprepared in the unfortunate but ever-present event of a disaster.”
The telecommunications sector survey group most commonly described disaster recovery as recovering data after data loss, with 59% of these respondents voicing this opinion. The healthcare (68%) and retail/e-commerce (66%) groups indicated that they see disaster recovery primarily as the ability to become operational quickly after a disaster.
Meanwhile, 56% of SMBs in the accounting/finance/banking sector defined disaster recovery as the ability to recover from a natural disaster like a hurricane or tornado.
74% of retail/e-commerce and 73% of healthcare industry executives said their top expectation of a disaster recovery solution is to minimize the time until their business is fully operational following a disaster.
Sixty-four percent of accounting/finance/banking sector leaders said zero data loss is their top expectation of a disaster recovery solution. Telecommunications leaders indicated their top expectation of a disaster recovery solution is to deliver cost savings related to on-call IT technicians, with 63% providing that answer.
“Every business is unique. But one thing all organizations and sectors have in common is the need to eliminate downtime and data loss,” said Reeder.
“Whether a business is dealing with a server crash or a site-wide disaster, unplanned downtime comes with serious consequences. Businesses can dramatically reduce downtime, quickly recover from ransomware attacks, and avoid paying ransoms by employing disaster recovery as a service.
“SMBs also can get ahead of an anticipated disaster such as a hurricane by failing over to their disaster recovery solution before the disaster is expected to hit, completely mitigating any downtime.”
Different industry sectors provide different reasons for their lack of preparedness
Most SMBs expressed the belief that they are prepared to recover from disaster, but 8% admitted they do not feel they are ready to bounce back from one. Of this latter group, 39% said they don’t have the budget to prepare to recover from a disaster.
Thirty-seven percent said they are unprepared because they have limited time to research solutions. 32% said they are not prepared because they lack the right resources. 27% said they don’t have the technology in place to recover from a disaster.
Healthcare (67%) and business-to-consumer entities (48%) both said the top reason their organizations are not prepared to recover from a disaster is that they have limited time to research solutions. 50% of the SMBs in the accounting/finance/banking group said their businesses are not prepared because their IT teams are stretched.
The top answers from the business-to-business survey group regarding lack of preparedness to recover from a disaster were that they don’t have the right resources or the budget. Both answers garnered 31% of the vote from business-to-business organizations.
“This survey data highlights how important it is for businesses to understand and address their disaster recovery risks before it’s too late,” said Reeder.
Most SMBs have faced micro-disasters in the past year
Yet whatever the differences among industry sectors, one thing all SMBs seem to have in common is suffering from malware infections, corrupted hard drives, and/or other micro-disasters. 51% of the survey group said they had faced such events within the past year.
B2B entities were more likely than B2C organizations to have been subjected to such scenarios. While 41% of B2C organizations have experienced a micro-disaster in the past year, 59% of B2B entities admitted they have had to face such a situation.
22% of the total survey group said they have experienced a micro-disaster more than once within the past year. 24% of B2B organizations said they have had such repeat experiences, while micro-disasters have hit 20% of B2Cs in the past year.