Google forces devs to reveal Chrome extensions’ data use, privacy practices

Starting January 2021, developers of Chrome extensions will have to certify their data use and privacy practices and provide information about the data collected by the extension(s), “in clear and easy to understand language,” in the extension’s detail page in the Chrome Web Store.

“We are also introducing an additional policy focused on limiting how extension developers use data they collect,” Google added.

Privacy practices get more attention

Two weeks ago Apple announced that developers of apps offered through its App Store will have to provide privacy-focused labels so that users can review an app’s privacy practices before they download the app.


“You’ll need to provide information about your app’s privacy practices, including the practices of third-party partners whose code you integrate into your app, in App Store Connect,” Apple told app developers. “This information will be required to submit new apps and app updates to the App Store starting December 8, 2020.”

Now Google is forcing developers to provide similar information for Chrome extensions and, at the same time, the company is updating its developer policy to limit what extension developers can do with the data they collect.

The change means that extension developers are prohibited from selling user data, using it for personalized advertising or to establish users’ creditworthiness or lending qualification, and transferring it to data brokers or other information resellers. In addition, they must ensure that the use or transfer of user data primarily benefits the user and is in accordance with the stated purpose of the extension.

The privacy-related information will be shown in the Privacy practices tab of the extension’s Chrome Web Store listing.


Will this be enough?

If developers fail to provide data privacy disclosures and to certify they comply with the Limited Use policy by January 18, 2021, their listing on the Chrome Web Store will say that the publisher has not provided any information about the collection or usage of user data (but the extension apparently won’t be pulled from the store).

Will this stop users from downloading such an extension? Will most users actually read the information provided in the Privacy practices tab? Unfortunately, the answer to these questions is no. Does Google check whether extension developers were truthful when they “certified” their data use practices? Google doesn’t say, but the answer is likely no, as the task would be massive and the claims difficult (if not impossible) to confirm at that scale.

The problem with Apple’s and Google’s latest app privacy transparency push is that the companies shift the responsibility onto app/extension users and developers, and that the sanctions for developers who don’t comply with the store policies are not enough to stop those set on abusing them.

Making history: The pandemic, disaster recovery and data protection

It was an accomplishment for the ages: within just a couple of days, IT departments hurriedly provided millions of newly homebound employees online access to the data and apps they needed to remain productive.


Some employees were handed laptops as they left the building, while others made do with their own machines. Most connected to their corporate services via VPNs. Other companies harnessed the cloud and software- and infrastructure-as-a-service offerings (SaaS, IaaS).

Bravo, IT! Not only did it all work, but businesses and employees alike saw the very real benefits of remote work, and that egg is not going back into the shell. Many employees won’t return to those offices and will continue to work from home.

But while immediate access challenges were answered, this was not a long-term solution.

Let’s face it: the pandemic caught a lot of companies off guard, with insufficient plans for data protection and disaster recovery (DR). DR planning isn’t easy in the best of times, never mind during a pandemic. Even those with effective strategies must now revisit and update them. Employees’ home setups lack sufficient security. VPNs are difficult to manage and provision, perform poorly and are hard to scale. And IT’s domain is now stretched across the corporate data center, the cloud (often more than one), user endpoints and multiple SaaS providers.

There’s a lot to do. A plan that fully covers DR, data protection and availability is a must.

Local focus

There are several strategies for protecting endpoints. First off, if employees are using company-issued machines, there are many good mobile machine management products on the market. Sure, setting up clients for a volume of these will be a laborious task, but you’ll have peace of mind knowing data won’t go unprotected.

Another strategy is to create group policies that map the Desktop and My Documents folders directly to the cloud file storage of your choice, whether that’s Google Drive, OneDrive, Dropbox or some other solution. That can simplify file data protection, but its success hinges on employees storing documents in the right place. If they save files outside those mapped folders, those files won’t be protected.

And right there is the rub with protecting employee machines – employees are going to store data on these devices. Often, insecure home Internet connections make these devices and data vulnerable. Further, if you add backup clients and/or software to employee-owned machines, you could encounter some privacy resistance.

Remote desktops can provide an elegant solution. We’ve heard “this is the year of virtual desktop infrastructure (VDI)” for over a decade. It’s something of a running joke in IT circles, but you know what? The current scenario could very well make this the year of remote desktops after all.

VDI performance in more sophisticated remote desktop solutions has greatly improved. With a robust platform configured properly, end-users can’t store data on their local machines – it’ll be safely kept behind a firewall with on-premises backup systems to protect and secure it.

Further, IT can set up virtual desktops to prevent cut and paste to the device. And because many solutions don’t require a client, it doesn’t matter what machine an employee uses – just make sure proper credentials are needed for access and include multi-factor authentication.

Pain in the SaaS

As if IT doesn’t have enough to worry about, there’s a potential SaaS issue that can cause a lot of pain. Most providers operate under the shared responsibility model. They secure infrastructure, ensure apps are available and data is safe in case of a large-scale disaster. But long-term, responsibility for granular protection of data rests on the shoulders of the customer.

Unfortunately, many organizations are unprepared. A January 2020 survey from OwnBackup of 2,000 Salesforce users found that 52% are not backing up their Salesforce data.

What happens if someone mistakenly deletes a Microsoft Office 365 document vital for a quarterly sales report and it’s not noticed for a while? Microsoft automatically empties recycle bin data after 30 days, so unless there’s a backup in place, it’s gone for good.

Backup vendors provide products to protect data in most of the more common SaaS services, but if there’s no data protection solution for one your organization uses, make data protection part of the service provider’s contract and insist they regularly send along copies of your data.

Making history

When it comes to a significant disaster, highly distributed environments can make recovery difficult. The cloud seems like a clear choice for storing DR and backup data, but while the commodity cloud providers make it easy and cheap to upload data, costs for retrieval are much higher. Also, remember that cloud recovery is different from on-prem, requiring expertise in areas like virtual machines and user access. And, if IT is handling cloud directly and has issues, keep in mind that it could be very difficult getting support.

During a disaster, you want to recover fast; you don’t want to be creating a backup and DR strategy as the leadership grits their teeth due to downtime. So, set your data protection strategy now, be sure each app is included, follow all dependencies and test over and over again. Employees and data may be in varied locations, so be sure you’re completely covered so your company can get back in the game faster.

While IT pulled off an amazing feat handling a rapid remote migration, to ensure your company’s future, you need to be certain it can protect data, even outside of the corporate firewall. With a backup and DR strategy for dispersed data in place, you’ll continue to be in a position to make history, instead of fading away.

Enterprises embrace Kubernetes, but lack security tools to mitigate risk

Businesses are increasingly moving multiple applications to the cloud using containers and relying on Kubernetes for orchestration, according to Zettaset.


However, findings also confirm that organizations are inadequately securing the data stored in these new cloud-native environments and continue to leverage existing legacy security technology as a solution.

Businesses are faced with significant IT-related challenges as they strive to keep up with the demands of digital transformation. Now more than ever, companies are rapidly developing and deploying new applications to maintain a competitive edge.

Companies must invest in high performance data protection

The adoption of containers, microservices and Kubernetes for orchestration plays a significant role in these digital acceleration efforts. And yet, while many companies are eager to adopt these new cloud-native technologies, research shows that companies are not accurately weighing the benefits of enterprise IT innovation against the inherent security risks.

“Data security should be a fundamental requirement for any enterprise organization and the adoption of new technology should not change that,” said Tim Reilly, CEO, Zettaset.

“Our goal with this research was to determine whether enterprise organizations who are actively transitioning from DevOps to DevSecOps are investing in proper security and data protection technology. And while findings confirm that companies are in fact making the strategic decision to shift towards cloud-native environments, they are currently ill-equipped to secure their company’s most critical asset: data.

“Companies must invest in high-performance data protection so as to secure critical information in real-time across any architecture.”

The conclusions

  • Organizations are embracing the cloud and cloud-native technologies: 39% of respondents have multiple production applications deployed on Kubernetes. But, companies are still struggling with the complexities associated with these environments and how to secure deployments.
  • Cloud providers wield considerable influence with regard to Kubernetes distribution: A little over half of those surveyed are using open source Kubernetes available through the Cloud Native Computing Foundation (CNCF), and 34.7% of respondents are using a Kubernetes offering managed by an existing cloud provider such as AWS, Google, Azure, or IBM.
  • Kubernetes security best practices have yet to be identified: 60.1% of respondents believe there is a lack of education and awareness about the proper ways to mitigate risk associated with storing data in cloud-native environments, and 43.2% are confident that the introduction of Kubernetes creates multiple vulnerable attack surfaces.
  • Companies have yet to evolve their existing security strategies: Almost half of respondents (46.5%) are using traditional data encryption tools to protect their data stored in Kubernetes clusters. Over 20% are finding that these traditional tools are not performing as desired.


“The results of our research substantiate the notion that enterprise organizations are moving forward with cloud-native technologies such as containers and Kubernetes. What we were most interested in discovering was how these companies are approaching security,” said Charles Kolodgy, security strategist and author of the report.

“Companies overall are concerned about the wide range of potential attack surfaces. They are applying legacy solutions but those are not designed to handle today’s ever-evolving threat landscape, especially as data is being moved off-premise to cloud-based environments.

“To stay ahead of what’s to come, companies must look to solutions purposely built to operate in a Kubernetes environment.”

How businesses rate their own security and compliance risks

SafeGuard Cyber announced the results of a survey of 600 senior enterprise IT and security professionals, conducted to understand how businesses rate their own security and compliance risks in the new digital reality of the workplace brought by the COVID-19 pandemic.


Rate security risks

Respondents were asked to effectively grade their adaptations to date, articulate what gaps still exist, and explain how they’re planning for the future. One-third of respondents reported their entire business process has changed and is still evolving, while 26% said they’ve rushed certain projects that were scheduled for later.

The study revealed the need to harden unconventional attack vectors in cloud, mobile, and social media technologies.

“Everyone in business understands the pandemic has had a seismic impact, but we were still surprised to learn how vulnerable organizations feel about the digital technologies they’ve adopted,” said Jim Zuffoletti, CEO, SafeGuard Cyber.

“Bad actors typically migrate to where the action is, so it makes sense digital communication channels are more likely to be targets. Surprisingly, marketing technologies moved up on the list, and we’re seeing more and more concern for executive leaders.”

Key findings

  • A significant disconnect and tension between the perceived security and compliance needs and the level of organizational planning. Despite perceived digital risk around unsanctioned apps, ransomware attacks, and varying tech stacks, only 18% of respondents reported cybersecurity as being a board-level concern.
  • 57% of those surveyed cited internal collaboration platforms – like Microsoft Teams and Slack – as the tech stack representing the most risk, followed closely by marketing technologies at 41%.
  • 1 in 4 respondents reported executives’ personal social media as an area of risk.
  • The biggest security and compliance challenge is the use of unsanctioned apps (52%), followed by trying to monitor business communications in multi-regional environments (43%), suggesting global enterprises are seeing more friction in scaling technologies for the digital workspace.
  • When it comes to purchasing new technology, 59% cite budget as the top concern, followed very closely by “impact on business outcomes” like revenue growth and agility (56%).

Davis Hake, Co-Founder of Resilience and Arceo.ai, concurred, “Incidents of business email compromise skyrocketed last year according to the FBI, with losses doubling from 2018 to reach $1.3B, but we know that with a move to remote work during the pandemic, cyber criminals aren’t just targeting email, they are increasingly targeting the digital collaboration platforms that are keeping our economy afloat.”

Enterprises are juggling the twin demands of budget constraints and the need to drive business outcomes.

“With the pandemic’s disruption to fundamental operations, simply saying ‘no’ to channels like WhatsApp or Slack is no longer an option,” said Otavio Freire, CTO, SafeGuard Cyber. “It’s the way business gets done today. As business leaders look to 2021, they will need security controls that enable rather than block new communication channels in order to sustain growth.”

76% of applications have at least one security flaw

The majority of applications contain at least one security flaw and fixing those flaws typically takes months, a Veracode report reveals.


This year’s analysis of 130,000 applications found that it takes about six months for teams to close half the security flaws they find.

The report also uncovered some best practices that significantly improve these fix rates. Some factors teams have a lot of control over, while others they have very little control over; the report categorizes them as “nature vs. nurture”.

Within the “nature” side, factors such as the size of the application and organization as well as security debt were considered, while the “nurture” side accounts for actions such as scanning frequency, cadence, and scanning via APIs.

Fixing security flaws: Nature or nurture?

The report revealed that addressing issues with modern DevSecOps practices results in higher flaw remediation rates. For example, using multiple application security scan types, working within smaller or more modern apps, and embedding security testing into the pipeline via an API all make a difference in reducing time to fix security defects, even in apps with a less than ideal “nature.”

“The goal of software security isn’t to write applications perfectly the first time, but to find and fix the flaws in a comprehensive and timely manner,” said Chris Eng, Chief Research Officer at Veracode.

“Even when faced with the most challenging environments, developers can take specific actions to improve the overall security of the application with the right training and tools.”


Other key findings

Flawed applications are the norm: 76% of applications have at least one security flaw, but only 24% have high-severity flaws. This is a good sign that most applications do not have critical issues that pose serious risks to the application. Frequent scanning can reduce the time it takes to close half of observed findings by more than three weeks.

Open source flaws on the rise: while 70% of applications inherit at least one security flaw from their open source libraries, 30% of applications have more flaws in their open source libraries than in the code written in-house.

The key lesson is that software security comes from getting the whole picture, which includes identifying and tracking the third-party code used in applications.

Multiple scan types prove efficacy of DevSecOps: teams using a combination of scan types including static analysis (SAST), dynamic analysis (DAST), and software composition analysis (SCA) improve fix rates. Those using SAST and DAST together fix half of flaws 24 days faster.

Automation matters: those who automate security testing in the SDLC address half of the flaws 17.5 days faster than those that scan in a less automated fashion.

Paying down security debt is critical: the link between frequently scanning applications and faster remediation times has been established in prior research.

This year’s report also found that reducing security debt – fixing the backlog of known flaws – lowers overall risk. Older applications with high flaw density experience much slower remediation times, adding an average of 63 days to close half of flaws.

Attacks on IoT devices continue to escalate

Attacks on IoT devices continue to rise at an alarming rate due to poor security protections and cybercriminals’ use of automated tools to exploit these vulnerabilities, according to Nokia.


IoT devices most infected

The report found that internet-connected, or IoT, devices now make up roughly 33% of infected devices, up from about 16% in 2019. The report’s findings are based on data aggregated from monitoring network traffic on more than 150 million devices globally.

Adoption of IoT devices, from smart home security monitoring systems to drones and medical devices, is expected to continue growing as consumers and enterprises move to take advantage of the high bandwidth, ultra-low latency, and fundamentally new networking capabilities that 5G mobile networks enable, according to the report.

The rate of success in infecting IoT devices depends on the visibility of the devices to the internet, according to the report. In networks where devices are routinely assigned public-facing internet IP addresses, a high infection rate is seen.

In networks where carrier-grade Network Address Translation is used, the infection rate is considerably reduced, because the vulnerable devices are not visible to network scanning.
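As an aside, the visibility distinction is easy to check programmatically. Below is a minimal Python sketch (our illustration, not from the Nokia report) that classifies an address as publicly routable, RFC 1918 private, or carrier-grade NAT (RFC 6598) space; only the first category is directly reachable by internet-wide scanners.

```python
# Minimal sketch: classify an address the way the report's visibility argument does.
# Publicly routable addresses are reachable by internet-wide scanners; RFC 1918 and
# carrier-grade NAT (RFC 6598, 100.64.0.0/10) space is not directly scannable.
import ipaddress

CGNAT_RANGE = ipaddress.ip_network("100.64.0.0/10")  # RFC 6598 shared address space

def visibility(addr: str) -> str:
    ip = ipaddress.ip_address(addr)
    if isinstance(ip, ipaddress.IPv4Address) and ip in CGNAT_RANGE:
        return "carrier-grade NAT (not directly scannable)"
    if ip.is_private:
        return "private / RFC 1918 (not directly scannable)"
    if ip.is_global:
        return "public (exposed to internet-wide scanning)"
    return "special-purpose address"

for example in ("8.8.8.8", "100.72.1.20", "192.168.1.50"):
    print(example, "->", visibility(example))
```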

Cybercriminals taking advantage of the pandemic

The report also reveals there is no letup in cybercriminals using the COVID-19 pandemic to try to steal personal data through a variety of malware. One type in particular is disguised as a Coronavirus Map application – mimicking the legitimate and authoritative Coronavirus Map issued by Johns Hopkins University – to take advantage of the public’s demand for accurate information about COVID-19 infections, deaths and transmissions.

But the bogus application is used to plant malware on victims’ computers to exploit personal data. “Cybercriminals are playing on people’s fears and are seeing this situation as an opportunity to promote their agendas,” the report says. The report urges the public to install applications only from trusted app stores, like Google’s and Apple’s.

Bhaskar Gorti, President and Chief Digital Officer, Nokia, said: “The sweeping changes that are taking place in the 5G ecosystem, with even more 5G networks being deployed around the world as we move to 2021, open ample opportunities for malicious actors to take advantage of vulnerabilities in IoT devices.

“This report reinforces not only the critical need for consumers and enterprises to step up their own cyber protection practices, but for IoT device producers to do the same.”

78% of Microsoft 365 admins don’t activate MFA

On average, 50% of users at enterprises running Microsoft 365 are not managed by default security policies within the platform, according to CoreView.


Microsoft 365 administrators fail to implement basic security like MFA

The survey research shows that approximately 78% of Microsoft 365 administrators do not have multi-factor authentication (MFA) activated.

According to SANS, 99% of data breaches can be prevented using MFA. This is a huge security risk, particularly during a time when so many employees are working remotely.

Microsoft 365 admins given excessive control

Microsoft 365 administrators are given excessive control, leading to increased access to sensitive information. 57% of global organizations have Microsoft 365 administrators with excess permissions to access, modify, or share critical data.

In addition, 36% of Microsoft 365 administrators are global admins, meaning these administrators can essentially do whatever they want in Microsoft 365. CIS O365 security guidelines suggest limiting the number of global admins to between two and four per business.

Investing in productivity and operation apps without considering security implications

The data shows that US enterprises (on average, not collectively) utilize more than 1,100 different productivity and operations applications, which indicates a strong dedication to the growing needs of business across departments, locations, and time zones.

While increased access to productivity and operations apps helps fuel productivity, unsanctioned shadow IT apps have varying levels of security and represent a significant security risk.

Shadow IT is ripe for attack: according to a Gartner prediction, one-third of all successful attacks on enterprises this year will be against shadow IT resources.

Many orgs underestimate security and governance responsibilities

Many businesses underestimate the security and governance responsibilities they take on when migrating to Microsoft 365. IT leaders often assume that Microsoft 365 has built-in, fool-proof frameworks for critical IT-related decisions, such as data governance, securing business applications, and prioritizing IT investments and principles.

The research disproves this by revealing that many organizations struggle with fundamental governance and security tasks in their Microsoft 365 environment. Today’s remote and hybrid working environment requires IT leaders to be proactive in prioritizing security and data governance in Microsoft 365.

Compliance activities cost organizations $3.5 million annually

Organizations are struggling to keep up with IT security and privacy compliance regulations, according to a Telos survey.


Annual compliance cost

The survey, which polled 300 IT security professionals in July and August 2020, revealed that, on average, organizations must comply with 13 different IT security and/or privacy regulations and spend $3.5 million annually on compliance activities, with compliance audits consuming 58 working days each quarter.

As more regulations come into existence and more organizations migrate their critical systems, applications and infrastructure to the cloud, the risk of non-compliance and associated impact increases.

Key research findings

  • IT security professionals report receiving an average of over 17 audit evidence requests each quarter and spend an average of three working days responding to a single request
  • Over the last 24 months, organizations have been found non-compliant an average of six times by both internal and third-party auditors, resulting in an average of eight fines, costing an average of $460,000
  • 86 percent of organizations believe compliance would be an issue when moving systems, applications and infrastructure to the cloud
  • 94 percent of organizations report they would face challenges when it comes to IT security compliance and/or privacy regulations in the cloud

Compliance teams are overwhelmed

“Compliance teams spend 232 working days each year responding to audit evidence requests, in addition to the millions of dollars spent on compliance activities and fines,” said Dr. Ed Amoroso, CEO of TAG Cyber. “The bottom line is this level of financial and time commitment is unsustainable in the long run.”

“As hammer, chisel and stone gave way to clipboard, paper and pencil, it’s time for organizations to realize the days of spreadsheets for ‘checkbox compliance’ are woefully outdated,” said Steve Horvath, VP of strategy and cloud at Telos.

“Automation can solve numerous compliance challenges, as the data shows. It’s the only real way to get in front of the curve, rather than continuing to try and keep up.”

99 percent of survey respondents indicated their organization would benefit from automating IT security and/or privacy compliance activities, citing expected benefits such as increased accuracy of evidence (54 percent), reduced time spent being audited (51 percent) and the ability to respond to audit evidence requests more quickly (50 percent).

SaaS adoption prompting concerns over operational complexity and risk

A rise in SaaS adoption is prompting concerns over operational complexity and risk, a BetterCloud report reveals.


Since 2015, the number of IT-sanctioned SaaS apps has increased tenfold, and it’s expected that by 2025, 85 percent of business apps will be SaaS-based. With SaaS on the rise, 49 percent of respondents are confident in their ability to identify and monitor unsanctioned SaaS usage on company networks—yet 76 percent see unsanctioned apps as a security risk.

And when asked which SaaS applications are likely to hold the most sensitive data across an organization, respondents pointed to virtually all of them, including cloud storage, email, devices, chat apps, and password managers.

Concerns when managing SaaS environments

Respondents also highlighted slow, manual management tasks as a prime concern when managing SaaS environments. IT organizations spend over 7 hours offboarding a single employee from a company’s SaaS apps, which takes time and energy from more strategic projects.

“In the earlier part of the year, organizations around the world were faced with powering their entire workforces from home and turned to SaaS to make the shift with as little disruption to productivity as possible,” said David Politis, CEO, BetterCloud.

“Up until this point, most companies were adopting a cloud-first approach for their IT infrastructure — that strategy has now shifted to cloud only. But SaaS growth at this scale has also brought about challenges as our 2020 State of SaaSOps report clearly outlines.

“The findings also show increased confidence and reliance on SaaSOps as the path forward to reining in SaaS management and security.”

SaaS adoption risk: Key findings

  • On average, organizations use 80 SaaS apps today. This is a 5x increase in just three years and a 10x increase since 2015.
  • The top two motivators for using more SaaS apps are increasing productivity and reducing costs.
  • Only 49 percent of IT professionals are confident in their ability to identify and monitor unsanctioned SaaS usage on company networks—yet more than three-quarters (76 percent) see unsanctioned apps as a security risk.
  • The top five places where sensitive data lives are: 1. files stored in cloud storage, 2. email, 3. devices, 4. chat apps, and 5. password managers. But because SaaS apps have become the system of record, sensitive data inevitably lives everywhere in your SaaS environment.
  • The top two security concerns are sensitive files shared publicly and former employees retaining data access.
  • IT teams spend an average of 7.12 hours offboarding a single employee from a company’s SaaS apps.
  • Thirty percent of respondents already use the term SaaSOps in their job title or plan to include it soon.

For the report, BetterCloud surveyed nearly 700 IT leaders and security professionals from the world’s leading enterprise organizations. These individuals ranged in seniority from C-level executives to front-line practitioners and included both IT and security department roles.

Priorities and technologies defining the mainframe in the digital enterprise

There’s overwhelming support for mainstreaming the mainframe, new strategic priorities, and a resurgence of next-generation mainframe talent, according to a BMC survey.


The study queried over 1,000 executives and practitioners on their priorities, challenges, and growth opportunities for the platform. High-level insights include:

  • 90% of respondents see the mainframe as a platform for growth and long-term applications.
  • 68% expect MIPS, the mainframe’s measure of computing performance, to grow.
  • 63% of respondents say security and compliance were their top mainframe priorities.
  • More than half of survey respondents increased mainframe platform data and transaction volume by 25% or more, signaling its ongoing importance in the digital business environment.

“The Mainframe Survey validates that businesses see the mainframe as a critical component of the modern digital enterprise and an emerging hub for innovation,” says Stephen Elliot, Program VP, Management Software and DevOps, IDC.

“They’re putting it to work more and more to support digital business demands as they strive to achieve greater agility and success across the enterprise.”

Top mainframe priorities

With mainframe enterprises competing to bring new, digital experiences to market to delight customers, the survey’s themes are resoundingly strong: adapt, automate, and secure.

Adapt – responses indicated that enterprises’ need to adapt spanned several areas:

  • New processes to keep up with digital demand.
  • Technology demands such as application development/DevOps across the mainframe; 78% of respondents want to be able to update mainframe applications more frequently than currently possible.
  • Changing workforce, as next-generation mainframe talent increases along with the number of women working on the platform.

Automate – mainframe modernization continues to play a key role in priorities among respondents with the need to implement AI and machine learning strategies jumping by 8% year over year.

Secure – while the mainframe has a reputation of being a naturally secure platform, respondents are seeing the growing need to fortify its “walls.” Security trumped cost optimization as the leading mainframe priority among respondents for the first time in the 15-year history of the survey.

“Early results were shared with leading industry analysts and key customers from our Mainframe Executive Council in order to validate findings with market sentiment,” states John McKenny, SVP of Mainframe Innovation and Strategy at BMC.

“These conversations further solidified the study’s findings that the platform’s positive outlook and growth is largely due to the need to create intuitive, customer-centric digital experiences. The mainframe continues to shine as innovative, agile, and secure and is a vital component to digital success.”

Workforce demographic shifts

The survey also revealed demographic shifts in mainframe operations: younger, less experienced staff are replacing departing senior staff, and there is a higher proportion of women respondents than last year.

Progress in implementing ethical and trusted AI-enabled systems still inconsistent

COVID-19 has put a spotlight on ethical issues emerging from the increased use of AI applications and the potential for bias and discrimination.


A report from the Capgemini Research Institute found that in 2020 45% of organizations have defined an ethical charter to provide guidelines on AI development, up from 5% in 2019, as businesses recognize the importance of having defined standards across industries.

However, a lack of leadership in terms of how these systems are developed and used is coming at a high cost for organizations.

The report notes that while organizations are more ethically aware, progress in implementing ethical AI has been inconsistent. For example, the progress on “fairness” (65%) and “auditability” (45%) dimensions of ethical AI has been non-existent, while transparency has dropped from 73% to 59%, despite the fact that 58% of businesses say they have been building awareness amongst employees about issues that can result from the use of AI.

The research also reveals that 70% of customers want a clear explanation of results and expect organizations to provide AI interactions that are transparent and fair.

Ethical governance has become a prerequisite

The need for organizations to implement an ethical charter is also driven by increased regulatory frameworks. For example, the European Commission has issued guidelines on the key ethical principles that should be used for designing AI applications.

Meanwhile, guidelines issued by the FTC in early 2020 call for transparent AI, stating that when an AI-enabled system makes an adverse decision (such as declining credit for a customer), then the organization should show the affected consumer the key data points used in arriving at the decision and give them the right to change any incorrect information.

However, while globally 73% of organizations informed users about the ways in which AI decisions might affect them in 2019, today, this has dropped to 59%.

According to the report, this is indicative of current circumstances brought about by COVID-19, growing complexity of AI models, and a change in consumer behavior, which has disrupted the functionalities of the AI algorithms.

New factors, including a preference for safety, bulk buying, and a lack of training data for similar situations from the past, have meant that organizations are redesigning their systems to suit a new normal; however, this has led to less transparency.

Discriminatory bias in AI systems comes at a high cost for orgs

Many public and private institutions deployed a range of AI technologies during COVID-19 in an attempt to curtail the impacts wrought by the pandemic. As these continue, it is critical for organizations to uphold customer trust by furthering positive relationships between AI and consumers. However, reports show that datasets collected for healthcare and the public sector are subjected to social and cultural bias.

This is not limited to just the public sector. The research found that 65% of executives said they were aware of the issue of discriminatory bias with AI systems. Further, close to 60% of organizations have attracted legal scrutiny and 22% have faced a customer backlash in the last two to three years because of decisions reached by AI systems.

In fact, 45% of customers noted they will share their negative experiences with family and friends and urge them not to engage with an organization, 39% will raise their concerns with the organization and demand an explanation, and 39% will switch from the AI channel to a higher-cost human interaction. 27% of consumers say they would cease dealing with the organization altogether.

Establish ownership of ethical issues – leaders must be accountable

Only 53% of organizations have a leader who is responsible for the ethics of AI systems at their organization, such as a Chief Ethics Officer. It is crucial to establish leadership at the top to ensure these issues receive due priority from top management and to create ethically robust AI systems.

In addition, leaders in business and technology functions must be fully accountable for the ethical outcomes of AI applications. The research shows that only half of organizations said they had a confidential hotline or ombudsman to enable customers and employees to raise ethical issues with AI systems.

The report highlights seven key actions for organizations to build an ethically robust AI system, which need to be underpinned by a strong foundation of leadership, governance, and internal practices:

  • Clearly outline the intended purpose of AI systems and assess their overall potential impact
  • Proactively deploy AI for the benefit of society and the environment
  • Embed diversity and inclusion principles throughout the lifecycle of AI systems
  • Enhance transparency with the help of technology tools
  • Humanize the AI experience and ensure human oversight of AI systems
  • Ensure technological robustness of AI systems
  • Protect people’s individual privacy by empowering them and putting them in charge of AI interactions

Anne-Laure Thieullent, Artificial Intelligence and Analytics Group Offer Leader at Capgemini, explains, “Given its potential, it would be a disservice if the ethical use of AI is only limited to ensure no harm to users and customers. It should be a proactive pursuit of environmental good and social welfare.

“AI is a transformational technology with the power to bring about far-reaching developments across the business, as well as society and the environment. This means governmental and non-governmental organizations that possess the AI capabilities, wealth of data, and a purpose to work for the welfare of society and environment must take greater responsibility in tackling these issues to benefit societies now and in the future.”

NIST crowdsourcing challenge aims to de-identify public data sets to protect individual privacy

NIST has launched a crowdsourcing challenge to spur new methods to ensure that important public safety data sets can be de-identified to protect individual privacy.


The Differential Privacy Temporal Map Challenge includes a series of contests that will award a total of up to $276,000 for differential privacy solutions for complex data sets that include information on both time and location.

Critical applications vulnerability

For critical applications such as emergency planning and epidemiology, public safety responders may need access to sensitive data, but sharing that data with external analysts can compromise individual privacy.

Even if data is anonymized, malicious parties may be able to link the anonymized records with third-party data and re-identify individuals. And, when data has both geographical and time information, the risk of re-identification increases significantly.

“Temporal map data, with its ability to track a person’s location over a period of time, is particularly helpful to public safety agencies when preparing for disaster response, firefighting and law enforcement tactics,” said Gary Howarth, NIST prize challenge manager.

“The goal of this challenge is to develop solutions that can protect the privacy of individual citizens and first responders when agencies need to share data.”

Protecting PII

Differential privacy provides much stronger data protection than anonymization; it’s a provable mathematical guarantee that protects personally identifiable information (PII).

By fully de-identifying data sets containing PII, researchers can ensure data remains useful while limiting what can be learned about any individual in the data regardless of what third-party information is available.
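As a rough illustration of the underlying idea (not part of the NIST challenge itself), the sketch below applies the Laplace mechanism, the textbook building block of differential privacy, to a counting query; the data, query, and epsilon value are hypothetical and chosen only to show how calibrated noise limits what can be learned about any single individual.

```python
# Minimal sketch of the Laplace mechanism: a counting query has sensitivity 1,
# so adding Laplace noise with scale 1/epsilon yields an epsilon-differentially-
# private answer. The data, query, and epsilon below are purely illustrative.
import random

def dp_count(records, predicate, epsilon: float) -> float:
    true_count = sum(1 for r in records if predicate(r))
    # Laplace(0, 1/epsilon) noise, sampled as the difference of two exponentials.
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise

# Hypothetical temporal-map-style query: responders seen in zone "A" during one hour.
responders = [{"zone": "A", "hour": 14}, {"zone": "A", "hour": 14}, {"zone": "B", "hour": 14}]
print(dp_count(responders, lambda r: r["zone"] == "A" and r["hour"] == 14, epsilon=0.5))
```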

The individual contests that make up the challenge will include a series of three “sprints” in which participants develop privacy algorithms and compete for prizes, as well as a scoring metrics development contest (A Better Meter Stick for Differential Privacy Contest) and a contest designed to improve the usability of the solvers’ source code (The Open Source and Development Contest).

The Better Meter Stick for Differential Privacy Contest will award a total prize purse of $29,000 for winning submissions that propose novel scoring metrics by which to assess the quality of differentially private algorithms on temporal map data.

The three Temporal Map Algorithms sprints will award a total prize purse of $147,000 over a series of three sprints to develop algorithms that preserve data utility of temporal and spatial map data sets while guaranteeing privacy.

The Open Source and Development Contest will award a total prize purse of $100,000 to teams leading in the sprints to increase their algorithm’s utility and usability for open source audiences.

85% of COVID-19 tracking apps leak data

71% of healthcare and medical apps have at least one serious vulnerability that could lead to a breach of medical data, according to Intertrust.


The report investigated 100 publicly available global mobile healthcare apps across a range of categories—including telehealth, medical device, health commerce, and COVID-tracking—to uncover the most critical mHealth app threats.

Cryptographic issues pose one of the most pervasive and serious threats, with 91% of the apps in the study failing one or more cryptographic tests. This means the encryption used in these medical apps can be easily broken by cybercriminals, potentially exposing confidential patient data, and enabling attackers to tamper with reported data, send illegitimate commands to connected medical devices, or otherwise use the application for malicious purposes.
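For context, the kind of weakness such cryptographic tests typically flag is hard-coded key material or unauthenticated cipher modes. The hedged sketch below, written with the widely used Python cryptography package and not taken from any audited app, contrasts that anti-pattern with an authenticated-encryption setup.

```python
# Illustrative only: the kind of cryptographic mistake mobile-app audits flag,
# next to a sounder pattern. Not taken from any of the audited apps.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Anti-pattern commonly flagged: key material hard-coded in the app binary,
# which reverse engineering recovers trivially.
HARDCODED_KEY = b"0123456789abcdef0123456789abcdef"

def encrypt_record(plaintext: bytes):
    """AES-GCM (authenticated encryption) with a freshly generated key and a
    random 96-bit nonce. In a real app the key would live in the platform
    keystore rather than being generated ad hoc like this."""
    key = AESGCM.generate_key(bit_length=256)
    nonce = os.urandom(12)
    ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)
    return key, nonce, ciphertext

key, nonce, ct = encrypt_record(b"patient record 42")
assert AESGCM(key).decrypt(nonce, ct, None) == b"patient record 42"
```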

Bringing medical apps security up to speed

The study’s overall findings suggest that the push to reshape care delivery under COVID-19 has often come at the expense of mobile application security.

“Unfortunately, there’s been a history of security vulnerabilities in the healthcare and medical space. Things are getting a lot better, but we still have a lot of work to do,” said Bill Horne, General Manager of the Secure Systems product group and CTO at Intertrust.

“The good news is that application protection strategies and technologies can help healthcare organizations bring the security of their apps up to speed.”

The report on healthcare and medical mobile apps is based on an audit of 100 iOS and Android applications from healthcare organizations worldwide. All 100 apps were analyzed using an array of static application security testing (SAST) and dynamic application security testing (DAST) techniques based on the OWASP mobile app security guidelines.


Report highlights

  • 71% of tested medical apps have at least one high level security vulnerability. A vulnerability is classified as high if it can be readily exploited and has the potential for significant damage or loss.
  • The vast majority of medical apps (91%) have mishandled and/or weak encryption that puts them at risk for data exposure and IP (intellectual property) theft.
  • 34% of Android apps and 28% of iOS apps are vulnerable to encryption key extraction.
  • The majority of mHealth apps contain multiple security issues with data storage. For instance, 60% of tested Android apps stored information in SharedPreferences, leaving unencrypted data readily readable and editable by attackers and malicious apps.
  • When looking specifically at COVID-tracking apps, 85% leak data.
  • 83% of the high-level threats discovered could have been mitigated using application protection technologies such as code obfuscation, tampering detection, and white-box cryptography.

Shift to remote work and heavy reliance on service providers for security leaves blind spots

83% of C-level executives expect the changes they made in the areas of people, processes, and applications as a response to the COVID-19 pandemic to become permanent (whether significant or partial), according to Radware​.


According to the report, pandemic-driven changes affected various aspects of business: 44% of executives surveyed reported a negative impact on budgets, 43% reported a workforce reduction, and 37% reported reduced real estate footprints.

Pandemic accelerated cloud adoption

The pandemic accelerated the migration of business infrastructure and applications into the cloud. 76% of companies adopted cloud services faster than they had planned, and 56% of respondents said that the contactless economy – e-commerce, on-demand content, video conferencing, etc. – had a positive impact on their business.

The quick migration helped to maintain business operations but potentially exacerbated cybersecurity gaps, due to an increased attack surface. 40% of survey respondents reported an increase in cyberattacks amid the pandemic. 32% said that they relied on their cloud provider’s security services to provide security management for their public cloud assets.

“The transition to remote work and new online contactless business models is not temporary and is affecting the future strategy on how organizations invest in cybersecurity,” said Anna Convery-Pelletier, CMO at Radware.

“Normally, businesses would make this shift over an extended period of time. However, the pandemic forced a massive shift to remote work which is now creating new security challenges.”

“Before the pandemic, digital transformation was a long-term strategic goal for most businesses,” said Michael O’Malley, VP of Market Strategy for Radware.

“On-demand content consumption, contactless payments, curbside pickups, and remote workforces are now business imperatives. Executives must revisit what they’ve implemented to ensure that a lack of cybersecurity planning does not undermine their goals.”


Other key findings

  • Shift to remote operations: More than 80% of respondents said they believed more than 25% of their employees would work remotely in the future, a sharp contrast to pre-pandemic work-from-home policies, when only 48% of companies enabled more than 25% of their employees to do so, and 6% did not enable remote work at all.
  • Emergence of new revenue models to support contactless economy: Roughly two in five respondents from the retail sector said they made real estate changes – including store closures. Many retailers faced pressure to adopt practices that ease the customer experience, such as curbside pickup, e-commerce, and increased use of contactless payments. More than any other sector, retailers reported the need to adopt cloud or hybrid cloud environments to make their networks more resilient; 57% said they plan to host their assets in either a public or private cloud environment by 2022.

Mobile messengers expose billions of users to privacy attacks

Popular mobile messengers expose personal data via discovery services that allow users to find contacts based on phone numbers from their address book, according to researchers.


When installing a mobile messenger like WhatsApp, new users can instantly start texting existing contacts based on the phone numbers stored on their device. For this to happen, users must grant the app permission to access and regularly upload their address book to company servers in a process called mobile contact discovery.

A recent study by a team of researchers from the Secure Software Systems Group at the University of Würzburg and the Cryptography and Privacy Engineering Group at TU Darmstadt shows that currently deployed contact discovery services severely threaten the privacy of billions of users.

Utilizing very few resources, the researchers were able to perform practical crawling attacks on the popular messengers WhatsApp, Signal, and Telegram. The results of the experiments demonstrate that malicious users or hackers can collect sensitive data at a large scale and without noteworthy restrictions by querying contact discovery services for random phone numbers.

Attackers are enabled to build accurate behavior models

For the extensive study, the researchers queried 10% of all US mobile phone numbers for WhatsApp and 100% for Signal. Thereby, they were able to gather personal (meta) data commonly stored in the messengers’ user profiles, including profile pictures, nicknames, status texts and the “last online” time.

The analyzed data also reveals interesting statistics about user behavior. For example, very few users change the default privacy settings, which for most messengers are not privacy-friendly at all.

The researchers found that about 50% of WhatsApp users in the US have a public profile picture and 90% a public “About” text. Interestingly, 40% of Signal users, who can be assumed to be more privacy-conscious in general, also use WhatsApp, and every second of those Signal users has a public profile picture on WhatsApp.

Tracking such data over time enables attackers to build accurate behavior models. When the data is matched across social networks and public data sources, third parties can also build detailed profiles, for example to scam users.

For Telegram, the researchers found that its contact discovery service exposes sensitive information even about owners of phone numbers who are not registered with the service.

Which information is revealed during contact discovery and can be collected via crawling attacks depends on the service provider and the privacy settings of the user. WhatsApp and Telegram, for example, transmit the user’s entire address book to their servers.

More privacy-concerned messengers like Signal transfer only short cryptographic hash values of phone numbers or rely on trusted hardware. However, the research team shows that with new and optimized attack strategies, the low entropy of phone numbers enables attackers to deduce corresponding phone numbers from cryptographic hashes within milliseconds.
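To see why hashing alone offers little protection, consider the toy Python sketch below (our illustration, not the researchers’ code): the space of plausible mobile numbers is so small that an attacker can precompute the hashes for an entire number range and invert any observed value almost instantly.

```python
# Toy illustration of the low-entropy problem: US mobile numbers occupy a space of
# roughly 10^10 values, so unsalted hashes of phone numbers can be inverted by
# brute force or a precomputed lookup table. The prefix and numbers are made up.
import hashlib

def phone_hash(number: str) -> str:
    return hashlib.sha256(number.encode()).hexdigest()

# Attacker precomputes hashes for a (here, tiny) candidate range of numbers.
candidates = {phone_hash(f"+1202555{i:04d}"): f"+1202555{i:04d}" for i in range(10_000)}

observed = phone_hash("+12025550123")   # hash seen at the contact-discovery service
print(candidates.get(observed))         # -> +12025550123, recovered immediately
```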

Moreover, since there are no noteworthy restrictions for signing up with messaging services, any third party can create a large number of accounts to crawl the user database of a messenger for information by requesting data for random phone numbers.

“We strongly advise all users of messenger apps to revisit their privacy settings. This is currently the most effective protection against our investigated crawling attacks,” agree Prof. Alexandra Dmitrienko (University of Würzburg) and Prof. Thomas Schneider (TU Darmstadt).

Impact of research results: Service providers improve their security measures

The research team reported their findings to the respective service providers. As a result, WhatsApp has improved its protection mechanisms so that large-scale attacks can be detected, and Signal has reduced the number of possible queries to complicate crawling.

The researchers also proposed many other mitigation techniques, including a new contact discovery method that could be adopted to further reduce the efficiency of attacks without negatively impacting usability.

Multi-access edge computing market to reach $7.23 billion by 2024

Edge computing is a foundational technology for industrial enterprises as it offers shorter latencies, robust security, responsive data collection, and lower costs, Frost & Sullivan finds.


In this hyper-connected industrial environment, edge computing, with its solution-agnostic attribute, can be used across various applications, such as autonomous assets, remote asset monitoring, data extraction from stranded assets, autonomous robotics, autonomous vehicles, and smart factories.

Multi-access edge computing market growth rate and revenue

Despite being in a nascent stage, the multi-access edge computing (MEC) market – a commercial edge computing offering from wireless network operators – is estimated to grow at an astounding compound annual growth rate of 157.4%, reaching revenue of $7.23 billion by 2024, up from $64.1 million in 2019.
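A quick back-of-the-envelope check confirms that the quoted growth rate follows from those two revenue figures over the five-year span:

```python
# Sanity check: CAGR = (end / start) ** (1 / years) - 1
start, end, years = 64.1e6, 7.23e9, 5
cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.1%}")   # ~157.3%, matching the reported 157.4% (rounding aside)
```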

“The recent launch of the 5G technology coupled with MEC brings computing power close to customers and also allows the emergence of new applications and experiences for them,” said Renato Pasquini, Information & Communication Technologies Research Director at Frost & Sullivan.

“Going forward, 5G and MEC are an opportunity for telecom operators to launch innovative offerings and also enable an ecosystem to flourish in the business-to-business (B2B) segment of telecom service providers using the platform.”

Pasquini added: “From the perspective of the MEC ecosystem, software—edge application and solutions—promises the highest CAGR followed by services—telecom operators’ services, cloud providers’ infrastructure-as-a-service, and edge data center colocation services.”

Growth prospects for MEC market participants

It is predicted that approximately 90% of industrial enterprises will utilize edge computing by 2022, presenting immense growth prospects for MEC market participants:

  • Telecom operators should work on solutions and services to meet the requirements for connected and autonomous cars.
  • System integrators should provide end-to-end solutions, which would be a significant value addition for enterprises because 5G requires specific skillsets.
  • The combination of 5G and the new specialized hardware-based mobile edge compute technologies can meet the market’s streaming media needs now and in the future.
  • Telecom operators must partner with cloud providers and companies with abilities related to artificial intelligence, machine learning, and computer vision to design solutions for autonomous cars, drone delivery, and others.
  • Companies in the MEC space must capitalize on the opportunity for innovation and new developments that utilize 5G and MEC, such as augmented reality (AR) and virtual reality (VR), which can also be applied to games.

A look at enterprise network and application modernization efforts

80% of organizations are struggling to meet application delivery requirements with their existing infrastructure. But, amid pandemic concerns, efforts to modernize networks and applications to address this challenge are accelerating, with 83% reporting budget increases for these initiatives over the next three years, NS1 reveals.


“Modernization was already on the radar for many organizations, but the pandemic has shocked the system and created a heightened sense of urgency,” said Kris Beevers, CEO, NS1. “Our research shows that IT leaders are accelerating projects aimed to increase efficiencies and business agility, improve application performance and user experiences, and drive additional revenue.”

Challenges to enterprise network and application modernization efforts

Within the broad scope of IT modernization, companies are prioritizing transformation initiatives for mobility (70%), remote data access (68%), automation (65%), security (61%), and IT resilience (60%).

Other areas where efforts are accelerating include public and private cloud deployments (58% and 57% respectively), improvements to scalability (58%) and deployment velocity (56%).

And yet, even with the heightened sense of urgency and budget behind them, survey respondents reported facing a number of obstacles in their IT modernization projects. Although four out of five acknowledge some progress with modernization, only 8% report that they have achieved their initial objectives, and 28% report “significant progress” (75% or greater).

Challenges to modernization include a talent and skills gap and competing priorities (37% each), as well as aging networks (35%) and the outdated, inflexible organizational structures that often come with them.

“Static, legacy tech drags down modernization efforts because it lacks the flexibility and agility necessary to support dynamic, scalable applications and IT environments,” added Beevers.

“Successful digital transformation starts with the underlying enterprise network and application infrastructure — DNS, DHCP and IP address management. When purpose-built for speed, reliability and scalability, these foundational technologies are critical in expediting modernization projects, automating network management tasks, and increasing efficiency and operational velocity in complex heterogeneous environments.”

Adoption and trends in the modern IT landscape

The study examined the adoption of modern technology across mid- to large-sized companies and uncovered the following trends.

The study found that 45% of respondents are currently using DDI (integrated DNS, DHCP, and IP address management), and another 48% plan to adopt the technology within 12 months. Adopters reported that the most common use cases are accelerating service discovery in microservices environments (60%) and connecting cloud and on-premises applications and data (56%).
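
To make the service-discovery use case concrete, the sketch below shows one common DNS-based pattern: publishing SRV records for a logical service name and resolving them at runtime. This is a hypothetical illustration, not something from the NS1 study; the service name is made up and the dnspython package is assumed.

```python
# Hypothetical example: resolve a logical service name to host:port pairs via DNS SRV
# records, so callers never hard-code where a microservice instance lives.
# Requires dnspython (pip install dnspython); the zone/service name is a placeholder.
import dns.resolver

answers = dns.resolver.resolve("_payments._tcp.services.example.internal", "SRV")
for record in sorted(answers, key=lambda r: (r.priority, -r.weight)):
    # Lower priority is preferred; weight breaks ties among equal-priority targets.
    print(f"{record.target.to_text().rstrip('.')}:{record.port}")
```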

Those with plans to implement DDI cited the following use cases as the most appealing:

  • Connecting cloud and on-premises applications and data (59%)
  • Accelerating application delivery (55%)
  • Automating network management tasks (54%)
  • Accelerating service discovery in microservices environments (42%)
  • Controlling costs associated with application and network management (40%)

Modern application stack

Nearly all companies are adopting modern application stack solutions, many of which are aimed directly at addressing network and application performance requirements, including:

  • Network monitoring tools, which 96% of respondents were either already implementing or planning to implement within 12 months
  • Public/private cloud, multi-cloud – 94%
  • Automation and orchestration solutions – 93%
  • Intelligent traffic management – 87%
  • Multi-CDN – 85%

PinK: A new way of implementing a key-value store in SSDs

As web services, cloud storage, and big-data services continue expanding and finding their way into our lives, the gigantic hardware infrastructures they rely on, known as data centers, need to be improved to keep up with demand.

One promising solution for improving performance and reducing the energy load associated with reading and writing large amounts of data is to equip storage devices with some computational capability and offload part of the data read/write process from CPUs.

A new way of implementing a key-value store

In a recent study, researchers from Daegu Gyeongbuk Institute of Science and Technology (DGIST), Korea, describe a new way of implementing a key-value store in solid state drives (SSDs), which offers many advantages over a more widely used method.

A key-value store (also known as a key-value database) is a way of storing, managing, and retrieving data in the form of key-value pairs. The most common way to implement one is with a hash function, an algorithm that quickly matches a given key with its associated stored data to achieve fast read/write access.

One of the main problems with implementing a hash-based key-value store is that the random nature of the hash function occasionally leads to long delays (high latency) in read/write operations. To solve this problem, the researchers from DGIST implemented a different paradigm, the “log-structured merge-tree (LSM),” which keeps the data ordered in hierarchical levels and thereby puts an upper bound on the maximum latency.
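
As a minimal, self-contained sketch of the difference (in Python, for illustration only; the researchers’ PinK runs inside the SSD and the class names below are made up), compare a plain hash-indexed store with a toy LSM-style store that keeps sorted levels and therefore bounds the work a lookup can require:

```python
# Toy comparison: hash-indexed store vs. LSM-style store with sorted levels.
# Real LSM-trees also merge levels in the background (compaction), omitted here.
from bisect import bisect_left

class HashKV:
    """Hash-indexed store: O(1) on average, but hashing gives no ordering and
    collision handling/rehashing can cause unpredictable latency spikes."""
    def __init__(self):
        self.table = {}

    def put(self, key, value):
        self.table[key] = value

    def get(self, key):
        return self.table.get(key)

class TinyLSM:
    """LSM-style store: writes land in a small in-memory buffer; when the buffer
    fills, it is flushed as a sorted run. A read checks the buffer, then
    binary-searches each sorted level, so worst-case work is bounded by the
    number of levels."""
    def __init__(self, buffer_limit=4):
        self.buffer = {}
        self.buffer_limit = buffer_limit
        self.levels = []  # newest first; each level is (sorted_keys, values)

    def put(self, key, value):
        self.buffer[key] = value
        if len(self.buffer) >= self.buffer_limit:
            keys = sorted(self.buffer)
            self.levels.insert(0, (keys, [self.buffer[k] for k in keys]))
            self.buffer = {}

    def get(self, key):
        if key in self.buffer:
            return self.buffer[key]
        for keys, values in self.levels:  # newest level first
            i = bisect_left(keys, key)
            if i < len(keys) and keys[i] == key:
                return values[i]
        return None

store = TinyLSM()
for k in ["cat", "dog", "eel", "fox", "ant"]:
    store.put(k, k.upper())
print(store.get("dog"))  # -> "DOG"
```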

Letting storage devices compute some operations by themselves

In their implementation, nicknamed “PinK,” they addressed the most serious limitations of LSM-based key-value stores for SSDs. With its optimized memory use, guaranteed maximum delays, and hardware accelerators for offloading certain sorting tasks from the CPU, PinK represents a novel and effective take on data storage for SSDs in data centers.

Professor Sungjin Lee, who led the study, remarks: “Key-value store is a widely used fundamental infrastructure for various applications, including Web services, artificial intelligence applications, and cloud systems. We believe that PinK could greatly improve the user-perceived performance of such services.”

So far, experimental results confirm the performance gains offered by this new implementation and highlight the potential of letting storage devices compute some operations by themselves.

“We believe that our study gives a good direction of how computational storage devices should be designed and built and what technical issues we should address for efficient in-storage computing,” Prof. Lee concludes.

Researchers create tool for protecting children’s online privacy

A University of Texas at Dallas study of 100 mobile apps for kids found that 72 violated a federal law aimed at protecting children’s online privacy.

Dr. Kanad Basu, assistant professor of electrical and computer engineering in the Erik Jonsson School of Engineering and Computer Science and lead author of the study, along with colleagues elsewhere, developed a tool that can determine whether an Android game or other mobile app complies with the federal Children’s Online Privacy Protection Act (COPPA).

The researchers introduced and tested their tool, “COPPA Tracking by Checking Hardware-Level Activity” (COPPTCHA), in a study and found it to be 99% accurate. They continue to improve the technology, which they plan to make available for download at no cost.

Games and other apps that violate COPPA pose privacy risks

Basu said games and other apps that violate COPPA pose privacy risks that could make it possible for someone to determine a child’s identity and location. He said the risk is heightened as more people are accessing apps from home, rather than public places, due to the COVID-19 pandemic.

“Suppose the app collects information showing that there is a child on Preston Road in Plano, Texas, downloading the app. A trafficker could potentially get the user’s email ID and geographic location and try to kidnap the child. It’s really, really scary,” Basu said.

Apps can access personally identifiable information, including names, email addresses, phone numbers, location, audio and visual recordings, and unique device identifiers such as the international mobile equipment identity (IMEI), media access control (MAC) address, Android ID and Android advertising ID.

The advertising ID, for example, allows app developers to collect information on users’ interests, which they can then sell to advertisers.

“When you download an app, it can access a lot of information on your cellphone,” Basu said. “You have to keep in mind that all this info can be collected by these apps and sent to third parties. What do they do with it? They can pretty much do anything. We should be careful about this.”

Protecting children’s online privacy

The researchers’ technique accesses a device’s special-purpose register, a type of temporary data-storage location within a microprocessor that monitors various aspects of the microprocessor’s function. Whenever an app transmits data, the activity leaves footprints that can be detected by the special-purpose register.

COPPA requires that websites and online services directed to children obtain parental consent before collecting personal information from anyone younger than 13; however, as Basu’s research found, many popular apps do not comply. He found that many popular games designed specifically for young children revealed users’ Android IDs, Android advertising IDs and device descriptions.

Basu recommends that parents use caution when downloading or allowing children to download apps.

“If your kid asks you to download a popular game app, you’re likely to download it,” Basu said. “A problem with our society is that many people are not aware of — or don’t care about — the threats in terms of privacy.”

Basu advises keeping downloads to a minimum.

“I try to limit my downloading of apps as much as possible,” Basu said. “I don’t download apps unless I need to.”

Bad habits and risky behaviors put corporate data at risk

IT and application development professionals tend to exhibit risky behaviors when organizations impose strict IT policies, according to SSH.

The survey, which polled 625 IT and application development professionals across the United States, United Kingdom, France, and Germany, confirmed that hybrid IT is on the rise and shows no signs of slowing down.

Fifty-six percent of respondents described their IT environment as hybrid cloud, an increase from 41 percent a year ago. On average, companies are actively using two cloud service vendors at a time.

While hybrid cloud offers a range of strategic benefits related to cost, performance, security, and productivity, it also introduces the challenge of managing more cloud access.

Cloud access solutions slowing down work

The survey found that cloud access solutions, including privileged access management software, slow down daily work for 71 percent of respondents. The biggest speed bumps were cited as configuring access (34 percent), repeatedly logging in and out (30 percent), and granting access to other users (29 percent).

These hurdles often drive users to seek risky workarounds, with 52 percent of respondents saying they would “definitely” bypass secure access controls, or would at least “consider” doing so, if they were under pressure to meet a deadline.

Eighty-five percent of respondents also share account credentials with others out of convenience, even though 70 percent understand the risks of doing so. These risks are exacerbated further by the fact that 60 percent of respondents store their credentials and passwords using insecure methods, including in email, in non-encrypted files or folders, and on paper.

“As businesses grow their cloud environments, secure access to the cloud will continue to be paramount. But when access controls lead to a productivity trade-off, as this research has shown, IT admins and developers are likely to bypass security entirely, opening the organization up to even greater cyber risk,” said Jussi Mononen, chief commercial officer at SSH.

“For privileged access management to be effective, it needs to be fast and convenient, without adding operational obstacles. It needs to be effortless.”

Orgs using public internet networks

In addition to exposing the risky behaviors of many IT and application development professionals when accessing the cloud, the survey also revealed some unwitting security gaps in organizations’ access management policies. For example, more than 40 percent of respondents use public internet networks – inherently less secure than private networks – to access internal IT resources.

Third-party access was also found to be a risk point, with 29 percent of respondents stating that outside contractors are given permanent access credentials to the business’ IT environment.

Permanent credentials are fundamentally risky as they provide widespread access beyond the task at hand, and can be forgotten, stolen, mismanaged, misconfigured, or lost.

Mononen continued, “When it comes to access management, simpler is safer. Methods like single sign-on can streamline the user experience significantly, by creating fewer logins and fewer entry points that reduce the forming of bad IT habits.

“There is also power in eliminating permanent access credentials entirely, using ephemeral certificates that unlock temporary ‘just-in-time’ access to IT resources, only for the time needed before access automatically expires. Ultimately, reducing the capacity for human error comes down to designing security solutions that put the user first and cut out unnecessary complexity.”
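
As a concrete illustration of the ephemeral-credential idea, the hypothetical sketch below uses OpenSSH’s certificate support (the generic mechanism, not necessarily SSH’s own product) to mint a user certificate that expires after ten minutes; the CA key path, identity, and principal are placeholders.

```python
# Hypothetical sketch: issue a short-lived OpenSSH user certificate so that no
# permanent credential has to be handed out. Assumes an existing CA private key
# ("ca_key") and the user's public key ("contractor_key.pub"); names are placeholders.
import subprocess

subprocess.run(
    [
        "ssh-keygen",
        "-s", "ca_key",              # CA private key that signs the certificate
        "-I", "contractor-task-42",  # certificate identity (made up)
        "-n", "deploy",              # principal the certificate is valid for
        "-V", "+10m",                # validity window: expires after 10 minutes
        "contractor_key.pub",
    ],
    check=True,
)
# Produces contractor_key-cert.pub; once the 10 minutes elapse the certificate is no
# longer accepted, so access expires automatically with nothing to revoke or forget.
```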