The risk of identity fraud has increased significantly, with attacks occurring more frequently since the start of the pandemic, Onfido reveals. Over the past 12 months, the average identity document (ID) fraud rate increased by 41% over the previous year, as first-time fraudsters appear to be more prevalent, likely due to increased economic hardship during the pandemic. The average ID fraud rate reached 5.8%, up from 4.1% the previous year (Oct 2018-2019).
A study of face recognition technology created after the onset of the COVID-19 pandemic shows that some software developers have made demonstrable progress at recognizing masked faces.
The findings, produced by NIST, measure the performance of face recognition algorithms developed following the arrival of the pandemic. A previous report from July explored the effect of masked faces on algorithms submitted before March 2020, indicating that software available before the pandemic often had more trouble with masked faces.
“Some newer algorithms from developers performed significantly better than their predecessors. In some cases, error rates decreased by as much as a factor of 10 between their pre- and post-COVID algorithms,” said NIST’s Mei Ngan, one of the study’s authors.
“In the best cases, software algorithms are making errors between 2.4 and 5% of the time on masked faces, comparable to where the technology was in 2017 on nonmasked photos.”
65 newly submitted algorithms
The new study adds the performance of 65 newly submitted algorithms to those that were tested on masked faces in the previous round, offering cumulative results for 152 total algorithms.
Developers submitted algorithms to the FRVT voluntarily, but their submissions do not indicate whether an algorithm is designed to handle face masks, or whether it is used in commercial products.
Using the same set of 6.2 million images as it had previously, the team again tested the algorithms’ ability to perform “one-to-one” matching, in which a photo is compared with a different photo of the same person — a function commonly used to unlock a smartphone.
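One-to-one matching of this kind is typically implemented by comparing feature vectors (embeddings) extracted from the two photos and accepting the match only if their similarity clears a threshold. The sketch below is illustrative only: the embeddings, threshold, and `verify` helper are hypothetical, not NIST's test pipeline.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two face-embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def verify(probe, enrolled, threshold=0.8):
    """One-to-one match: accept only if similarity clears the threshold."""
    return cosine_similarity(probe, enrolled) >= threshold

# Hypothetical embeddings: an enrolled face, a new probe of the same
# person, and a stranger's face.
enrolled = [0.12, 0.85, 0.31, 0.44]
same_person = [0.10, 0.88, 0.29, 0.47]
stranger = [0.90, 0.05, 0.70, 0.02]

print(verify(same_person, enrolled))  # True
print(verify(stranger, enrolled))     # False
```

Masks effectively remove part of the signal the embedding is built from, which is why error rates rise when one or both photos are occluded.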
(The team did not test algorithms’ ability to perform “one-to-many” matching — often used to find matches in a large database — but plans to do so in a later round.) And as with the July report, the images had mask shapes digitally applied, rather than showing people wearing actual masks.
When both the new image and the stored image are of masked faces, error rates run higher
With a couple of notable exceptions, when the face was occluded in both photos, false match rates ran 10 to 100 times higher than if the original saved image showed an uncovered face.
Smartphones often use one-to-one matching for security, and it would be far more likely for a stranger to successfully unlock a phone if the saved image was of a masked person.
The more of a face a mask covers, the higher the algorithm’s error rate tends to be
Continuing a trend from the July 2020 report, round mask shapes — which cover only the mouth and nose — generated fewer errors than wide ones that stretch across the cheeks, and those covering the nose generated more errors than those that did not.
Mask colors affect the error rate
The new study explored the effects of two new mask colors — red and white — as well as the black and light blue masks the July study tested. While there were exceptions, the red and black masks tended to yield higher error rates than the other colors did. The research team did not investigate potential reasons for this effect.
A few algorithms perform well with any combination of masked or unmasked faces
Some developers have created “mask-agnostic” software that can handle images regardless of whether or not the faces are masked. The algorithms detect the difference automatically, without being told.
A final significant point that the research team makes also carries over from previous studies: Individual algorithms differ. End users need to get to know how their chosen software performs in their own specific situations, ideally using real physical masks rather than the digital simulations the team used in the study.
“It is incumbent upon the system owners to know their algorithm and their data,” Ngan said. “It will usually be informative to specifically measure accuracy of the particular algorithm on the operational image data collected with actual masks.”
As the number of data breaches shows no signs of decreasing, the clamor to replace passwords with biometric authentication continues to grow. Biometrics are becoming widely incorporated to secure organizations from unauthorized access and the growing appeal of these security solutions is expected to create a market worth $41.8 billion by 2023, according to MarketsandMarkets.
Password reuse is the fundamental reason why data breaches continue to happen. In recent years biometrics have increasingly been lauded as a superior authentication solution to passwords. However, biometrics are not immune from problems and once you look under the hood, they bring their own set of challenges.
There are several flaws, including one with potentially fatal implications, that organizations can’t and shouldn’t ignore when exploring biometric authentication. These include:
1. Biometrics are forever
This is the Achilles heel: once a biometric is exposed/compromised, you can’t replace it. There is no way to refresh or update your fingerprint, your retina, or your face. Therefore, if a user’s biometric information is exposed, then any account using this authentication method is at risk, and there is no way to reverse the damage.
Biometrics are on display, leaving them open to potential exploitation. For example, facial information can be obtained online or through a photo of someone, unlike passwords, which remain private unless stolen. With a detailed enough representation of a biometric marker, it’s possible to spoof it and, with the rise of deep-fake technology, it will become even easier to spoof biometrics.
As biometrics are forever, it’s vital that organizations make it as difficult as possible for hackers to crack the algorithm if there is a breach. They can do it by using a strong hashing algorithm and not storing any data in plain text.
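In code, "not storing any data in plain text" means the server keeps only a salted, slow hash and compares in constant time. A caveat worth hedging: raw biometric readings are fuzzy, so exact-match hashing like this only works on a stable derived representation of the biometric, not on the raw sensor data. The sketch below uses only Python's standard library; the function names are illustrative.

```python
import hashlib
import hmac
import os

def protect_template(template_bytes):
    """Store only a salted, slow hash of the (stable) template,
    never the raw bytes themselves."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", template_bytes, salt, 200_000)
    return salt, digest

def check_template(template_bytes, salt, stored_digest):
    """Recompute the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", template_bytes, salt, 200_000)
    return hmac.compare_digest(candidate, stored_digest)

salt, digest = protect_template(b"stable-template-bytes")
print(check_template(b"stable-template-bytes", salt, digest))  # True
print(check_template(b"someone-else", salt, digest))           # False
```

The random per-user salt and high iteration count make offline cracking of a leaked database far more expensive, which matters all the more when the underlying secret can never be rotated.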
2. Device/service limitations
Despite the ubiquity of devices with biometric scanners and the number of apps that support biometric authentication, many devices can’t incorporate the technology. While biometrics are commonplace in smart devices, this is not the case with many desktop or laptop computers, which still don’t include biometric readers. Also, when it comes to signing into websites via a browser, the use of biometric authentication is currently extremely limited. Therefore, until every device and browser is compatible, relying solely on biometric authentication is not even a possibility.
The most widespread consumer-oriented biometric authentication approaches (Apple’s TouchID/FaceID and the Android equivalents) are essentially client-side only – acting as a key that unlocks a locally stored set of authentication credentials for the target application or service.
While this approach works well for this use case and has the advantage of not storing sensitive biometric signatures on servers, it precludes the possibility of having this be the only authentication mechanism (i.e., if I try to access the service from a different device, I’ll have to re-authenticate using credentials such as a username and password before I can re-enable biometric authentication, assuming the new device even supports it). To truly have a biometric-first (or biometric-only) authentication approach, you need a different model – one where the biometric signature is stored server-side.
3. Spoofing threats
Another concern with biometric authentication systems is that the scanner devices have shown they are susceptible to spoofing. Hackers have succeeded in making scanners accept casts, molds, or other replicas of valid users’ fingerprints or faces. Although liveness detection has come a long way, it is still far from perfect. Until spoof detection becomes more sophisticated, this risk will remain.
4. Biometric changes
The possibility of changes to users’ biometrics (injury to or loss of a fingerprint for instance, or a disfiguring injury to the face) is another potential issue, especially in the case where biometric authentication is the only authentication method in use and there is no fallback available.
If a breach happens due to biometric authentication, once a cybercriminal gains access, they can then change the logins for these accounts and lock the legitimate user out of their account. This puts the onus on organizations to alert users to take immediate action to mitigate the risk. If there is a breach, both enterprises and users should immediately turn off biometrics on their devices and revert back to the default, usually passwords or passcodes.
Adopting a layered approach to authentication
Rather than searching for a magic bullet for authentication, organizations need to embrace a layered approach to security. In the physical world, you would never rely solely on one solution and in the digital world, you should adopt the same philosophy. In addition to this layered approach, organizations should focus on hardening every element to shore up their digital defenses.
The simplicity and convenience of biometrics will ensure that it continues to be an appealing option for both enterprises and users. However, relying solely on biometric authentication is a high-risk strategy due to the limitations outlined above. Instead, organizations should deploy biometrics selectively as part of the overall identity management strategy, but they must include other security elements to mitigate the potential risks. It’s clear that, despite the buzz, 2021 will not be the year that biometrics replace passwords.
Love them or loathe them, passwords will remain a fixture in our digital lives.
Organizations underwent an unprecedented IT change this year amid a massive shift to remote work, accelerating adoption of cloud technology, Duo Security reveals.
The security implications of this transition will reverberate for years to come, as the hybrid workplace demands the workforce to be secure, connected and productive from anywhere.
The report details how organizations, with a mandate to rapidly transition their entire workforce to remote, turned to remote access technologies such as VPN and RDP, among numerous other efforts.
As a result, authentication activity to these technologies swelled 60%. A complementary survey recently found that 96% of organizations made cybersecurity policy changes during the COVID-19 pandemic, with more than half implementing MFA.
Cloud adoption also accelerated
Daily authentications to cloud applications surged 40% during the first few months of the pandemic, the bulk of which came from enterprise and mid-sized organizations looking to ensure secure access to various cloud services.
As organizations scrambled to acquire the requisite equipment to support remote work, employees relied on personal or unmanaged devices in the interim. Consequently, blocked access attempts due to out-of-date devices skyrocketed 90% in March. That figure fell precipitously in April, indicating healthier devices and decreased risk of breach due to malware.
“As the pandemic began, the priority for many organizations was keeping the lights on and accepting risk in order to accomplish this end,” said Dave Lewis, Global Advisory CISO, Duo Security at Cisco. “Attention has now turned towards lessening risk by implementing a more mature and modern security approach that accounts for a traditional corporate perimeter that has been completely upended.”
Additional report findings
So long, SMS – The prevalence of SIM-swapping attacks has driven organizations to strengthen their authentication schemes. Year-over-year, the percentage of organizations that enforce a policy to disallow SMS authentication nearly doubled from 8.7% to 16.1%.
Biometrics booming – Biometrics are nearly ubiquitous across enterprise users, paving the way for a passwordless future. Eighty percent of mobile devices used for work have biometrics configured, up 12% over the past five years.
Cloud apps on pace to pass on-premises apps – Use of cloud apps is on pace to surpass use of on-premises apps by next year, accelerated by the shift to remote work. Cloud applications make up 13.2% of total authentications, a 5.4% increase year-over-year, while on-premises applications encompass 18.5% of total authentications, down 1.5% since last year.
Apple devices 3.5 times more likely to update quickly vs. Android – Ecosystem differences have security consequences. On June 1, Apple iOS and Android both issued software updates to patch critical vulnerabilities in their respective operating systems.
iOS devices were 3.5 times more likely to be updated within 30 days of a security update or patch, compared to Android.
Windows 7 lingers in healthcare despite security risks – More than 30% of Windows devices in healthcare organizations still run Windows 7, despite end-of-life status, compared with 10% of organizations across Duo’s customer base.
Healthcare providers are often unable to update deprecated operating systems due to compliance requirements and restrictive terms and conditions of third-party software vendors.
Windows devices, Chrome browser dominate business IT – Windows continues its dominance in the enterprise, accounting for 59% of devices used to access protected applications, followed by macOS at 23%. Overall, mobile devices account for 15% of corporate access (iOS: 11.4%, Android: 3.7%).
On the browser side, Chrome is king with 44% of total browser authentications, resulting in stronger security hygiene overall for organizations.
UK and EU trail US in securing cloud – United Kingdom and European Union-based organizations trail US-based enterprises in user authentications to cloud applications, signaling less cloud use overall or a larger share of applications not protected by MFA.
In the aftermath of the COVID-19 pandemic, global biometric device revenues are expected to drop 22%, ($1.8 billion) to $6.6 billion, according to a report from ABI Research. The entire biometrics market, however, will regain momentum in 2021 and is expected to reach approximately $40 billion in total revenues by 2025.
Global biometric device revenues in 2020
“The current decline in the biometrics market landscape stems from multifaceted challenges from a governmental, commercial, and technological nature,” explains Dimitris Pavlakis, Digital Security Industry Analyst.
“First, they have been instigated primarily due to economic reforms during the crisis which forced governments to constrain budgets and focus on damage control, personnel well-being, and operational efficiency.
“Governments had to delay or temporarily cancel many fingerprint-based applications related to user/citizen and patient registration, physical access control, on-premise workforce management, and certain applications in border control or civil, welfare, immigration, law enforcement, and correctional facilities.
“Second, commercial on-premise applications and access control suffered as remote work became the new norm for the first half of 2020. Lastly, hygiene concerns about contact-based fingerprint technologies pummelled biometrics revenues, forcing a sudden drop in fingerprint shipments worldwide.”
Not all is bleak, though
New use-case scenarios have emerged, and certain technological trends have risen to the top of implementation lists: for example, enterprise mobility and logical access control using biometrics as part of multi-factor authentication (MFA) for remote workers.
“Current MFA applications for remote workers might well translate into permanent information technology security authentication measures in the long term,” says Pavlakis. “This will improve biometrics-as-a-service (BaaS) monetization and authentication models down the line.”
Biometrics applications can now look toward new implementation horizons, with market leaders and pioneering companies like Gemalto (Thales), IDEMIA, NEC, FPC, HID Global, and Cognitec at the forefront of innovation.
“Future smart city infrastructure investments will now factor in additional surveillance, real-time behavioral analytics, and face recognition for epidemiological research, monitoring, and emergency response endeavors,” Pavlakis concludes.
Detecting Deep Fakes with a Heartbeat
Researchers can detect deep fakes because they don’t convincingly mimic human blood circulation in the face:
In particular, video of a person’s face contains subtle shifts in color that result from pulses in blood circulation. You might imagine that these changes would be too minute to detect merely from a video, but viewing videos that have been enhanced to exaggerate these color shifts will quickly disabuse you of that notion. This phenomenon forms the basis of a technique called photoplethysmography, or PPG for short, which can be used, for example, to monitor newborns without having to attach anything to their very sensitive skin.
Deep fakes don’t lack such circulation-induced shifts in color, but they don’t recreate them with high fidelity. The researchers at SUNY and Intel found that “biological signals are not coherently preserved in different synthetic facial parts” and that “synthetic content does not contain frames with stable PPG.” Translation: Deep fakes can’t convincingly mimic how your pulse shows up in your face.
The inconsistencies in PPG signals found in deep fakes provided these researchers with the basis for a deep-learning system of their own, dubbed FakeCatcher, which can categorize videos of a person’s face as either real or fake with greater than 90 percent accuracy. And these same three researchers followed this study with another demonstrating that this approach can be applied not only to revealing that a video is fake, but also to show what software was used to create it.
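The core intuition can be sketched in code. FakeCatcher itself is a trained deep-learning system, so everything below is a deliberately simplified toy: it extracts a crude PPG proxy (mean green-channel intensity per frame) from two facial regions and flags content where the regions' pulse signals diverge, since the researchers found that biological signals are not coherently preserved across synthetic facial parts. Region format, threshold, and function names are all hypothetical.

```python
import statistics

def ppg_signal(frames):
    """Toy PPG proxy: per-frame mean green intensity over a skin region.
    Each frame is a list of (r, g, b) pixel tuples."""
    return [statistics.mean(px[1] for px in region) for region in frames]

def looks_synthetic(left_cheek_frames, right_cheek_frames, tol=2.0):
    """Real faces show coherent pulse-driven color shifts across regions;
    large left/right divergence is a (toy) synthetic-content flag."""
    left = ppg_signal(left_cheek_frames)
    right = ppg_signal(right_cheek_frames)
    divergence = statistics.mean(abs(l - r) for l, r in zip(left, right))
    return divergence > tol

# Coherent regions (real-like) vs. divergent regions (fake-like).
coherent = [[(0, 100, 0)] * 3, [(0, 102, 0)] * 3]
divergent = [[(0, 100, 0)] * 3, [(0, 130, 0)] * 3]
print(looks_synthetic(coherent, coherent))   # False
print(looks_synthetic(coherent, divergent))  # True
```

A real detector would track many regions over hundreds of frames and feed the spatiotemporal PPG maps to a classifier, but the coherence check is the heart of the idea.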
Of course, this is an arms race. I expect deep fake programs to become good enough to fool FakeCatcher in a few months.
In the 1960s, Woodrow W. Bledsoe created a secret program that manually identified points on a person’s face and compared the distances between these coordinates with other images.
Facial recognition technology has come a long way since then. The field has evolved quickly and software can now automatically process staggering amounts of facial data in real time, dramatically improving the results (and reliability) of matching across a variety of use cases.
Despite all of the advancements we’ve seen, many organizations still rely on the same algorithm used by Bledsoe’s database – known as “k-nearest neighbors” or k-NN. Since each face has multiple coordinates, a comparison of these distances over millions of facial images requires significant data processing. The k-NN algorithm simplifies this process and makes matching these points easier by considerably reducing the data set. But that’s only part of the equation. Facial recognition also involves finding the location of a feature on a face before evaluating it. This requires a different algorithm such as HOG (we’ll get to it later).
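The k-NN idea itself is simple enough to sketch: compare a probe face's feature vector against every enrolled vector, keep the k closest, and take the majority label. This is a minimal, hypothetical illustration using 2-D toy features, not any vendor's actual pipeline.

```python
import math

def euclidean(a, b):
    """Distance between two face feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def knn_identify(probe, gallery, k=3):
    """Return the majority label among the k nearest enrolled faces."""
    nearest = sorted(gallery, key=lambda item: euclidean(probe, item[0]))[:k]
    labels = [label for _, label in nearest]
    return max(set(labels), key=labels.count)

# Hypothetical 2-D "landmark distance" features for enrolled faces.
gallery = [
    ([1.0, 1.1], "alice"),
    ([1.1, 0.9], "alice"),
    ([5.0, 5.2], "bob"),
    ([4.8, 5.1], "bob"),
]
print(knn_identify([1.05, 1.0], gallery))  # alice
```

Real systems use high-dimensional embeddings and approximate nearest-neighbor indexes to make this tractable over millions of faces, which is the data-reduction role the article describes.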
The algorithms used for facial recognition today rely heavily on machine learning (ML) models, which require significant training. Unfortunately, the training process can result in biases in these technologies. If the training doesn’t contain a representative sample of the population, ML will fail to correctly identify the missed population.
While this may not be a significant problem when matching faces for social media platforms, it can be far more damaging when the facial recognition software from Amazon, Google, Clearview AI and others is used by government agencies and law enforcement.
Previous studies on this topic found that facial recognition software suffers from racial biases, but overall, the research on bias has been thin. The consequences of such biases can be dire for both people and companies. Further complicating matters is the fact that even small changes to one’s face, hair or makeup can impact a model’s ability to accurately match faces. If not accounted for, this can create distinct challenges when trying to leverage facial recognition technology to identify women, who generally tend to use beauty and self-care products more than men.
Understanding sexism in facial recognition software
Just how bad are gender-based misidentifications? Our team at WatchGuard conducted some additional facial recognition research, looking solely at gender biases, to find out. The results were eye-opening: the solutions we evaluated misidentified women 18% more often than men.
You can imagine the terrible consequences this type of bias could generate. For example, a smartphone relying on face recognition could block access, a police officer using facial recognition software could mistakenly identify an innocent bystander as a criminal, or a government agency might call in the wrong person for questioning based on a false match. The list goes on. The reality is that the culprit behind these issues is bias within model training that creates biases in the results.
Let’s explore how we uncovered these results.
Our team performed two separate tests – the first using Amazon Rekognition and the second using Dlib. Unfortunately, with Amazon Rekognition we were unable to unpack how its ML modeling and algorithm work due to transparency issues (although we assume it’s similar to Dlib). Dlib is a different story, and uses local resources to identify faces provided to it. It comes pretrained to locate faces using one of two detectors: HOG, a slower CPU-based algorithm, and CNN, a faster algorithm that uses the specialized processors found in graphics cards.
Both services provide match results with additional information. Besides the match found, a similarity score is given that shows how closely the probe face matches the known face. If the match threshold is set too low, a face that isn’t on file may be incorrectly matched. However, a face can also have a low similarity score and still match when the image doesn’t show the face clearly.
For the data set, we used a database of faces called Labeled Faces in the Wild, and we only investigated faces that matched another face in the database. This allowed us to test matching faces and similarity scores at the same time.
Amazon Rekognition correctly identified all pictures we provided. However, when we looked more closely at the data, our team saw a wider distribution of similarities in female faces than in male faces: more female faces with higher similarities than male faces, and more with lower similarities (this actually matches a recent study performed around the same time).
What does this mean? Essentially it means a female face not found in the database is more likely to provide a false match. Also, because of the lower similarity in female faces, our team was confident that we’d see more errors in identifying female faces over male if given enough images with faces.
Amazon Rekognition gave accurate results but lacked consistency and precision between male and female faces. Male faces were on average 99.06% similar, while female faces averaged 98.43% similar. This might not seem like a big variance, but the gap widened when we looked at the outliers: a standard deviation of 1.64 for males versus 2.83 for females. More female faces fall farther from the average than male faces, meaning a female false match is far more likely than the 0.6-point difference in averages would suggest.
Dlib didn’t perform as well. On average, Dlib misidentified female faces more often than male faces, at a rate 5% higher on average. When comparing faces using the slower HOG detector, the difference grew to 18%. Interestingly, our team found that female faces on average have higher similarity scores than male faces when using Dlib but, like Amazon Rekognition, also show a wider spread of similarity scores, which explains the lower accuracy we found.
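The "wider spread" argument is easy to reproduce with the standard library: two populations can have nearly identical means while one has a much larger standard deviation, putting far more of its scores below any fixed match threshold. The similarity scores below are made up for illustration; only the qualitative pattern (similar means, different spreads) mirrors the study.

```python
import statistics

# Hypothetical similarity scores (percent), for illustration only.
male_scores = [99.5, 99.0, 98.9, 99.3, 98.6]
female_scores = [99.8, 96.2, 99.5, 97.0, 99.6]

for name, scores in [("male", male_scores), ("female", female_scores)]:
    print(name,
          round(statistics.mean(scores), 2),   # central tendency
          round(statistics.stdev(scores), 2))  # spread (outlier-sensitive)
```

Even though the two means here differ by well under one point, the female distribution's larger standard deviation means more scores land in the low tail, which is exactly where false matches and missed matches occur.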
Tackling facial recognition bias
Unfortunately, facial recognition software providers struggle to be transparent when it comes to the efficacy of their solutions. For example, our team didn’t find any place in Amazon’s documentation in which users could review the processing results before the software made a positive or negative match.
Unfortunately, this assumption of accuracy (and lack of context from providers) will likely lead to more and more instances of unwarranted arrests, like this one. It’s highly unlikely that facial recognition models will reach 100% accuracy anytime soon, but industry participants must focus on improving their effectiveness nonetheless. Knowing that these programs contain biases today, law enforcement and other organizations should use them as one of many tools – not as a definitive resource.
But there is hope. If the industry can honestly acknowledge and address the biases in facial recognition software, we can work together to improve model training and outcomes, which can help reduce misidentifications not only based on gender, but race and other variables, too.
A large percentage of Americans currently do not take the necessary steps to protect their passwords and logins online, FICO reveals.
As consumers’ reliance on online services grows in response to COVID-19, the study examined the steps Americans are taking to protect their financial information online, as well as attitudes towards increased digital services and alternative security options such as behavioral biometrics.
Do you use a password manager?
The study found that a large percentage of Americans are not taking the necessary precautions to secure their information online. For example, only 42 percent are using separate passwords to access multiple accounts; 17 percent of respondents have between two to five passwords they reuse across accounts; and 4 percent use a single password across all accounts.
Additionally, less than a quarter (23 percent) of respondents use an encrypted password manager, which many consider best practice, while 30 percent are using high-risk strategies such as writing their passwords down in a notebook.
“We’re seeing more cyber criminals targeting consumers with COVID-19 related phishing and social engineering,” said Liz Lasher, vice president of fraud portfolio marketing at FICO.
“Because of the current situation, many consumers are only able to access their finances digitally, so it’s vital to remain vigilant against such scams and take the right precautions to protect themselves digitally.”
A forgotten password can affect online purchases
The study shows that consumers struggle with maintaining their current passwords as 28 percent reported abandoning an online purchase because they forgot login information, and 26 percent reported being unable to check an account balance.
Forgotten usernames and passwords even affect new account openings: 13 percent of respondents said forgotten credentials have stopped them from opening a new account with an existing provider.
This is a notable trend as consumers are more willing than ever to do business digitally. The study found that the majority of respondents would open a checking (52 percent) or mobile phone (64 percent) account online, while an overwhelming majority of respondents (82 percent) said they would open a credit card account online.
Consumers trusting physical and behavioral biometrics
However, while there is significant room to improve how consumers protect their login credentials, the survey also found that Americans are becoming more trusting of using physical and behavioral biometrics to secure their financial accounts.
The survey found that 78 percent of respondents would be happy for their bank to analyze behavioral biometrics – such as how they type – for security, 65 percent are happy to provide biometrics to their bank, and 60 percent are open to using fingerprint scans to secure their accounts.
Additionally, when logging into their mobile banking apps, respondents are now considering alternative security measures beyond the traditional username and password. The five most widely used security alternatives are:
- One-time passcode via SMS (53 percent)
- One-time passcode via email (43 percent)
- Fingerprint scan (39 percent)
- Facial Scan (24 percent)
- One-time passcode delivered and spoken to mobile phone (23 percent)
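The one-time passcodes in the list above are typically generated with the standard HOTP/TOTP algorithms (RFC 4226 and RFC 6238): an HMAC over a counter or time window, dynamically truncated to a short decimal code. A minimal standard-library sketch (the shared secret here is of course illustrative):

```python
import hashlib
import hmac
import struct
import time

def hotp(secret, counter, digits=6):
    """RFC 4226 HOTP: HMAC-SHA1 over the counter, dynamically truncated."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

def totp(secret, step=30):
    """RFC 6238 TOTP: HOTP keyed to the current 30-second time window."""
    return hotp(secret, int(time.time()) // step)

print(totp(b"shared-secret-key"))  # a fresh 6-digit code each 30 seconds
```

Whether the code is delivered over SMS, email, or an authenticator app changes the threat model (SIM swapping, mailbox compromise) but not the underlying math.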
“Digital services are currently playing a critical role in daily life. It is a good time to evaluate how we protect ourselves and our information online,” said Lasher.
“Customers have been happy to adopt security such as one-time passcodes, and are now showing that they are willing to adopt additional options, such as biometrics, to protect their accounts.
“There are no magic bullets and the ability to layer and deploy multiple authentication methods appropriate to each occasion is key. Financial services organizations and consumers need to continue to keep security best practices top of mind to help combat fraudsters now and in the future.”
The pandemic is expected to cause a significant pushback on biometric device shipments, creating a major revenue drop of $2 billion over the course of 2020, according to ABI Research.
New identification and surveillance needs
At the same time, the pandemic has given rise to new identification and surveillance needs, spurring further investments in biometric AI algorithm design, which will give a boost to the face recognition technologies market going forward.
“Contact biometric technologies like fingerprint and vein have been dealt a substantial blow due to new governmental regulations targeting contact and close-proximity interactions. Fingerprint biometrics vendors are struggling to uphold the new stringent hygiene and infectious control protocols.
“These regulations have been correctly introduced for the safety of users and personnel, but they have also affected sales in certain verticals,” explains Dimitrios Pavlakis, Digital Security Analyst at ABI Research.
“On-premises physical access control, user registration, identification, and workforce management systems have been greatly affected in the enterprise and commercial space, but these applications also spread into healthcare, law enforcement, border control, government, civil, and welfare,” Pavlakis adds.
Contact-less fingerprint sensing technologies
While contact-only companies will have additional hurdles to overcome in most markets, innovative companies like Gemalto and IDEMIA have already adapted their solutions offering contact-less fingerprint sensing technologies.
Additionally, fingerprint sensor vendors operating in consumer markets like FPC and Goodix will be mostly affected by smartphone sales, rather than hygiene concerns, due to the personal nature of user authentication.
The total biometric device market is expected to reach $28.2 billion in 2020, with the government and security market taking a significant hit of $1.1 billion. Fingerprint device sales are also expected to decrease in 2020 by $1.2 billion. Not all is bleak, however.
“AI biometric firms are adapting to the biological threat. Biometric technologies are currently undergoing a forced evolution rather than an organic one, with artificial intelligence biometric firms spearheading the charge,” says Pavlakis.
“New IoT and smart city-focused applications will enable new data streams and analytics, monitoring infection rates in real-time, forcing new data-sharing initiatives, and even applying behavioral AI models to predict future outbreaks.”
Face and iris recognition, temperature and fever detection
Face and iris recognition have been brought into the spotlight as key technologies allowing authentication, identification, and surveillance operations for users and citizens wearing protective headgear, face masks, or with partially covered faces.
These elements, once the bane of face recognition algorithms, have now been integrated into algorithm developers’ value propositions, followed by a further investment boost targeting surveillance, video analytics, and smart city applications.
Temperature and fever detection technologies making use of infrared sensing have also been retrofitted into access and border control, while biometric telemedicine applications are providing healthcare support to consumers and patients remotely. AI investments have been primarily instigated by leading Chinese firms like SenseTime, Megvii, Alibaba, and Baidu.
IT security practitioners are aware of good habits when it comes to strong authentication and password management, yet often fail to implement them due to poor usability or inconvenience, according to Yubico and Ponemon Institute.
The conclusion is that IT security practitioners and individuals are both engaging in risky password and authentication practices, yet expectation and reality are often misaligned when it comes to the implementation of usable and desirable security solutions.
The tools and processes that organizations put in place are not widely adopted by employees or customers, making it abundantly clear that new technologies are needed for enterprises and individuals to reach a safer future together.
“IT professional or not, people do not want to be burdened with security — it has to be usable, simple, and work instantly,” said Stina Ehrensvärd, CEO and Co-Founder, Yubico.
“For years, achieving a balance between high security and ease of use was near impossible, but new authentication technologies are finally bridging the gap. With the availability of passwordless login and security keys, it’s time for businesses to step up their security options. Organizations can do far better than passwords; in fact, users are demanding it.”
Individuals report better security practices in some instances compared to IT pros
Out of the 35% of individuals who report that they have been the victim of an account takeover, 76% changed how they managed their passwords or protected their accounts. Of the 20% of IT security respondents who have been the victim of an account takeover, 65% changed how they managed their passwords or protected their accounts.
Both individuals and IT security respondents have reused passwords on an average of 10 of their personal accounts, but individual users (39%) are less likely to reuse passwords across workplace accounts than IT professionals (50%).
Poor password hygiene
Fifty-one percent of IT security respondents say their organizations have experienced a phishing attack, with another 12% of respondents stating that their organizations experienced credential theft, and 8% say it was a man-in-the-middle attack.
Yet, only 53% of IT security respondents say their organizations have changed how passwords and corporate accounts are managed and protected. Interestingly enough, individuals reuse passwords across an average of 16 workplace accounts, while IT security respondents say they reuse passwords across an average of 12 workplace accounts.
Mobile use is on the rise
Fifty-five percent of IT security respondents report that the use of personal mobile devices is permitted at work and an average of 45% of employees in the organizations represented are using their mobile device for work.
Alarmingly, 62% of IT security respondents say their organizations don’t take the necessary steps to protect information on mobile phones. Fifty-one percent of individuals use their personal mobile device to access work-related items, and of these, 56% don’t use two-factor authentication (2FA).
Poor employee access protection
Given the complexities of securing a modern, mobile workforce, organizations struggle to find simple, yet effective ways of protecting employee access to corporate accounts. Roughly half of all respondents (49% of IT security and 51% of individuals) share passwords with colleagues to access business accounts.
Fifty-nine percent of IT security respondents report that their organization relies on human memory to manage passwords, while 42% say sticky notes are used. Only 31% of IT security respondents say that their organization uses password managers, which are effective tools to securely create, manage, and store passwords.
Concerns about customer information and PII security
IT security respondents say they are most concerned about protecting customer information and personally identifiable information (PII). However, 59% of IT security respondents say customer accounts have been subject to an account takeover. Despite this, 25% of IT security respondents say their organizations have no plans to adopt 2FA for customers.
Of these 25% of IT security respondents, 60% say their organizations believe usernames and passwords provide sufficient security and 47% say their organizations are not going to provide 2FA because it will affect convenience by adding an extra step during login.
When businesses are choosing to protect customer accounts and data, the 2FA options that are used most often do not offer adequate protection for users.
Three main 2FA methods
IT security respondents report that SMS codes (41%), backup codes (40%), or mobile authentication apps (37%) are the three main 2FA methods that they support or plan to support for customers. SMS codes and mobile authenticator apps are typically tied to only one device.
Additionally, 23% of individuals find 2FA methods like SMS and mobile authentication apps to be very inconvenient. Individuals rate security (56%), affordability (57%), and ease of use (35%) as very important.
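The mobile authenticator apps mentioned above typically generate their codes with the TOTP algorithm (RFC 6238), which derives a short-lived code from a shared secret and the current time, so nothing has to travel over SMS. A minimal sketch in Python; the secret shown is the RFC test key, not a real credential:

```python
import hashlib
import hmac
import struct
import time

def totp(secret, timestamp=None, step=30, digits=6):
    """Generate an RFC 6238 time-based one-time password."""
    if timestamp is None:
        timestamp = int(time.time())
    counter = timestamp // step                 # elapsed 30-second steps
    msg = struct.pack(">Q", counter)            # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): offset taken from the last nibble
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Deterministic example using the RFC 6238 test secret and T = 59 seconds
print(totp(b"12345678901234567890", timestamp=59))  # prints "287082"
```

Because both sides derive the code locally from the shared secret, a TOTP code is not tied to a phone number the way an SMS code is, though it is still phishable, which is part of why hardware security keys are discussed later in the study.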
Individuals only adopting new technologies that are easy to use
It is clear that new technologies are needed for enterprises and individuals to reach a safer future together. Across the board, passwords are cumbersome, mobile use introduces a new set of security challenges, and the security tools that organizations have put in place are not being widely adopted by employees or customers.
In fact, 49% of individuals say that they would like to improve the security of their accounts and have already added extra layers of protection beyond a username and password.
However, 56% of individuals will only adopt new technologies that are easy to use and significantly improve account security. Here’s what is preferred: biometrics, security keys, and password-free login.
Passwordless methods are preferred
A majority of IT security respondents and individuals (55%) would prefer a method of protecting accounts that doesn’t involve passwords. Both IT security (65%) and individual users (53%) believe the use of biometrics would increase the security of their organization or accounts.
And lastly, 56% of individuals and 52% of IT security professionals believe a hardware token would offer better security.
The proliferation of real-time payments platforms, including person-to-person (P2P) transfers and mobile payment platforms across Asia Pacific, has increased fraud losses for the majority of banks.
FICO recently conducted a survey with banks in the region and found that 4 out of 5 (78 percent) have seen their fraud losses increase.
Further to this, almost a quarter (22 percent) say that fraud will rise significantly in the next 12 months, with an additional 58 percent saying they expect a moderate rise in fraud.
“While the convenience of real-time payments is great news for customers, increasingly, banks have zero time to clear a transaction or payment. AI can’t slow down the clock, but it can help create systems that are radically quicker to recognize a transaction that smells likely to be fraudulent,” said Dan McConaghy, president of FICO in Asia Pacific.
“Banks will need to move beyond passwords and OTPs and add biometrics, device telemetry and customer behavior analytics to keep up with the changing payments landscape.”
Authentication and identity tech
When asked which identity and authentication strategies they used, the majority of APAC banks (84 percent) have a multi-factor authentication strategy. They increasingly use a wide range of authentication methods, including biometrics (64 percent), normal passwords (62 percent) and, in last place, behavioral authentication (38 percent).
Interestingly, nearly half of the respondents (46 percent) are currently only using one or two of these strategies, potentially leaving them more exposed to attack vectors such as identity theft, account takeovers, and cyberattacks.
“Why try to crack a safe when you can walk in the front door?” explained McConaghy.
“Criminals are trying to fool banks into thinking they are new customers or stealing account access by tricking people into making security mistakes or giving away sensitive information. When they are successful, criminals are making use of real-time payments to move funds quickly through a maze of global accounts.”
The survey bore this out with 40 percent of banks naming social engineering as the number one fraud concern when it comes to real-time payments. Account takeovers were ranked second, with false accounts and money mules also rated as problems.
New forms of biometric, multi-factor and behavioral technologies allow banks to stop payments being made, even if an account appears to be using the correct but stolen password or entering the right, but intercepted, one-time-password.
“Beyond this type of account take over, we also have authorized push payment fraud, such as when a customer is tricked into paying what they think is a legitimate invoice like a fake school bill or payment to a tradesperson,” said McConaghy.
“This type of social engineering is harder to stop but better KYC, link analysis to find money mule accounts and behavioral analytics to flag new accounts for a regular payee, are all examples of how to tackle it.”
Mitigating criminal behavior
Beyond fraud on real-time payment platforms, criminals engaged in drug trafficking, human smuggling, tax evasion and terrorism financing are also attracted to the irrevocable nature of instant payments.
The lack of visibility between jurisdictions has seen regulators encouraging banks to move quickly in this cross-border payments space to ensure payments are compliant and secure.
In terms of mitigating this criminal behavior, more than 90 percent of APAC banks surveyed thought that convergence between their fraud and compliance functions would be helpful in defending transactions on real-time payments platforms.
“We estimate that there is about an 80 percent overlap in software functionality between legacy fraud and anti-money laundering systems,” added McConaghy.
“To tackle fraud and money laundering schemes that exploit real-time money movement you need to leverage all the available technologies, automate as much as you can and introduce models that can identify outlier transactions and customer behavior so your teams can spend their time investigating the riskiest of the red flags.”
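As a toy illustration of the outlier-transaction idea McConaghy describes (not FICO’s actual models, which combine many behavioral features), even a simple z-score over a customer’s recent transaction amounts separates a wildly atypical transfer from routine spending. The transaction history below is invented for illustration:

```python
from statistics import mean, stdev

def flag_outliers(amounts, threshold=3.0):
    """Flag transactions whose amount deviates from the customer's
    history by more than `threshold` standard deviations."""
    if len(amounts) < 2:
        return []
    mu = mean(amounts)
    sigma = stdev(amounts)
    if sigma == 0:
        return []
    return [a for a in amounts if abs(a - mu) / sigma > threshold]

# Routine spending around $100, plus one anomalous $9,500 transfer
history = [120, 95, 110, 130, 105, 98, 115, 9500]
print(flag_outliers(history, threshold=2.0))  # prints [9500]
```

Production systems replace the single amount feature with dozens of behavioral signals (payee novelty, device telemetry, velocity) and score each payment in milliseconds, but the underlying question is the same: how far does this transaction sit from this customer’s normal behavior?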
A new report from Juniper Research found that facial recognition hardware, such as Face ID on recent iPhones, will be the fastest growing form of smartphone biometric hardware, reaching over 800 million devices in 2024, compared to an estimated 96 million in 2019.
The new research, Mobile Payment Authentication: Biometrics, Regulation & Forecasts 2019-2024, however notes that the majority of smartphone facial recognition will be software-based, with over 1.3 billion devices having that capability by 2024.
This is made possible by advances in AI, with companies like iProov and Mastercard offering facial recognition authentication that is strong enough to be used for payment and other high-end authentication tasks.
Juniper Research recommends that all vendors embrace AI to drive further developments of capabilities and therefore increase customer acquisition.
Fingerprints to lead remote commerce authentication
The research also found that despite the ubiquitous nature of selfie cameras, fingerprint hardware will remain a dominant element in biometric payments, as sensors expand to emerging markets. Juniper Research anticipates over 4.6 billion smartphones worldwide will have fingerprint sensors installed by 2024, although their usage for payment will be significantly lower than this.
This expansion of biometric capabilities will bring the technology to more eCommerce platforms, as retailers seek to meet enhanced security requirements. Although the technology was originally envisioned for contactless payments, the report expects over 60% of biometrically-authenticated payments in 2024 to be used for authorizing remote payments.
As the longest running biometric modality, fingerprint payments will take the lead in this market as standards coalesce around the technology more easily than for facial recognition payments.
“Many consumers are now used to making fingerprint-based biometric payments, both for contactless and remote payments,” remarked research author James Moar, a Lead Analyst at Juniper Research. “That familiarity and continued inclusion in smartphones will make it hard to displace in many markets.”
Policing in the 21st Century is obviously changing rapidly. New technological advances are fundamentally changing the way in which police forces and related government entities can track, locate, and collect evidence.
Two game-changing technologies, working together, perhaps underpin the greatest tool for policing the world over: the combination of high-resolution digital video capture and facial recognition. Both sit at the crux of future policing and bring new societal change.
Projecting forward, what could the next decade or two hold?
It’s easy to get into the realm of science fiction and dystopian futures, but when I consider some of the social impact (and “opportunities”) such technologies can bring, it naturally feels like the premise for several short stories.
One: Peeking Under Facial Obfuscation
As I watch news of the “Yellow Vest” riots in Paris, it is inevitable that high-resolution digital video capture of protesters – combined with facial recognition – will mean that many group protest actions will become individually attributable. While the face of the perpetrator may not initially be tied to an identity, a portfolio of digital captures can be compiled and (at some future date) associated with the named individual.
New technologies in the realm of full-color night-vision video capture and advanced infrared heat-based body and face mapping lay the basis for radically better tools for associating captured maleficence with an individual. Combine that with the work being done on infrared facial recognition, and we’ll soon find that scarves, balaclavas, or even helmets will cease to protect the identity of the perpetrator.
Two: Perpetual Digital Trail
Many large cities are approaching the point that it is impossible to stand in any public space or thoroughfare and not be captured by at least one video camera. Capitalizing on this, many metropolitan police forces are already able to real-time track a surveilled individual or entity through their networked cameras. In addition, some police forces have already combined such capabilities with facial recognition to quickly spot wanted individuals or suspects in crowds and track their movements across cameras in real-time.
We can expect the density and prevalence of cameras to grow. We can also expect the video captured by these cameras to increasingly move to the cloud and be retained indefinitely. The AI tooling today already enables us to intelligently stitch together all the video content and construct a historical trail for any person or physical entity.
When combined with (One), it means that in the near future police forces could track (both forward and reverse in time) each suspect – identifying not only the history of travel and events preceding the crime, but also their origination (e.g. home) address… and from there, arrive at an identity conclusion. Imagine doing this for thousands of protesters simultaneously. Obviously, such a capability would also facilitate the capture of facial images of the suspect before they donned any masks or facial obfuscation tools they used during the protest or crime.
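At its core, the trail-building idea sketched above reduces to matching face embeddings across timestamped camera detections and sorting the matches by time. A heavily simplified, hypothetical Python sketch; the camera names, timestamps, and tiny 2-D “embeddings” are all invented for illustration, whereas real systems use embedding vectors with hundreds of dimensions:

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def build_trail(detections, query_embedding, threshold=0.9):
    """Return the time-ordered (time, camera) trail of detections
    whose face embedding matches the query above the threshold."""
    matches = [d for d in detections
               if cosine(d["embedding"], query_embedding) >= threshold]
    return [(d["time"], d["camera"])
            for d in sorted(matches, key=lambda d: d["time"])]

detections = [
    {"camera": "A", "time": 100, "embedding": [0.99, 0.10]},
    {"camera": "B", "time": 160, "embedding": [0.98, 0.12]},
    {"camera": "C", "time": 90,  "embedding": [0.10, 0.99]},  # different face
]
print(build_trail(detections, [1.0, 0.11]))  # prints [(100, 'A'), (160, 'B')]
```

Running the same query backward in time over retained footage is what turns a single sighting into the kind of origin-to-destination history described above; the hard parts in practice are embedding quality, camera calibration, and the sheer scale of the search.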
Three: Inferring Pending Harm and Stress
As digital video cameras radically increase their capture resolution – moving from an average of 640×480 to 3840×2160 (4K Ultra HD) and beyond over the coming years – facial recognition accuracy will obviously improve, and we can expect other personal traits and characteristics to be accurately inferred.
Studies of human movement and tooling for life-like movements in digital movies naturally lend themselves to identifying changes in an individual’s movements and inferring certain things. For example, being able to guess the relative weight and flexibility of contents within a backpack being worn by the surveilled individual, the presence of heavy objects being carried within a suit jacket, or changes in the contents of a bag carried in-hand.
If we also assume that the surveilled individual will be caught by multiple cameras from different locations, the combination of angles and perspective will further help define the unique characteristics and load of the individual. Police forces could use that intelligence to infer the presence of weapons for example.
Higher resolution monitoring also lends itself to identifying and measuring other more personal attributes of the person being monitored. For example, details on the type and complexity of jewelry being worn, tattoos, or unique visible identifiers.
Using physical tells such as sweat density, head-bobbing (see an earlier blog – Body Worn Camera Technologies), and heart-rate, it will be possible to identify whether the person is stressed, under duress, or has recently exerted themselves.
Four: The Reverse Paradigm
Such technologies are not exclusive to police forces and government departments. It is inevitable that these technologies will also be leveraged by civilians and criminals alike – bringing a rather new dynamic to future policing.
For example, what happens if anti-riot police cannot obfuscate their identity during a riot – despite balaclavas and protective helmets? If every retaliatory baton strike, angered shield charge, tear gas spray, or taser use can be individually attributed to the officer that did it (and the officer can be identified by face capture or uniform number), officers become individually responsible and ultimately accountable for their actions.
Imagine for instance that every officer present now had a virtual folder of video captures detailing their actions during the riots. With little effort, and likely a little crowd-sourcing, each officer’s identity would be discovered and publicly associated with their actions.
It would be reasonable to expect that police officers would adjust their responses to riot situations – and it’s likely that many would not want to expose themselves to such risk. While it is possible new or existing laws could be used to protect officers from most legal consequences of “doing their job”, the social consequences may be a different story.
We’ve already seen how doxing can severely affect the life and safety of its victims. It would be reasonable to assume that some percentage of a rioting population would be only too eager to publish a police officer’s “crimes” along with their identity, address, and any other personal data they could find. Would we expect police officers with young families to take on the risk of riotous agitators arriving at their family’s doorstep, looking for vengeance?
Five: Ubiquitous Consumer-led Video Surveillance
A question will inevitably be raised as to whether the ability to construct digital trails or infer motivations and harm will remain restricted to police and government entities. After all, they’re the ones with the budget and authority to install the cameras.
These challenges may be overcome. For example, new legal test cases may force governments to make such video feeds publicly available – after all, funding is through public money. We’ve seen such shifts in technology access before – e.g. GPS, satellite mapping, satellite imagery.
An interesting model for consumer video capture (and analytics) that could become even more complete and ubiquitous may be relatively simple.
Just as today’s home security systems have advanced to include multiple internal and external high-resolution video feeds to the cloud, maybe the paradigm changes again. Instead of a managed security monitoring service, things become community based and crowd sourced.
For example, let’s say that for each high-resolution video camera you install and connect to a community cloud service, you gain improved access to the accumulated mesh of every other contributing camera. Overlaying that mesh of camera feeds, additional services augment the video with social knowledge and identity information, and the ability to trace movements around an event – just like the police could – for a small fee. The resultant social dynamics would be very interesting… does privacy end at your doorstep? If every crime in a public space is captured and perpetrators labeled, does that make petty and premeditated crime disappear? Does local policing shift to community groups and vigilantes?
Advances in high-resolution digital video cameras and facial recognition will play a critical role in how policing society changes over the next couple of decades.
The anticipated advancements will certainly make it easier for police forces to track, identify, and hold accountable those that perpetrate crimes. But such technologies will also inevitably be utilized by those being policed. While the illustrated scenarios began with the recent riots in France, the repercussions for police forces in Mexico facing wealthy cartels would perhaps be more dire.
It is too early to tell whether the ubiquity of these advancing technologies will tilt the balance in one direction or the other, or whether we’ll reach a new technology stalemate. Accountability for an individual’s actions – backed by proof – feels like the right societal movement, but it opens doors to entirely new forms of abuse.
— Gunter Ollmann