Google aims to improve security of browser engines, third-party Android devices and apps on Google Play
Researchers must also bear the costs of fuzzing in advance, even though there’s no guarantee that their approach will discover any bugs or, if it does, that they’ll receive a reward for finding them. This might deter many of them and, consequently, bugs stay unfixed and exploitable for longer.
That’s why Google is offering $5,000 research grants in the form of Google Compute Engine credits.
Helping third parties in the Android ecosystem
The company is also set on improving the security of the Android ecosystem, and to that point it’s launching the Android Partner Vulnerability Initiative (APVI).
“Until recently, we didn’t have a clear way to process Google-discovered security issues outside of AOSP (Android Open Source Project) code that are unique to a much smaller set of specific Android OEMs,” the company explained.
“The APVI […] covers a wide range of issues impacting device code that is not serviced or maintained by Google (these are handled by the Android Security Bulletins).”
Issues that have already been discovered, as well as those yet to be unearthed, are being shared through this bug tracker.
Simultaneously, the company is looking for a Security Engineering Manager in Android Security who will, among other things, lead a team that “will perform application security assessments against highly sensitive, third party Android apps on Google Play, working to identify vulnerabilities and provide remediation guidance to impacted application developers.”
Columbia University researchers have released Crylogger, an open source dynamic analysis tool that shows which Android apps feature cryptographic vulnerabilities.
They also used it to test 1,780 popular Android apps from the Google Play Store, and the results were abysmal:
- All apps break at least one of the 26 crypto rules
- 1,775 apps use an unsafe pseudorandom number generator (PRNG)
- 1,764 apps use a broken hash function (SHA1, MD2, MD5, etc.)
- 1,076 apps use the CBC operation mode (which is vulnerable to padding oracle attacks in client-server scenarios)
- 820 apps use a static symmetric encryption key (hardcoded)
Each of the tested apps with an instrumented crypto library was run in Crylogger, which logs the parameters passed to the crypto APIs during execution and then checks their legitimacy offline against a list of crypto rules.
“Cryptographic (crypto) algorithms are the essential ingredients of all secure systems: crypto hash functions and encryption algorithms, for example, can guarantee properties such as integrity and confidentiality,” the researchers explained.
“A crypto misuse is an invocation to a crypto API that does not respect common security guidelines, such as those suggested by cryptographers or organizations like NIST and IETF.”
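The kind of offline check described above can be sketched in a few lines. The rule set, API names, and log format below are hypothetical simplifications for illustration, not Crylogger’s actual code: the sketch takes a log of crypto API calls and flags some of the misuses listed earlier.

```python
# Hypothetical sketch of an offline crypto-rule check (illustrative only,
# not Crylogger's API): logged calls are matched against simple rules.

BROKEN_HASHES = {"MD2", "MD5", "SHA1"}   # rule: avoid broken hash functions
UNSAFE_MODES = {"ECB", "CBC"}            # rule: avoid ECB; CBC is risky client-server

def check_call(api, params):
    """Return a list of rule violations for one logged crypto API call."""
    violations = []
    if api == "MessageDigest.getInstance" and params.get("algorithm") in BROKEN_HASHES:
        violations.append(f"broken hash function: {params['algorithm']}")
    if api == "Cipher.getInstance":
        # transformation strings look like "AES/CBC/PKCS5Padding"
        parts = params.get("transformation", "").split("/")
        if len(parts) > 1 and parts[1] in UNSAFE_MODES:
            violations.append(f"unsafe block cipher mode: {parts[1]}")
    if api == "SecretKeySpec" and params.get("key_source") == "constant":
        violations.append("static (hardcoded) symmetric key")
    return violations

log = [
    ("MessageDigest.getInstance", {"algorithm": "MD5"}),
    ("Cipher.getInstance", {"transformation": "AES/CBC/PKCS5Padding"}),
    ("Cipher.getInstance", {"transformation": "AES/GCM/NoPadding"}),
]
report = [v for api, params in log for v in check_call(api, params)]
```

In this toy log, the MD5 digest and the CBC cipher mode would be flagged, while the AES-GCM call passes.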
To confirm that the cryptographic vulnerabilities flagged by Crylogger can actually be exploited, the researchers manually reverse-engineered 28 of the tested apps and found that 14 of them are vulnerable to attacks (even though some issues may be considered out-of-scope by developers because they require privilege escalation for effective exploitation).
Comparing the results of Crylogger (a dynamic analysis tool) with those of CryptoGuard (an open source static analysis tool for detecting crypto misuses in Java-based applications) when testing 150 apps, the researchers found that the former flags some issues that the latter misses, and vice versa.
The best thing for developers would be to test their applications with both before they offer them for download, the researchers noted. Also, Crylogger can be used to check apps submitted to app stores.
“Using a dynamic tool on a large number of apps is hard, but Crylogger can refine the misuses identified with static analysis because, typically, many of them are false positives that cannot be discarded manually on such a large number of apps,” they concluded.
As noted at the beginning of this piece, too many apps break too many cryptographic rules. What’s more, too many app and library developers are choosing to effectively ignore these problems.
The researchers emailed 306 developers of Android apps that violate 9 or more of the crypto rules: only 18 developers answered back, and only 8 of them continued to communicate after that first email and provided useful feedback on their findings. They also contacted 6 developers of popular Android libraries and received answers from 2 of them.
The researchers chose not to reveal the names of the vulnerable apps and libraries because they fear that information would benefit attackers, but they shared enough to show that these issues affect all types of apps: from media streaming and newspaper apps, to file and password managers, authentication apps, messaging apps, and so on.
Among the rights bestowed upon EU citizens by the General Data Protection Regulation (GDPR) is the right to access their personal data stored by companies (i.e., data controllers) and information about how this personal data is being processed. A group of academics from three German universities has decided to investigate whether and how mobile app vendors respond to subject access requests, and the results of their four-year undercover field study are dispiriting.
The results of the study
“In three iterations between 2015 and 2019, we sent subject access requests to vendors of 225 mobile apps popular in Germany. Throughout the iterations, 19 to 26% of the vendors were unreachable or did not reply at all. Our subject access requests were fulfilled in 15 to 53% of the cases, with an unexpected decline between the GDPR enforcement date and the end of our study,” they shared.
“The remaining responses exhibit a long list of shortcomings, including severe violations of information security and data protection principles. Some responses even contained deceptive and misleading statements (7 to 13%). Further, 9% of the apps were discontinued and 27% of the user accounts vanished during our study, mostly without proper notification about the consequences for our personal data.”
The researchers – Jacob Leon Kröger from TU Berlin (Weizenbaum Institute), Jens Lindemann from the University of Hamburg, and Prof. Dr. Dominik Herrmann from the University of Bamberg – made sure to test a representative sample of iOS and Android apps: popular and less popular, from a variety of app categories, and from vendors based in Germany, the EU, and outside of the EU.
They disguised themselves as an ordinary German user, created accounts needed for the apps to work, interacted with each app for about ten minutes, and asked app providers for information about their stored personal data (before and after GDPR enforcement).
They also used a different request text for each round of inquiries. The first one was more informal, while the last two were more elaborate and included references to relevant data protection laws and a warning that the responsible data protection authorities would be notified in the case of no response.
“While we cannot precisely determine their individual influence, it can be assumed that both the introduction of the GDPR as well as the more formal and threatening tone of our inquiry in [the latter two inquiries] had an impact on the vendors’ behavior,” they noted.
Solving the problem
Smartphones are ubiquitous and most users use a variety of mobile apps, which usually collect personal user data and share it with third parties.
In theory, the GDPR should force mobile app vendors to provide information about this data and how it’s used to users. In practice, though, many app vendors are obviously hoping that users won’t care enough about it and won’t make a stink when they don’t receive a satisfactory reply, and that GDPR regulators won’t have the resources to enforce the regulation.
“We (…) suspected that some vendors merely pretended to be poorly reachable when they received subject access requests – while others actually had insufficient resources to process incoming emails,” the researchers noted.
“To confirm this hypothesis, we tested how the vendors that failed to respond to our requests reacted to non-privacy related inquiries. Using another (different) fake identity, we emailed the vendors who had not replied [to the first inquiry] and [to the third inquiry], expressing interest in promoting their apps on a personal blog or YouTube channel. Out of the group of initial non-responders, 31 % [first inquiry] and 22 % [third inquiry] replied to these dummy requests, many of them within a few hours, proving that their email inbox was in fact being monitored.”
The researchers believe the situation for users can be improved by authorities doing random compliance checks and offering better support for data controllers through industry-specific guidelines and best practices.
“In particular, there should be mandatory standard interfaces for providing data exports and other privacy-related information to data subjects, obviating the need for the manual processing of GDPR requests,” they concluded.
The high energy consumption of artificial neural networks’ learning activities is one of the biggest hurdles for the broad use of AI, especially in mobile applications. One approach to solving this problem can be gleaned from knowledge about the human brain.
TU Graz computer scientists Robert Legenstein and Wolfgang Maass (from left) © Lunghammer – TU Graz
Although the human brain has the computing power of a supercomputer, it needs only 20 watts, a millionth of the energy a supercomputer consumes. One of the reasons for this is the efficient transfer of information between neurons in the brain. Neurons send short electrical impulses (spikes) to other neurons – but, to save energy, only as often as absolutely necessary.
Event-based information processing
A working group led by the two computer scientists Wolfgang Maass and Robert Legenstein of TU Graz has adopted this principle in developing the new machine learning algorithm e-prop (short for e-propagation).
Researchers at the Institute of Theoretical Computer Science, which is also part of the European lighthouse project Human Brain Project, use spikes in their model for communication between neurons in an artificial neural network.
The spikes only become active when they are needed for information processing in the network. Learning is a particular challenge for such sparsely active networks, since longer observations are needed to determine which neuron connections improve network performance.
Previous methods achieved too little learning success or required enormous storage space. E-prop now solves this problem by means of a decentralized method copied from the brain, in which each neuron documents when its connections were used in a so-called e-trace (eligibility trace).
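The eligibility-trace idea can be sketched in a few lines of NumPy. This is an illustrative toy, not the published e-prop algorithm: the decay factor, spike statistics, and constant learning signal are invented for the example. The point is that each synapse keeps only a local running record of its recent use, and a broadcast learning signal turns that trace into a weight update, with no global history of network activity stored anywhere.

```python
import numpy as np

def update_trace(trace, presyn_spike, decay=0.9):
    # the trace decays over time and is bumped whenever the connection is used
    return decay * trace + presyn_spike

rng = np.random.default_rng(0)
n_steps, n_syn = 50, 4
traces = np.zeros(n_syn)
spikes = rng.random((n_steps, n_syn)) < 0.1   # sparse presynaptic spikes

weights = np.zeros(n_syn)
lr = 0.01
for t in range(n_steps):
    traces = update_trace(traces, spikes[t].astype(float))
    learning_signal = 1.0                      # stand-in for the broadcast error signal
    weights += lr * learning_signal * traces   # local, online weight update
```

Because the update at each step depends only on the current trace and the current learning signal, the network never needs to replay or store its full activity history.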
The method is roughly as powerful as the best and most elaborate other known learning methods.
Online instead of offline
With many of the machine learning techniques currently in use, all network activities are stored centrally and offline so that, every few steps, it can be traced how the connections were used during the calculations.
However, this requires constant data transfer between the memory and the processors – one of the main reasons for the excessive energy consumption of current AI implementations. e-prop, on the other hand, works completely online and does not require separate memory even in real-world operation, making learning much more energy efficient.
Driving force for neuromorphic hardware
Maass and Legenstein hope that e-prop will drive the development of a new generation of mobile learning computing systems that no longer need to be programmed but learn according to the model of the human brain and thus adapt to constantly changing requirements.
The goal is to no longer have these computing systems learn energy-intensively exclusively via a cloud, but to efficiently integrate the greater part of the learning ability into mobile hardware components and thus save energy.
First steps to bring e-prop into the application have already been made. For example, the TU Graz team is working together with the Advanced Processor Technologies Research Group (APT) of the University of Manchester in the Human Brain Project to integrate e-prop into the neuromorphic SpiNNaker system, which was developed there.
At the same time, TU Graz is working with researchers from the semiconductor manufacturer Intel to integrate the algorithm into the next version of Intel’s neuromorphic chip Loihi.
A University of Texas at Dallas study of 100 mobile apps for kids found that 72 violated a federal law aimed at protecting children’s online privacy.
Dr. Kanad Basu, assistant professor of electrical and computer engineering in the Erik Jonsson School of Engineering and Computer Science and lead author of the study, along with colleagues elsewhere, developed a tool that can determine whether an Android game or other mobile app complies with the federal Children’s Online Privacy Protection Act (COPPA).
The researchers introduced and tested their “COPPA Tracking by Checking Hardware-Level Activity,” or COPPTCHA, tool in a study. The tool was 99% accurate. Researchers continue to improve the technology, which they plan to make available for download at no cost.
Games and other apps that violate COPPA pose privacy risks
Basu said games and other apps that violate COPPA pose privacy risks that could make it possible for someone to determine a child’s identity and location. He said the risk is heightened as more people are accessing apps from home, rather than public places, due to the COVID-19 pandemic.
“Suppose the app collects information showing that there is a child on Preston Road in Plano, Texas, downloading the app. A trafficker could potentially get the user’s email ID and geographic location and try to kidnap the child. It’s really, really scary,” Basu said.
Apps can access personal identifiable information, including names, email addresses, phone numbers, location, audio and visual recordings, and unique identifiers for devices such as an international mobile equipment identity (IMEI), media access control (MAC) addresses, Android ID and Android advertising ID.
The advertising ID, for example, allows app developers to collect information on users’ interests, which they can then sell to advertisers.
“When you download an app, it can access a lot of information on your cellphone,” Basu said. “You have to keep in mind that all this info can be collected by these apps and sent to third parties. What do they do with it? They can pretty much do anything. We should be careful about this.”
Protecting children’s online privacy
The researchers’ technique accesses a device’s special-purpose register, a type of temporary data-storage location within a microprocessor that monitors various aspects of the microprocessor’s function. Whenever an app transmits data, the activity leaves footprints that can be detected by the special-purpose register.
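The idea can be reduced to watching for bursts in the per-interval deltas of a transmit-related counter. The sketch below is a deliberate simplification with invented sample values and threshold; COPPTCHA’s actual detection is more sophisticated, but the footprint-spotting principle is the same.

```python
# Toy illustration of the COPPTCHA idea: data transmission leaves a
# footprint in hardware counters (special-purpose registers), so a burst
# of transmit-related counter activity flags a possible transmission.
# Counter samples and threshold below are invented for illustration.

def transmission_detected(counter_samples, threshold=1000):
    """counter_samples: per-interval deltas of a transmit-related counter."""
    return any(delta > threshold for delta in counter_samples)

idle = [12, 3, 40, 8]          # app idle: little counter activity
leaking = [15, 5, 4200, 31]    # burst consistent with sending data out
```

In this toy, the idle trace stays quiet while the burst in the second trace would be flagged for further inspection.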
COPPA requires that websites and online services directed to children obtain parental consent before collecting personal information from anyone younger than 13; however, as Basu’s research found, many popular apps do not comply. He found that many popular games designed specifically for young children revealed users’ Android IDs, Android advertising IDs and device descriptions.
Basu recommends that parents use caution when downloading or allowing children to download apps.
“If your kid asks you to download a popular game app, you’re likely to download it,” Basu said. “A problem with our society is that many people are not aware of — or don’t care about – the threats in terms of privacy.”
Basu advises keeping downloads to a minimum.
“I try to limit my downloading of apps as much as possible,” Basu said. “I don’t download apps unless I need to.”
Security researchers have analyzed contact-tracing mobile apps from around the globe and found that their developers have generally failed to implement suitable security and privacy protections.
The results of the analysis
In an effort to stem the spread of COVID-19, governments are aiming to provide their citizenry with contact-tracing mobile apps. But, whether they are built by a government entity or by third-party developers contracted to do the job, security has largely taken a backseat to speed.
Guardsquare researchers have unpacked and decompiled 17 Android contact-tracing apps from 17 countries to see whether their developers implemented name obfuscation and string, asset/resource, and class encryption. They’ve also checked whether the apps will run on rooted devices or emulators (virtual devices).
- Only 41% of the apps have root detection
- Only 41% include some level of name obfuscation
- Only 29% include string encryption
- Only 18% include emulator detection
- Only 6% include asset/resource encryption
- Only 6% include class encryption.
The percentages vary according to region. Grant Goodes, Chief Scientist at Guardsquare, made sure to note that they have not checked all existing contact-tracing apps, but that the sample they did test “provides a window into the security flaws most contact tracing apps contain.”
Security promotes trust
These protections should make it difficult for malicious actors to tamper with and “trojanize” the legitimate apps.
Name obfuscation, for example, hides identifiers in the application’s code to prevent hackers from reverse engineering and analyzing source code. String encryption prevents hackers from extracting API keys and cryptographic keys included in the source code, which could be used by attackers to decrypt sensitive data (for identity theft, blackmailing, and other purposes), or to spoof communications to the server (to disrupt the contact-tracing service).
Asset/resource encryption should prevent hackers from accessing/reusing files that the Android OS uses to render the look and feel of the application (e.g., screen-layouts, internationalized messages, etc.) and custom/low-level files that the application may need for its own purposes.
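To illustrate what string encryption buys in practice, here is a toy sketch. The XOR scheme, key, and token below are invented stand-ins for the real ciphers hardening tools use: the shipped binary contains only an encrypted blob plus a small decrypt routine, so the plaintext secret never appears as a readable literal an attacker could extract by decompiling the app.

```python
# Toy illustration of string encryption as used by app-hardening tools.
# The XOR scheme and key are invented stand-ins for real ciphers.

def xor_bytes(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

KEY = b"\x5a\xc3\x91"                                      # hypothetical per-build key
ENCRYPTED_API_KEY = xor_bytes(b"sk_live_example_token", KEY)  # what ships in the binary

def get_api_key() -> str:
    # the plaintext exists only in memory, at call time
    return xor_bytes(ENCRYPTED_API_KEY, KEY).decode()
```

An attacker running `strings` over such a binary would find only the encrypted blob, not the key material itself (though a determined reverser can still recover it, which is why this raises the bar rather than eliminating the risk).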
These security and privacy protections are important for every mobile app, not just contact-tracing apps, Goodes noted, but they are particularly salient for the latter, since some of them are mandatory for citizens to use and since their efficacy hinges on widespread adoption.
“When security flaws are publicized, the whole app is suddenly distrusted and its utility wanes as users drop off. In the case of countries who build their own apps, this can erode citizen trust in the government as well, which further increases public health risks,” he added.
The increased use of mobile banking apps due to the COVID-19 pandemic is sure to be followed by an increased prevalence of mobile banking threats: fake banking apps and banking Trojans disguised as those apps, the FBI has warned.
The pandemic and the resulting social distancing brought about many changes. Among them is a preference for using payment cards and electronic funds transfers instead of cash and an increased use of mobile devices to conduct banking activities.
“Studies of US financial data indicate a 50 percent surge in mobile banking since the beginning of 2020. Additionally, studies indicate 36 percent of Americans plan to use mobile tools to conduct banking activities, and 20 percent plan to visit branch locations less often,” the FBI pointed out.
Cyber criminals go where the money goes, so the agency expects them to increase their efforts to surreptitiously deliver information-stealing apps and banking Trojans to mobile users.
Banking Trojans are usually disguised as other popular apps – mobile games, utility apps, contact-tracing apps, etc. – while fake banking apps are apps that are made to look like the real deal. Both will harvest login credentials and, increasingly, second authentication factors (one-time passcodes) delivered via SMS or authenticator apps.
The FBI advises users to be careful when installing new apps. Third-party app stores should be avoided, but even official ones like Google Play can harbor malicious apps that have made it through the vetting process by employing different tricks to hide their malicious nature.
If you want to be sure that you’ll download the right mobile banking app, your best bet is to visit your bank’s website and download the app from there or follow the link it provides to the official app store where the app is hosted.
When downloading any new app, users should check the reviews and the provided developer info. They should also critically evaluate the permissions the app requests and ditch it if it asks for permissions it shouldn’t have (e.g., a wallpaper app that wants to access the user’s contacts or SMS messages).
The FBI also advises users to choose unique, strong passwords for banking apps, to use a password manager or password management service to “remember” them, and to enable two-factor or multi-factor authentication on devices and accounts where possible.
“Use strong two-factor authentication if possible via biometrics, hardware tokens, or authentication apps,” the agency urged, and warned not to give two-factor passcodes to anyone over the phone or via text.
“If you encounter an app that appears suspicious, exercise caution and contact the financial institution. Major financial institutions may ask for a banking PIN, but will never ask for your username and password over the phone,” the FBI added.
“Check your bank’s policies regarding online and app account security. If the phone call seems suspicious, hang up and call the bank back at the customer service number posted on their website.”
A team of cybersecurity researchers has discovered that a large number of mobile apps contain hardcoded secrets allowing others to access private data or block content provided by users.
Hidden behaviors within the app
The study found that apps on mobile phones might have hidden or harmful behaviors about which end users know little to nothing, said Zhiqiang Lin, an associate professor of computer science and engineering at The Ohio State University and senior author of the study.
Typically, mobile apps engage with users by processing and responding to user input, Lin said. For instance, users often need to type certain words or sentences, or click buttons and slide screens. Those inputs prompt an app to perform different actions.
For this study, the research team evaluated 150,000 apps. They selected the top 100,000 based on the number of downloads from the Google Play store, the top 20,000 from an alternative market, and 30,000 from pre-installed apps on Android smartphones.
They found that 12,706 of those apps, about 8.5 percent, contained something the research team labeled “backdoor secrets” – hidden behaviors within the app that accept certain types of content to trigger behaviors unknown to regular users.
They also found that some apps have built-in “master passwords,” which allow anyone with that password to access the app and any private data contained within it. And some apps, they found, had secret access keys that could trigger hidden options, including bypassing payment.
“Both users and developers are all at risk if a bad guy has obtained these ‘backdoor secrets,’” Lin said. In fact, he said, motivated attackers could reverse engineer the mobile apps to discover them.
Reverse engineering is a threat
Qingchuan Zhao, a graduate research assistant at Ohio State and lead author of this study, said that developers often wrongly assume reverse engineering of their apps is not a legitimate threat.
“A key reason why mobile apps contain these ‘backdoor secrets’ is because developers misplace their trust,” Zhao said. To truly secure their apps, he said, developers need to perform security-relevant user-input validation and push their secrets to backend servers.
The team also found another 4,028 apps – about 2.7 percent – that blocked content containing specific keywords subject to censorship, cyberbullying or discrimination. That apps might limit certain types of content was not surprising – but the way they did it was: the validation was performed locally instead of remotely, Lin said.
“On many platforms, user-generated content may be moderated or filtered before it is published,” he said, noting that several social media sites, including Facebook, Instagram and Tumblr, already limit the content users are permitted to publish on those platforms.
“Unfortunately, there might exist problems – for example, users know that certain words are forbidden from a platform’s policy, but they are unaware of examples of words that are considered as banned words and could result in content being blocked without users’ knowledge,” he said.
“Therefore, end users may wish to clarify vague platform content policies by seeing examples of banned words.”
In addition, he said, researchers studying censorship may wish to understand what terms are considered sensitive. The team developed an open source tool, named InputScope, to help developers understand weaknesses in their apps and to demonstrate that the reverse engineering process can be fully automated.
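The core idea behind such a scan can be sketched as follows. This is a simplified, hypothetical stand-in for InputScope (which analyzes decompiled bytecode rather than running regexes over source text): comparisons of user input against hardcoded string literals are candidate backdoor secrets, since the constant on the other side of the comparison is recoverable from the app itself.

```python
import re

# Toy illustration of input-triggered "backdoor secrets": if an app compares
# user input against a hardcoded constant, a scan of the (decompiled) code
# can recover that constant. The snippet of Java-like source is invented.

SOURCE = '''
if (userInput.equals(storedPassword)) { login(); }
if (userInput.equals("debug_master_2020")) { adminLogin(); }   // backdoor
if (couponCode.equals("FREESTUFF")) { bypassPayment(); }       // backdoor
'''

def find_hardcoded_comparisons(src: str):
    # flag .equals(...) calls whose argument is a string literal
    return re.findall(r'\.equals\("([^"]+)"\)', src)

secrets = find_hardcoded_comparisons(SOURCE)
```

The comparison against the variable `storedPassword` is not flagged, while the two literal comparisons are: exactly the distinction between legitimate input validation and a hidden trigger.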
Hackers are using hidden mobile apps, third-party login and counterfeit gaming videos to target consumers, according to McAfee.
Worldwide detections of LeifAccess, 2019
Last year, hackers targeted consumers with a wide variety of methods, from backdoors to cryptocurrency mining. They have expanded the ways they hide their attacks, making them increasingly difficult to identify and remove, which suggests 2020 will be the year of mobile sneak attacks.
Hidden apps: The most active mobile threat
Hidden apps are the most active mobile threat facing consumers, generating nearly 50% of all malicious activities in 2019 – a 30% increase from 2018. Hackers continue to target consumers through the channels they spend the most time on: their devices, as the average person globally is expected to own 15 connected devices by 2030.
Hidden apps take advantage of unsuspecting consumers in multiple ways, including taking advantage of consumers using third-party login services or serving unwanted ads.
“Consumers are connected more than ever, and as we look at the current security landscape, as well as future risks, we want to make sure we are doing everything to help consumers protect what matters most to them: their personal data, as well as their family and friends,” said Terry Hicks, Executive Vice President, Consumer Business Group at McAfee.
“Mobile threats are playing a game of hide and steal, and we will continue to empower consumers to safeguard their most valued assets and data.”
Hackers use gaming popularity to spoof consumers
Hackers are taking advantage of the popularity of gaming by distributing their malicious apps via links in popular gamer chat apps and cheat videos by creating their own content containing links to fake apps. These apps masquerade as genuine with icons that closely mimic those of the real apps but serve unwanted ads and collect user data.
Researchers uncovered that popular apps like FaceApp, Spotify, and Call of Duty all have fake versions trying to prey on unsuspecting consumers, especially younger users.
New mobile malware uses third-party sign-on to cheat app ranking systems
Researchers have uncovered new information on mobile malware dubbed LeifAccess, also known as Shopper. This malware takes advantage of the accessibility features in Android to create accounts, download apps, and post reviews using names and emails configured on the victim’s device.
Researchers observed apps based on LeifAccess being distributed via social media, gaming platforms, malvertising, and gamer chat apps. Fake warnings are used to get the user to activate accessibility services, enabling the full range of the malware’s capabilities.
Unique approach to steal sensitive data through legitimate transit app
A series of South Korean transit apps was compromised with a fake library and plugin, dubbed MalBus, that could exfiltrate confidential files. The attack was hidden in a legitimate South Korean transit app by hacking the original developer’s Google Play account.
The app series has provided a range of information for each region of South Korea, such as bus stop locations, route maps, and schedule times, for more than five years. MalBus represents a different attack method, as hackers went after the account of a legitimate developer of a popular app with a solid reputation.
“There exists a growing trend for many apps to remain hidden, stealing precious resources and important data from the device that acts as the remote control to consumers’ digital world,” said Raj Samani, McAfee Fellow and Chief Scientist.
“Now, more than ever, it is critical consumers make themselves aware of modern threats and the steps they can take to defend themselves against them, such as staying on legitimate app stores and reading reviews carefully.”
Google has pulled three malicious apps from Google Play, one of which exploits a recently patched kernel privilege escalation bug in Android (CVE-2019-2215) to install the app aimed at spying on users.
The existence of CVE-2019-2215 was discovered in late 2019 when it was spotted being exploited in the wild.
Researchers with Google’s Threat Analysis Group and other external parties believe that the exploit originated with NSO Group, an Israel-based company that specializes in lawful surveillance software and whose Pegasus mobile spyware is abused by oppressive regimes to spy on “enemies”.
At the time, the Android team considered the bug to be of high severity and pointed out that a malicious application has to be installed on the target device to perform the exploit.
About the newly discovered malicious apps
Trend Micro researchers discovered three malicious apps on Google Play:
- Camero – disguised as a photo app
- FileCrypt Manager – disguised as a file manager app
- callCam – disguised as a camera calling app.
The first two acted as droppers for the third, which performed the actual spying.
The Camero app would download a DEX file from a C&C, which would then download the callCam APK file and use the CVE-2019-2215 exploit to root the device, install the app and launch it without any user interaction or the user’s knowledge.
“This approach (…) only works on Google Pixel (Pixel 2, Pixel 2 XL), Nokia 3 (TA-1032), LG V20 (LG-H990), Oppo F9 (CPH1881), and Redmi 6A devices,” the researchers noted.
The FileCrypt Manager app would ask users to enable Android Accessibility Services and, if they did, would install and launch the callCam app.
The app callCam hides its icon after being launched, so users wouldn’t notice it.
It collects, encrypts, and sends back to the C&C server information such as:
- Battery status
- Files on device
- Installed app list
- Device information
- Sensor information
- Camera information
- Wifi information
- Data of WeChat, Outlook, Twitter, Yahoo Mail, Facebook, Gmail, and Chrome
Apps used by state-sponsored APT?
State-sponsored hackers occasionally take advantage of Google Play to deliver malicious apps to their targets.
This latest malicious trio has been tied to SideWinder, a threat actor group known to have attacked Pakistani military targets in the past, as the apps connect to C&C servers that are suspected to be part of SideWinder’s infrastructure.
Google provided a patch for CVE-2019-2215 almost immediately after the flaw was first spotted being exploited, but it’s unlikely that it has reached all Android users out there.
As always, users are advised to be careful about the apps they install on their devices. Google Play may host far fewer malicious apps than a random third-party app marketplace, but the threat, however small, persists.
The banking and financial services sector is struggling with a skills shortage along with the sheer volume of threats and alerts as it continues its ongoing battle against cybercrime, according to Blueliv.
With financial organizations a prime target for attacks, preventing fraud and data leakages is key to the sector’s security strategies – but it is getting harder as cyberthreats become increasingly diverse, sophisticated and malicious.
Rise in banking Trojans
Roughly a third of respondents are concerned about the impact banking Trojans (31 percent) and mobile malware (28 percent) will have on financial services organizations and their customers in 2020.
Tracking the latest evolving threats, researchers observed a 283 percent increase in botnets relating to Trickbot as well as a 130 percent increase in Dridex botnets. These botnets are linked to the distribution of banking Trojans and other malware families targeting the financial services sector.
The report also highlights that malware targeting mobile apps is one of the most rapidly developing threats to the financial services sector, with functionalities that allow criminals to gather user credentials as well as steal funds from mobile users’ bank accounts.
This is partly driven by the fact that cybercriminals can now easily buy malware builders in underground forums, and that these often include advanced evasion techniques so the malware remains undetected on infected devices.
Key security priorities for financial services include fraud prevention
While the financial services sector – by its very nature – has some of the most mature cyberdefense strategies and is ahead of many other industries in detecting and preventing economic crime, weak spots remain in some organizations’ fraud risk assessments. This is underlined by the fact that 35 percent of poll respondents named fraud prevention the most crucial element to an ongoing cybersecurity strategy.
Unauthorized transmission of data from within an organization to external recipients is another key concern, with 31 percent of respondents considering the prevention of data leaks the most important.
Just under a quarter (24 percent) would focus their security strategy around regulation and compliance requirements such as GDPR. At the same time, a similar share of respondents (25 percent) named regulatory issues as the biggest challenge for financial services institutions developing ongoing security programs.
Visibility of threats is a challenge
According to the poll, financial services organizations encounter a range of issues as they build their security programs – the most pressing being a shortage of skills (28 percent), followed by the high volume of threats and alerts (26 percent) and a lack of visibility into cyberthreats (20 percent).
This is hardly surprising: as financial services institutions (FSIs) embrace digital processes and new customer interaction channels, their attack surface grows, making it harder to keep on top of threats ranging from point-of-sale (PoS) and ATM malware to mobile app malware and card skimmers.
“Organizations in the financial sector face a constantly changing threat landscape,” commented Daniel Solís, CEO and founder, Blueliv.
“Business priorities have shifted and digital risk management is now central. Because they are such high-value targets for cybercriminal activity, it is imperative that financial services organizations enhance their security priorities, and monitor what is happening both inside and outside their networks in real-time to create effective mitigation strategies before, during and after an attack.”
Solís continued, “FSI security teams can be easily overwhelmed by the number of threat alerts they receive which can very quickly result in alert fatigue and desensitization to real, preventable threats.
“Threat intelligence can address the cyber skills gap through continuous automated monitoring combined with human resource to provide context, helping FSIs develop highly-targeted threat detection, prevention and investigation capabilities.”
A vulnerability in the Google Camera app may have allowed attackers to surreptitiously take pictures and record videos even if the phone was locked or its screen was off, Checkmarx researchers have discovered. Attackers would have also been able to eavesdrop on and record phone conversations, silence the camera shutter, transfer captured photos, videos and data to their C&C server, and pull the device’s GPS location from photos’ metadata.
The post Android camera apps could be hijacked to spy on users appeared first on Help Net Security.
Mobile apps that work with Bluetooth devices have an inherent design flaw that makes them vulnerable to hacking, researchers have found. Where is the issue? The problem lies in the way Bluetooth Low Energy devices communicate with the mobile apps that control them, said Zhiqiang Lin, associate professor of computer science and engineering at The Ohio State University. “There is a fundamental flaw that leaves these devices vulnerable – first when they are initially …
The post The way Bluetooth devices ‘talk’ to apps leaves them vulnerable appeared first on Help Net Security.