Spin Technology adds new security features to its SpinOne for Google Workspace and Office 365

Spin Technology announced the next generation of SpinOne, an AI-powered ransomware protection and backup solution for Google Workspace and Office 365. In the last year alone, 51 percent of organizations were targeted by ransomware, and cybersecurity continues to be a top concern for business leaders.

With advanced new security features, a completely redesigned user interface, and improved platform functionality, the latest version of SpinOne will help organizations better protect against ransomware attacks in the cloud.

Over the last seven months, cloud adoption has accelerated as the number of remote workers spiked dramatically due to the COVID-19 pandemic. This increased reliance on the cloud has resulted in more ransomware attacks on public cloud and SaaS services. In fact, according to a recent report, six in ten successful attacks involve data in the public cloud.

SpinOne offers industry-leading ransomware protection for G Suite and Microsoft 365, backup capabilities, and application management.

“As organizations add additional cloud services, they need solutions that are simple to deploy and manage. These updates make it even easier for IT and security professionals to protect their employees from the risks associated with ransomware, all while allowing them to scale the SpinOne platform over time,” said Dmitry Dontov, Chief Executive Officer.

“As G Suite shifts to Google Workspace, SpinOne continues to protect your organization’s data against ransomware and now includes additional summaries that explain the levels of risk and required action. In addition, we’ve enhanced our cloud monitoring capabilities and introduced advanced auditing.”

Comprehensive new security summaries
  • From the dashboard view, an admin can now quickly scan their Google Workspace environment and see which security incidents have affected their data.
  • Each data feed is summarized in a widget outlining security incidents, incident history, account summary, and more.
Cloud monitoring
  • SpinOne Cloud Monitor now provides a comprehensive overview of the many activities taking place within Google Workspace, including events such as Data Sharing, Application Installed, and Drive File Deleted.
  • SpinOne now includes six additional cloud monitoring capabilities, detailing the admin activities within the SpinOne platform.
  • The Cloud Monitor Incident Report details actions from users that exceed the rules set by Admins in their policies.
Advanced auditing
  • SpinOne now expands its monitoring of OAuth access, including Android, native, and iOS applications.
  • Historical risk scoring reviews are now expanded, and organizations can review an add-on’s risk over time.
Enhancements to backup and recovery
  • Users and Groups are now separated in the new SpinOne.

APIs are now available for major third-party applications.

Holiday gifts getting smarter, but creepier when it comes to privacy and security

A Hamilton Beach Smart Coffee Maker that could eavesdrop, an Amazon Halo fitness tracker that measures the tone of your voice, and a robot-building kit that puts your kid’s privacy at risk are among the 37 creepiest holiday gifts of 2020 according to Mozilla.


Researchers reviewed 136 popular connected gifts available for purchase in the United States across seven categories: toys & games; smart home; entertainment; wearables; health & exercise; pets; and home office.

They combed through privacy policies, pored over product and app features, and quizzed companies in order to answer questions like: Can this product’s camera, microphone, or GPS snoop on me? What data does the device collect and where does it go? What is the company’s known track record for protecting users’ data?

The guide includes a “Best Of” category, which singles out products that get privacy and security right, while a “Privacy Not Included” warning icon alerts consumers when a product has especially problematic privacy practices.

Meeting minimum security standards

It also identifies which products meet Mozilla’s Minimum Security Standards, such as using encryption and requiring users to change the default password if a password is needed. For the first time, Mozilla also notes which products use AI to make decisions about consumers.

“Holiday gifts are getting ‘smarter’ each year: from watches that collect more and more health data, to drones with GPS, to home security cameras connected to the cloud,” said Ashley Boyd, Mozilla’s Vice President of Advocacy.

“Unfortunately, these gifts are often getting creepier, too. Poor security standards and privacy practices can mean that your connected gift isn’t bringing joy, but rather prying eyes and security vulnerabilities.”

Boyd added: “Privacy Not Included helps consumers prioritize privacy and security when shopping. The guide also keeps companies on their toes, calling out privacy flaws and applauding privacy features.”

What are the products?

37 products were branded with a “Privacy Not Included” warning label, including: Amazon Halo, Dyson Pure Cool, Facebook Portal, Hamilton Beach Smart Coffee Maker, Livescribe Smartpens, NordicTrack T Series Treadmills, Oculus Quest 2 VR Sets, Schlage Encode Smart WiFi Deadbolt, Whistle Go Dog Trackers, Ubtech Jimu Robot Kits, Roku Streaming Sticks, and The Mirror.

22 products were awarded “Best Of” for exceptional privacy and security practices, including: Apple HomePod, Apple iPad, Apple TV 4K, Apple Watch 6, Apple AirPods & AirPods Pro, Arlo Security Cams, Arlo Video Doorbell, Eufy Security Cams, Eufy Video Doorbell, iRobot Roomba i Series, iRobot Roomba s Series, Garmin Forerunner Series, Garmin Venu watch, Garmin Index Smart Scale, Garmin Vivo Series, Jabra Elite Active 85T, Kano Coding Kits, Withings Thermo, Withings Body Smart Scales, Petcube Play 2 & Bites 2, Sonos One SL, and Findster Duo+ GPS pet tracker.

A handful of leading brands, like Apple, Garmin, and Eufy, are excelling at improving privacy across their product lines, while other top companies, like Amazon, Huawei, and Roku, are consistently failing to protect consumers.

Apple products don’t share or sell your data. They take special care to make sure your Siri requests aren’t associated with you. And after facing backlash in 2019, Apple no longer automatically opts users in to human voice review.

Eufy Security Cameras are especially trustworthy. Footage is stored locally rather than in the cloud, and is protected by military-grade encryption. Further, Eufy doesn’t sell their customer lists.

Roku is a privacy nightmare. The company tracks just about everything you do — and then shares it widely. Roku shares your personal data with advertisers and other third parties, it targets you with ads, it builds profiles about you, and more.

Amazon’s Halo Fitness Tracker is especially troubling. It’s packed full of sensors and microphones. It uses machine learning to measure the tone, energy, and positivity of your voice. And it asks you to take pictures of yourself in your underwear so it can track your body fat.

Tech companies want a monopoly on your smart products

Big companies like Amazon and Google are offering a family of networked devices, pushing consumers to buy into one company. For instance: Nest users now have to migrate over to a Google-only platform. Google is acquiring Fitbit.

And Amazon recently announced it’s moving into the wearable technology space. These companies realize that the more data they have on people’s lives, the more lucrative their products can be.

Products are getting creepier, even as they get more secure

Many companies — especially big ones like Google and Facebook — are improving security. But that doesn’t mean those products aren’t invasive. Smart speakers, watches, and other devices are reaching farther into our lives, monitoring our homes, bodies, and travel. And often, consumers don’t have insight or control over the data that’s collected.

Connected toys and pet products are particularly creepy. Amazon’s KidKraft Kitchen & Market is made for kids as young as three — but there’s no transparency into what data it collects. Meanwhile, devices like the Dogness iPet Robot put a mobile, internet-connected camera and microphone in your house — without using encryption.

The pandemic is reshaping some data sharing for the better. Products like the Oura Ring and Kinsa smart thermometer can share anonymized data with researchers and scientists to help track public health and coronavirus outbreaks. This is a positive development — data sharing for the public interest, not just profit.

Encryption-based threats grow by 260% in 2020

New Zscaler threat research reveals the emerging techniques and impacted industries behind a 260-percent spike in attacks using encrypted channels to bypass legacy security controls.


Showing that cybercriminals will not be dissuaded by a global health crisis, attackers targeted the healthcare industry the most. The research revealed that the top industries under attack by SSL-based threats were:

1. Healthcare: 1.6 billion (25.5 percent)
2. Finance and Insurance: 1.2 billion (18.3 percent)
3. Manufacturing: 1.1 billion (17.4 percent)
4. Government: 952 million (14.3 percent)
5. Services: 730 million (13.8 percent)

COVID-19 is driving a ransomware surge

Researchers witnessed a 5x increase in ransomware attacks over encrypted traffic beginning in March, when the World Health Organization declared the virus a pandemic. Earlier research from Zscaler indicated a 30,000 percent spike in COVID-related threats, when cybercriminals first began preying on fears of the virus.

Phishing attacks neared 200 million

As one of the most commonly used attacks over SSL, phishing attempts reached more than 193 million instances during the first nine months of 2020. The manufacturing sector was the most targeted (38.6 percent) followed by services (13.8 percent), and healthcare (10.9 percent).

30 percent of SSL-based attacks spoofed trusted cloud providers

Cybercriminals continue to become more sophisticated in avoiding detection, taking advantage of the reputations of trusted cloud providers such as Dropbox, Google, Microsoft, and Amazon to deliver malware over encrypted channels.

Microsoft remains most targeted brand for SSL-based phishing

Since Microsoft technology is among the most adopted in the world, Zscaler identified Microsoft as the most frequently spoofed brand for phishing attacks, which is consistent with the ThreatLabZ 2019 report. Other popular brands for spoofing included PayPal and Google. Cybercriminals are also increasingly spoofing Netflix and other streaming entertainment services during the pandemic.

“Cybercriminals are shamelessly attacking critical industries like healthcare, government and finance during the pandemic, and this research shows how risky encrypted traffic can be if not inspected,” said Deepen Desai, CISO and VP of Security Research at Zscaler. “Attackers have significantly advanced the methods they use to deliver ransomware, for example, inside of an organization utilizing encrypted traffic. The report shows a 500 percent increase in ransomware attacks over SSL, and this is just one example of why SSL inspection is so important to an organization’s defense.”

Google fixes two actively exploited Chrome zero-days (CVE-2020-16009, CVE-2020-16010)

For the third time in two weeks, Google has patched Chrome zero-day vulnerabilities that are being actively exploited in the wild: CVE-2020-16009 is present in the desktop version of the browser, CVE-2020-16010 in the mobile (Android) version.

About the vulnerabilities (CVE-2020-16009, CVE-2020-16010)

As per usual, Google has refrained from sharing much detail about each of the patched vulnerabilities, so all we know is this: CVE-2020-16009 is an inappropriate implementation flaw in V8, Chrome’s open source … More


Google discloses actively exploited Windows zero-day (CVE-2020-17087)

Google researchers have made public a Windows kernel zero-day vulnerability (CVE-2020-17087) that is being exploited in the wild in tandem with a Google Chrome flaw (CVE-2020-15999) that was patched on October 20.


About CVE-2020-17087

CVE-2020-17087 is a vulnerability in the Windows Kernel Cryptography Driver, and “constitutes a locally accessible attack surface that can be exploited for privilege escalation (such as sandbox escape).”

More technical information has been provided in the Chromium issue tracker entry, which was kept inaccessible to the wider public for the first seven days, but has now been made public.

The researchers have also included PoC exploit code, which has been tested on Windows 10 1903 (64-bit), but they noted that the affected driver (cng.sys) “looks to have been present since at least Windows 7,” meaning that all the other supported Windows versions are probably vulnerable.

Exploitation and patching

Shane Huntley, Director of Google’s Threat Analysis Group (TAG) confirmed that the vulnerability chain is being used for targeted exploitation and that the attacks are “not related to any US election-related targeting.”

The attackers are using the Chrome bug to gain access to the target system and then exploiting CVE-2020-17087 to gain administrator access on it.

A patch for the issue is expected to be released on November 10, as part of Microsoft’s monthly Patch Tuesday effort. “Currently we expect a patch for this issue to be available on November 10,” the researchers noted.

While the bug is serious, the fact that it’s being used in targeted (and not widespread) attacks should reassure most users they’ll be safe until the patch is released.

Also, according to a Microsoft spokesperson, exploitation of the flaw has only been spotted in conjunction with the Chrome vulnerability, which has been patched in Chrome and other Chromium-based browsers (e.g., Opera on October 21, Microsoft Edge on October 22).

Users who have implemented those updates are, therefore, safer still.

What is confidential computing? How can you use it?

What is confidential computing? Can it strengthen enterprise security? Sam Lugani, Lead Security PMM, Google Workspace & GCP, answers these and other questions in this Help Net Security interview.


How does confidential computing enhance the overall security of a complex enterprise architecture?

We’ve all heard about encryption in-transit and at-rest, but as organizations prepare to move their workloads to the cloud, one of the biggest challenges they face is how to process sensitive data while still keeping it private. However, when data is being processed, there hasn’t been an easy solution to keep it encrypted.

Confidential computing is a breakthrough technology which encrypts data in-use – while it is being processed. It creates a future where private and encrypted services become the cloud standard.

At Google Cloud, we believe this transformational technology will help instill confidence that customer data is not being exposed to cloud providers or susceptible to insider risks.

Confidential computing has moved from research projects into worldwide deployed solutions. What are the prerequisites for delivering confidential computing across both on-prem and cloud environments?

Running workloads confidentially will differ based on what services and tools you use, but one thing is a given – organizations don’t want to compromise on usability or performance for the sake of security.

Those running Google Cloud can seamlessly take advantage of the products in our portfolio, Confidential VMs and Confidential GKE Nodes.

All customer workloads that run in VMs or containers today can run confidentially without significant performance impact. The best part is that we have worked hard to simplify the complexity. One checkbox—it’s that simple.
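
For readers who provision instances programmatically rather than through that checkbox, the sketch below shows roughly what the equivalent request body looks like, based on the Compute Engine API’s confidentialInstanceConfig setting; the project, zone, machine type and image values are illustrative placeholders, not anything Google or the interviewee specified.

```python
# Rough sketch (not an official Google example): a request body for creating
# a Confidential VM via the Compute Engine API. Names and values below are
# illustrative placeholders.
confidential_vm_body = {
    "name": "confidential-vm-1",
    # Confidential VMs run on N2D machine types backed by 2nd Gen AMD EPYC CPUs.
    "machineType": "zones/us-central1-a/machineTypes/n2d-standard-2",
    "confidentialInstanceConfig": {"enableConfidentialCompute": True},
    # Confidential VMs terminate (rather than live-migrate) on host maintenance.
    "scheduling": {"onHostMaintenance": "TERMINATE"},
    "disks": [{
        "boot": True,
        "initializeParams": {
            "sourceImage": "projects/debian-cloud/global/images/family/debian-10"
        },
    }],
    "networkInterfaces": [{"network": "global/networks/default"}],
}
# This dict would then be passed to the Compute Engine instances.insert call.
```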


What type of investments does confidential computing require? What technologies and techniques are involved?

To deliver on the promise of confidential computing, customers need to take advantage of security technology offered by modern, high-performance CPUs, which is why Google Cloud’s Confidential VMs run on N2D series VMs powered by 2nd Gen AMD EPYC processors.

To support these environments, we also had to update our own hypervisor and low-level platform stack while also working closely with the open source Linux community and modern operating system distributors to ensure that they can support the technology.

Networking and storage drivers are also critical to the deployment of secure workloads and we had to ensure we were capable of handling confidential computing traffic.

How is confidential computing helping large organizations with a massive work-from-home movement?

As we entered the first few months of dealing with COVID-19, many organizations expected a slowdown in their digital strategy. Instead, we saw the opposite – most customers accelerated their use of cloud-based services. Today, enterprises have to manage a new normal which includes a distributed workforce and new digital strategies.

With workforces dispersed, confidential computing can help organizations collaborate on sensitive workloads in the cloud across geographies and competitors, all while preserving the privacy of confidential datasets. This can lead to the development of transformational technologies – imagine, for example, being able to more quickly build vaccines and cure diseases as a result of this secure collaboration.

How do you see the work of the Confidential Computing Consortium evolving in the near future?

Google was among the founding members of the Confidential Computing Consortium, operating under the umbrella of the Linux Foundation to facilitate adoption of confidential computing.

Cloud providers, hardware manufacturers, and software vendors all need to work together to define standards to advance confidential computing. As the technology garners more interest, sustained industry collaboration such as the Consortium will be key to helping realize the true potential of confidential computing.

Google Responds to Warrants for “About” Searches

One of the things we learned from the Snowden documents is that the NSA conducts “about” searches. That is, searches based on activities and not identifiers. A normal search would be on a name, or IP address, or phone number. An “about” search would be something like “show me anyone who has used this particular name in a communication,” or “show me anyone who was at this particular location within this time frame.” These searches are legal when conducted for the purpose of foreign surveillance, but the worry about using them domestically is that they are unconstitutionally broad. After all, the only way to know who said a particular name is to know what everyone said, and the only way to know who was at a particular location is to know where everyone was. The very nature of these searches requires mass surveillance.
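
As a toy illustration of why such queries are inherently broad, consider a hypothetical search log: answering “who searched for this term in this window?” requires examining every user’s records to find the handful that match. The schema and data below are invented for the example.

```python
# Toy illustration (invented schema and data) of why an "about" search is
# inherently broad: answering "who searched for this term in this window?"
# means scanning every user's records to find the few that match.
from datetime import datetime

search_logs = [
    {"user": "u1", "query": "123 main st directions", "ts": datetime(2020, 4, 3, 22, 10)},
    {"user": "u2", "query": "pizza near me",           "ts": datetime(2020, 4, 4, 1, 5)},
    {"user": "u3", "query": "123 main st",             "ts": datetime(2020, 4, 4, 2, 40)},
]

def about_search(term, start, end):
    # Every record must be examined, even though only a handful will match.
    return [r["user"] for r in search_logs
            if term in r["query"] and start <= r["ts"] <= end]

print(about_search("123 main st", datetime(2020, 4, 3), datetime(2020, 4, 5)))  # ['u1', 'u3']
```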

The FBI does not conduct mass surveillance. But many US corporations do, as a normal part of their business model. And the FBI uses that surveillance infrastructure to conduct its own about searches. Here’s an arson case where the FBI asked Google who searched for a particular street address:

Homeland Security special agent Sylvette Reynoso testified that her team began by asking Google to produce a list of public IP addresses used to google the home of the victim in the run-up to the arson. The Chocolate Factory [Google] complied with the warrant, and gave the investigators the list. As Reynoso put it:

On June 15, 2020, the Honorable Ramon E. Reyes, Jr., United States Magistrate Judge for the Eastern District of New York, authorized a search warrant to Google for users who had searched the address of the Residence close in time to the arson.

The records indicated two IPv6 addresses had been used to search for the address three times: one the day before the SUV was set on fire, and the other two about an hour before the attack. The IPv6 addresses were traced to Verizon Wireless, which told the investigators that the addresses were in use by an account belonging to Williams.

Google’s response is that this is rare:

While word of these sort of requests for the identities of people making specific searches will raise the eyebrows of privacy-conscious users, Google told The Register the warrants are a very rare occurrence, and its team fights overly broad or vague requests.

“We vigorously protect the privacy of our users while supporting the important work of law enforcement,” Google’s director of law enforcement and information security Richard Salgado told us. “We require a warrant and push to narrow the scope of these particular demands when overly broad, including by objecting in court when appropriate.

“These data demands represent less than one per cent of total warrants and a small fraction of the overall legal demands for user data that we currently receive.”

Here’s another example of what seems to be “about” data leading to a false arrest.

According to the lawsuit, police investigating the murder knew months before they arrested Molina that the location data obtained from Google often showed him in two places at once, and that he was not the only person who drove the Honda registered under his name.

Avondale police knew almost two months before they arrested Molina that another man, his stepfather, sometimes drove Molina’s white Honda. On October 25, 2018, police obtained records showing that Molina’s Honda had been impounded earlier that year after Molina’s stepfather was caught driving the car without a license.

Data obtained by Avondale police from Google did show that a device logged into Molina’s Google account was in the area at the time of Knight’s murder. Yet on a different date, the location data from Google also showed that Molina was at a retirement community in Scottsdale (where his mother worked) while debit card records showed that Molina had made a purchase at a Walmart across town at the exact same time.

Molina’s attorneys argue that this and other instances like it should have made it clear to Avondale police that Google’s account-location data is not always reliable in determining the actual location of a person.

“About” searches might be rare, but that doesn’t make them a good idea. We have knowingly and willingly built the architecture of a police state, just so companies can show us ads. (And it is increasingly apparent that the advertising-supported Internet is heading for a crash.)

Windstream Enterprise adds Google Assistant and Amazon Alexa to its SD-WAN solution

Windstream Enterprise (WE) has added new Google Assistant and Amazon Alexa voice command features to its SD-WAN solution, enabling network administrators to work more efficiently.

WE already includes Google Assistant and Amazon Alexa integration in its award-winning OfficeSuite UC® solution. This integration with SD-WAN marks the second major voice command innovation, and further demonstrates Windstream Enterprise’s commitment to helping customers streamline their daily activities and simplify their workloads.

SD-WAN customers can now get a pulse on their SD-WAN environment with the following features:

  • SD-WAN daily summary: Provides site status, including the total number of disconnected, connected and impaired sites, as well as sites pending activation.
  • Ticket summary: Presents a high-level readout of total open and recently updated tickets.
  • Ticket activity: Delivers a more granular look at open tickets, including the ID number, opened date, trouble type and location for which the ticket was created.

Through a simple login via WE Connect, Windstream Enterprise’s easy-to-use network management portal, customers gain the convenience of digital voice assistants across both SD-WAN and unified communications to stay apprised of their tasks, network health and workload.

With a simple voice command, customers will be able to say things like: “Ok Google, Ask Windstream to get my SD-WAN overview,” or “Alexa, Ask Windstream to get my ticket summary.” Many more voice commands are available for both SD-WAN and OfficeSuite UC functions.

“Digital voice assistants are becoming essential in everyday life, and use is increasing as work-from-home and hybrid working environments take hold; therefore, Windstream Enterprise is giving customers the same seamless, innovative and high-tech experience with their unified communications and SD-WAN management,” said Mike Frane, vice president of product management at Windstream Enterprise.

“Our philosophy is technology should make our customers’ lives easier and more efficient. We’re delivering on that goal by meeting our customers where they want to interact with us—on their portal, mobile device and now their digital assistant.”

Chrome 86 delivers more security features for mobile users

Google has released Chrome 86 for desktop and mobile, which comes with several new and improved security features for mobile users, including:

  • New password protections
  • Enhanced Safe Browsing
  • Easier password filling
  • Mixed form warnings and mixed downloads warnings/blocks

New password security features in Chrome 86

The Password Checkup feature came first in the form of a Chrome extension, was then built into the Google Account password manager and Chrome, and has now been enhanced with support for the “.well-known/change-password” standard – a W3C specification that defines a well-known URL sites can use to make their change password forms discoverable by tools (e.g., Chrome, or the latest version of Safari).


This change means that, after they’ve been alerted that their password has been compromised, Chrome will take users directly to the right “change password” form. Hopefully, this will spur more users to act upon the alert.
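
For site owners wondering what supporting the standard involves, the snippet below is a minimal sketch: the well-known URL simply redirects to wherever the site’s real change-password form lives. It assumes a Flask app and a hypothetical /settings/password route.

```python
# Minimal sketch of serving the W3C ".well-known/change-password" URL so
# tools like Chrome can find a site's change-password form. The Flask app
# and the /settings/password route are assumptions for illustration.
from flask import Flask, redirect

app = Flask(__name__)

@app.route("/.well-known/change-password")
def well_known_change_password():
    # Redirect to wherever the real change-password form lives on this site.
    return redirect("/settings/password", code=303)

@app.route("/settings/password")
def change_password_form():
    return "Change-password form goes here."

if __name__ == "__main__":
    app.run()
```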

Enhanced Safe Browsing is added to Chrome for Android

Enhanced Safe Browsing mode, which was first introduced in Chrome 83 (for desktop versions), allows users to get a more personalized protection against malicious sites.

“When you turn on Enhanced Safe Browsing, Chrome can proactively protect you against phishing, malware, and other dangerous sites by sharing real-time data with Google’s Safe Browsing service. Among our users who have enabled checking websites and downloads in real time, our predictive phishing protections see a roughly 20% drop in users typing their passwords into phishing sites,” noted AbdelKarim Mardini, Senior Product Manager, Chrome.

In addition to this, Safety Check – an option that allows users to scan their Chrome installation to check whether the browser is up to date, whether the Safe Browsing service is enabled, and whether any of the passwords the user uses have been compromised in a known breach – is now available to Chrome for Android and iOS.

Biometric authentication for autofilling of passwords on iOS

iOS users can finally take advantage of the convenient password autofill option that was made available a few months ago to Android users.

The option allows iOS users to authenticate using Face ID, Touch ID, or their phone passcode before their saved passwords are automatically filled into sites and iOS apps (the Chrome autofill option must be turned on in Settings).


Mixed form/download warnings

Mixed content, i.e., insecure content served from otherwise secure (HTTPS) pages, is a danger to users.

Chrome 86 will warn users when they are about to submit information through a non-secure form embedded in an HTTPS page and when they are about to initiate insecure downloads over non-secure links.

For the moment, Chrome will block the download of executables and archive files over non-secure links, but only show a warning if the user tries to download document files, PDFs, and multimedia files. The next few Chrome versions will block those as well.
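
The sketch below illustrates that policy in simplified form (this is not Chrome’s actual code, and the file-type lists are assumptions): block executables and archives fetched over plain HTTP from an HTTPS page, and warn on documents, PDFs and multimedia.

```python
# Illustrative policy check (not Chrome's implementation): downloads linked
# over plain HTTP from an HTTPS page are blocked if they are executables or
# archives, and merely warned about if they are documents, PDFs or media.
from urllib.parse import urlparse

BLOCK_EXTENSIONS = {".exe", ".msi", ".apk", ".zip", ".rar", ".7z"}
WARN_EXTENSIONS = {".doc", ".docx", ".xls", ".xlsx", ".pdf", ".mp3", ".mp4"}

def mixed_download_verdict(page_url: str, download_url: str) -> str:
    page_secure = urlparse(page_url).scheme == "https"
    link_secure = urlparse(download_url).scheme == "https"
    if not page_secure or link_secure:
        return "allow"  # not a mixed-content download
    ext = "." + download_url.rsplit(".", 1)[-1].lower()
    if ext in BLOCK_EXTENSIONS:
        return "block"
    if ext in WARN_EXTENSIONS:
        return "warn"
    return "allow"

print(mixed_download_verdict("https://example.com", "http://cdn.example.com/setup.exe"))   # block
print(mixed_download_verdict("https://example.com", "http://cdn.example.com/report.pdf"))  # warn
```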

Last but not least, Google has fixed 35 security issues in Chrome 86, including a critical use-after-free vulnerability in payments (CVE-2020-15967).

Google aims to improve security of browser engines, third-party Android devices and apps on Google Play

Google has announced two new security initiatives: one is aimed at helping bug hunters improve the security of various browsers’ JavaScript engines, the other at helping Android OEMs improve the security of the mobile devices they ship.


Fuzzing JavaScript engines

“JavaScript engine security continues to be critical for user safety, as demonstrated by recent in-the-wild zero-day exploits abusing vulnerabilities in v8, the JavaScript engine behind Chrome. Unfortunately, fuzzing JavaScript engines to uncover these vulnerabilities is generally quite expensive due to their high complexity and relatively slow processing of input,” noted Project Zero’s Samuel Groß.

Researchers must also bear the costs of fuzzing in advance, even though their approach may not discover any bugs and, even if it does, they may not receive a reward for finding them. This can deter many of them and, consequently, bugs stay unfixed and exploitable for longer.

That’s why Google is offering $5,000 research grants in the form of Google Compute Engine credits.

Interested researchers must submit a proposal with details about their intended approach and the awarded credits must be used for fuzzing JavaScript engines with the approach described in the proposal.

They can fuzz JavaScriptCore (Safari), v8 (Chrome, Edge), or SpiderMonkey (Firefox), and must report the found vulnerabilities to the affected vendor. They must also publicly report on their findings within 6 months of the grant being awarded.
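
To make the economics concrete, a proposal of the kind described above might start from a black-box harness no more elaborate than the sketch below, which generates random JavaScript and feeds it to a locally built engine shell; the ./d8 path and its -e flag are assumptions about the tester’s setup.

```python
# Very rough black-box harness sketch: generate random JavaScript and feed it
# to a JS engine shell. The ./d8 path and its -e flag are assumptions about
# a locally built engine; abnormal exits are logged for closer inspection.
import random
import subprocess

ENGINE = "./d8"  # assumed path to a JS engine shell (d8, jsc, js, ...)

def random_js(depth=3):
    atoms = ["1", "'a'", "[]", "({})", "Math.random()", "new Array(8)"]
    if depth == 0:
        return random.choice(atoms)
    a, b = random_js(depth - 1), random_js(depth - 1)
    return random.choice([f"({a} + {b})", f"[{a}, {b}].sort()", f"JSON.stringify({a})"])

for _ in range(1000):
    program = f"for (let i = 0; i < 10; i++) {{ {random_js()}; }}"
    try:
        proc = subprocess.run([ENGINE, "-e", program], capture_output=True, timeout=5)
    except subprocess.TimeoutExpired:
        continue
    if proc.returncode not in (0, 1):  # unexpected exit codes are interesting
        print("potential crash:", proc.returncode, program)
```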

Helping third parties in the Android ecosystem

The company is also set on improving the security of the Android ecosystem, and to that end it’s launching the Android Partner Vulnerability Initiative (APVI).

“Until recently, we didn’t have a clear way to process Google-discovered security issues outside of AOSP (Android Open Source Project) code that are unique to a much smaller set of specific Android OEMs,” the company explained.

“The APVI […] covers a wide range of issues impacting device code that is not serviced or maintained by Google (these are handled by the Android Security Bulletins).”

Issues that have already been discovered, as well as those yet to be unearthed, are being shared through this bug tracker.

Simultaneously, the company is looking for a Security Engineering Manager in Android Security who will, among other things, lead a team that “will perform application security assessments against highly sensitive, third party Android apps on Google Play, working to identify vulnerabilities and provide remediation guidance to impacted application developers.”

Google offers high-risk Chrome users additional scanning of risky files

Google is providing a new “risky files” scanning feature to Chrome users enrolled in its Advanced Protection Program (APP).


About the Advanced Protection Program

Google introduced the Advanced Protection Program in 2017.

It’s primarily aimed at users whose accounts are at high risk of compromise through targeted attacks – journalists, human rights and civil society activists, campaign staffers and people in abusive relationships, executives and specific employees – but anyone can sign up for it.

It offers:

  • Anti-phishing protection: even if attackers steal users’ credentials, they still need the security key or smartphone in the user’s possession to gain access to the account
  • Extra protection from harmful downloads
  • Protection from malicious third-party apps that may want to access users’ Google Account.

Some features, like the one announced on Wednesday, will work only if the user uses Google Chrome and is signed into it with their Advanced Protection Program identity.

Additional scanning

Chrome started warning APP users last year when a downloaded file may be malicious; now it will also give them the ability to send risky files for additional scanning by Google Safe Browsing’s full suite of malware detection technology before opening them.

“When a user downloads a file, Safe Browsing will perform a quick check using metadata, such as hashes of the file, to evaluate whether it appears potentially suspicious. For any downloads that Safe Browsing deems risky, but not clearly unsafe, the user will be presented with a warning and the ability to send the file to be scanned,” Chrome engineers explained.

“If the user chooses to send the file, Chrome will upload it to Google Safe Browsing, which will scan it using its static and dynamic analysis techniques in real time. After a short wait, if Safe Browsing determines the file is unsafe, Chrome will warn the user. As always, users can bypass the warning and open the file without scanning, if they are confident the file is safe. Safe Browsing deletes uploaded files a short time after scanning.”
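
As a rough illustration of the metadata-first flow the engineers describe, the logic looks roughly like the sketch below: hash the download locally, ask a reputation service, and only upload the file for deep scanning if the verdict is “risky but not clearly unsafe” and the user consents. The helper callables are hypothetical stand-ins, not Google’s actual Safe Browsing protocol.

```python
# Sketch of the metadata-first flow described above. The reputation_lookup,
# deep_scan and ask_user callables are hypothetical stand-ins, not Google's
# actual Safe Browsing protocol.
import hashlib

def file_sha256(path: str) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def handle_download(path: str, reputation_lookup, deep_scan, ask_user) -> str:
    verdict = reputation_lookup(file_sha256(path))  # quick metadata check
    if verdict == "safe":
        return "open"
    if verdict == "unsafe":
        return "warn"
    # "risky but not clearly unsafe": offer to upload for full analysis
    if ask_user("Send this file to be scanned?"):
        return "warn" if deep_scan(path) == "unsafe" else "open"
    return "open-with-warning"
```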

Aside from helping users, the new feature is expected to help Google improve its ability to detect malicious files.

Global public cloud services market grew 26% YOY in 2019 with revenues totaling $233.4 billion

The worldwide public cloud services market, including Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS), grew 26% year over year in 2019 with revenues totaling $233.4 billion, according to IDC.


Spending continued to consolidate in 2019 with the combined revenue of the top 5 public cloud service providers (Amazon Web Services, Microsoft, Salesforce.com, Google, and Oracle) capturing more than one third of the worldwide total and growing 36% year over year.

“Cloud is expanding far beyond niche e-commerce and online ad-sponsored searches. It underpins all the digital activities that individuals and enterprises depend upon as we navigate and move beyond the pandemic,” said Rick Villars, group vice president, Worldwide Research at IDC.

“Enterprises talked about cloud journeys of up to ten years. Now they are looking to complete the shift in less than half that time.”

Public cloud services market has doubled since 2016

The public cloud services market has doubled in the three years since 2016. During this same period, the combined spending on IaaS and PaaS has nearly tripled. This highlights the increasing reliance on cloud infrastructure and platforms for deploying enterprise IT internal applications as well as for SaaS and digital application delivery.

Spending on IaaS and PaaS is expected to continue growing at a higher rate than the overall cloud market over the next several years as resilience, flexibility, and agility guide IT platform decisions.
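
A quick arithmetic check shows these figures hang together: doubling over three years implies roughly 26% compound annual growth, in line with the 26% year-over-year figure, while nearly tripling implies around 44%.

```python
# Sanity-checking the growth figures: doubling (or nearly tripling) over
# three years implies the following compound annual growth rates.
def cagr(end_over_start: float, years: int) -> float:
    return end_over_start ** (1 / years) - 1

print(f"doubled in 3 years -> {cagr(2, 3):.1%} CAGR")  # ~26.0%
print(f"tripled in 3 years -> {cagr(3, 3):.1%} CAGR")  # ~44.2%
```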

“Today’s economic uncertainty draws fresh attention to the core benefits of IaaS – low financial commitment, flexibility to support business agility, and operational resilience,” said Deepak Mohan, research director, Cloud Infrastructure Services.

“Cost optimization and business resilience have emerged as top drivers of IT investment decisions and IaaS offerings are designed to enable both. The COVID-19 disruption has accelerated cloud adoption with both traditional enterprise IT organizations and digital service providers increasing use of IaaS for their technology platforms.”

“Digitizing processes is being prioritized by enterprises in every industry segment and that is accelerating the demand for new applications as well as repurposing existing applications,” said Larry Carvalho, research director, Platform as a Service.

“Modern application platforms powered by containers and the serverless approach are providing the necessary tools for developers in meeting these needs. The growth in PaaS revenue reflects the need by enterprises for tools to accelerate and automate the development lifecycle.”

“SaaS applications remains the largest segment of public cloud spending with revenues of more than $122 billion in 2019. Although growth has slowed somewhat in recent years, the current crisis serves as an accelerator for SaaS adoption across primary and functional markets to address the exponential growth of remote workers,” said Frank Della Rosa, research director, SaaS and Cloud Software.

The combined IaaS and PaaS market

A combined view of IaaS and PaaS spending is relevant because it represents how end customers consume these services when deploying applications on public cloud. In the combined IaaS and PaaS market, Amazon Web Services and Microsoft captured more than half of global revenues.

But there continues to be a healthy long tail, representing over a third of the market. These are typically companies with targeted use case-specific PaaS offerings. The long tail is even more pronounced in SaaS, where nearly three quarters of the spending is captured outside the top 5.

Chrome 86 will prominently warn about insecure forms on secure pages

Entering information into and submitting it through insecure online forms will come with very explicit warnings in the upcoming Chrome 86, Google has announced.

The new alerts

The browser will show a warning when a user begins filling out a mixed form (a form on an HTTPS site that does not submit through an HTTPS channel) and when a user tries to submit a mixed form.
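
The condition Chrome is flagging can be checked with a few lines of code; the sketch below (an illustration, not Chrome’s implementation) walks a page’s forms and reports any whose action resolves to plain HTTP while the page itself is HTTPS. The example page and URLs are made up.

```python
# Illustrative check (not Chrome's code): find forms on an HTTPS page whose
# action would submit over plain HTTP.
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse

class MixedFormFinder(HTMLParser):
    def __init__(self, page_url: str):
        super().__init__()
        self.page_url = page_url
        self.mixed_actions = []

    def handle_starttag(self, tag, attrs):
        if tag != "form":
            return
        action = dict(attrs).get("action", "")
        target = urljoin(self.page_url, action)
        if urlparse(self.page_url).scheme == "https" and urlparse(target).scheme == "http":
            self.mixed_actions.append(target)

finder = MixedFormFinder("https://shop.example.com/checkout")
finder.feed('<form action="http://pay.example.com/submit"><input name="card"></form>')
print(finder.mixed_actions)  # ['http://pay.example.com/submit']
```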


“Before M86, mixed forms were only marked by removing the lock icon from the address bar. We saw that users found this experience unclear and it did not effectively communicate the risks associated with submitting data in insecure forms,” Shweta Panditrao, a software engineer with the Chrome Security Team, explained.

The last warning will be impossible to miss, as it will be shown on a full page.


The submission of the information will be temporarily blocked, and it’s up to users to decide whether they want to risk it and override the block to submit the form anyway.

Google is also planning to disable the autofill feature of the browser’s password manager on all mixed forms except login forms (forms that require users to enter their username and password).

“Chrome’s password manager helps users input unique passwords, and it is safer to use unique passwords even on forms that are submitted insecurely, than to reuse passwords,” Panditrao explained the rationale for that exception.

Simultaneously, Google encouraged developers to fully migrate forms on their site to HTTPS to protect their users.

Google’s push towards HTTPS and blocking mixed content

For many years, Google has been working on making HTTPS the standard for any and every online action.

In 2014, the company started prioritizing websites using HTTPS in Google Search results.

In 2017, Chrome started labeling sites that transmit passwords or credit cards information over HTTP as “Not secure”. Later that same year, Chrome started showing the same alert for resources delivered over the FTP protocol.

Then, in 2018, Chrome began explicitly marking all HTTP sites as “not secure”.

In 2019, Google published a roadmap for Chrome’s gradual but inexorable push towards blocking mixed content (insecure HTTP subresources – images, audio, and video – loading on HTTPS pages).

Earlier this year, it did the same for mixed content downloads, an effort that is supposed to be finalized in Chrome 86, which is slated to be released in October 2020.

Users turn to independent search engines for privacy, but also get misinformation

Anti-vaccine websites, which could play a key role in promoting public hesitancy about a potential COVID-19 vaccine, are far more likely to be found via independent search engines than through an internet giant like Google.

Misinformed while looking for privacy

The study, led by researchers at Brighton and Sussex Medical School (BSMS), showed that independent search engines returned between 3 and 16 anti-vaccine websites in the first 30 results, while Google.com returned none. Lead author … More


Leading tech companies certify IoT devices via ioXt Alliance

The ioXt Alliance announced that major technology companies and manufacturers including Google, T-Mobile, Silicon Labs and more, certified a wide range of devices through the ioXt Alliance Certification Program.

Devices certified secure by the ioXt Alliance include cell phones, smart home, lighting controls, IoT Bluetooth, smart retail, portable medical, pet trackers, routers and automotive technology.

The ioXt Alliance is backed by the biggest names in tech and is the only organization positioned to handle the rapidly increasing demand for IoT device certifications that meet security requirements across every product category.

With major manufacturers and tech disruptors on their board, membership growing and four Authorized Labs as exclusive test providers, the ioXt Alliance continues to pave the way in defining industry-led global security standards that can be tested at scale.

“While consumers have long called for better device security and privacy protections, we understand that retailers are now putting tremendous pressure on consumer tech to ensure the IoT products they put on their shelves are secure,” said Brad Ree, CTO of the ioXt Alliance.

“With significant revenue on the line, companies are recognizing the need to provide transparency and assurance to those using or selling their products. We are proud to be the organization that Google, T-Mobile and other big players in the industry are increasingly relying on to thoroughly test and certify products as secure, no matter the type of device.”

“Transparency about the security ‘ingredients’ in connected devices acts as a tide to raise all boats, helping users make better decisions and the world realize the potential of the Internet of Things,” said Dave Kleidermacher, Google VP of Engineering, Android Security and Privacy.

“The over 200 members of ioXt have built a comprehensive, scalable security compliance program to realize that vision.”

Focused on security, upgradability and transparency, the ioXt Certification Program evaluates a device against each of the eight ioXt pledge principles with clear guidelines for quantifying the appropriate level of security needed for a specific device within a product category.

Evaluations against the ioXt Pledge are done via manufacturer attestation or through the ioXt Alliance Authorized Labs which include Bureau Veritas – 7layers, DEKRA and NCC Group.

Each has a deep history in compliance and security testing expertise at a global scale, is well-versed in the definition of the ioXt Alliance security standards, and provides the third-party validation of device test results that ensures all devices are cybersafe.

Devices then receive the ioXt SmartCert after meeting or exceeding the requirements in its designated product category.

Darren Kress, Sr. Director of Telecom Security at T-Mobile, says, “The ioXt Alliance is bringing effective and appropriate IoT security enhancements to market while avoiding excessive cost and complexity. This structure will encourage the development of trustworthy IoT devices and services enhancing the security of T-Mobile’s Customers and the Wireless ecosystem.”

Devices certified by the ioXt Alliance

Smart home

  • DSR Corporation Flyfish Gateway
  • T-Mobile Home Internet Gateway
  • T-Mobile SyncUP Pets

Smart building

  • Acuity Brands nLight ECLYPSE Lighting Controller
  • LEEDARSON Tunable White Bulb

Connected automotive

  • T-Mobile SyncUP Drive

Bluetooth connectivity

  • Silicon Labs xG22 Thunderboard (Smart Home, Smart Retail, Portable Medical)

Cellular / mobile

  • Google Pixel 4
  • Google Pixel 4a
  • Google Pixel XL

“IoT products are working their way into every aspect of our lives as consumers and in business which offers those with malicious intent a vector of which to prey and security is not optional,” said Mike Dow, Senior Product Manager of IoT Security at Silicon Labs and ioXt Alliance board member.

“The ioXt Alliance Certification Program is not locked into ideas of the past and understands that every type of device needs its own security profile to define the right level of certification and certification can be effectively scaled.”

Chinese-made drone app in Google Play spooks security researchers

A DJI Phantom 4 quadcopter drone.


The Android version of DJI Go 4—an app that lets users control drones—has until recently been covertly collecting sensitive user data and can download and execute code of the developers’ choice, researchers said in two reports that question the security and trustworthiness of a program with more than 1 million Google Play downloads.

The app is used to control and collect near real-time video and flight data from drones made by China-based DJI, the world’s biggest maker of commercial drones. The Play Store shows that it has more than 1 million downloads, but because of the way Google discloses numbers, the true number could be as high as 5 million. The app has a rating of three-and-a-half stars out of a possible total of five from more than 52,000 users.

Wide array of sensitive user data

Two weeks ago, security firm Synacktiv reverse-engineered the app. On Thursday, fellow security firm Grimm published the results of its own independent analysis. At a minimum, both found that the app skirted Google’s terms and that, until recently, the app covertly collected a wide array of sensitive user data and sent it to servers located in mainland China. A worst-case scenario is that developers are abusing hard-to-identify features to spy on users.

According to the reports, the suspicious behaviors include:

  • The ability to download and install any application of the developers’ choice through either a self-update feature or a dedicated installer in a software development kit provided by China-based social media platform Weibo. Both features could download code outside of Play, in violation of Google’s terms.
  • A recently removed component that collected a wealth of phone data including IMEI, IMSI, carrier name, SIM serial number, SD card information, OS language, kernel version, screen size and brightness, wireless network name, address and MAC, and Bluetooth addresses. These details and more were sent to MobTech, maker of a software development kit used until the most recent release of the app. (A simple static-analysis triage for these identifier APIs is sketched after this list.)
  • Automatic restarts whenever a user swiped the app to close it. The restarts cause the app to run in the background and continue to make network requests.
  • Advanced obfuscation techniques that make third-party analysis of the app time-consuming.
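
The data-collection findings above are the kind of thing a static triage pass can surface. The sketch below is an illustration, not the researchers’ tooling: it greps decompiled app sources for Android APIs that return hardware identifiers. The input directory and the API list are assumptions.

```python
# Illustrative static triage (not the researchers' tooling): grep decompiled
# app sources for Android APIs that return hardware identifiers.
import os
import re

IDENTIFIER_APIS = [
    "getDeviceId",         # IMEI (TelephonyManager)
    "getSubscriberId",     # IMSI (TelephonyManager)
    "getSimSerialNumber",  # SIM serial number (TelephonyManager)
    "getMacAddress",       # Wi-Fi MAC (WifiInfo)
]

def scan_decompiled_sources(root: str):
    pattern = re.compile("|".join(IDENTIFIER_APIS))
    hits = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if not name.endswith((".smali", ".java")):
                continue
            path = os.path.join(dirpath, name)
            with open(path, errors="ignore") as f:
                for lineno, line in enumerate(f, 1):
                    if pattern.search(line):
                        hits.append((path, lineno, line.strip()))
    return hits

# Example usage: hits = scan_decompiled_sources("dji_go4_decompiled/")
```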

This month’s reports come three years after the US Army banned the use of DJI drones for reasons that remain classified. In January, the Interior Department grounded drones from DJI and other Chinese manufacturers out of concerns data could be sent back to the mainland.

DJI officials said the researchers found “hypothetical vulnerabilities” and that neither report provided any evidence that they were ever exploited.

“The app update function described in these reports serves the very important safety goal of mitigating the use of hacked apps that seek to override our geofencing or altitude limitation features,” they wrote in a statement. Geofencing creates virtual barriers around areas that the Federal Aviation Administration or other authorities bar drones from entering. Drones use GPS, Bluetooth, and other technologies to enforce the restrictions.

A Google spokesman said the company is looking into the reports. The researchers said the iOS version of the app contained no obfuscation or update mechanisms.

Obfuscated, acquisitive, and always on

In several respects, the researchers said, DJI Go 4 for Android mimicked the behavior of botnets and malware. Both the self-update and auto-install components, for instance, call a developer-designated server and await commands to download and install code or apps. The obfuscation techniques closely resembled those used by malware to prevent researchers from discovering its true purpose. Other similarities were an always-on status and the collection of sensitive data that wasn’t relevant or necessary for the stated purpose of flying drones.

Making the behavior more concerning is the breadth of permissions required to use the app, which include access to contacts, microphone, camera, location, storage, and the ability to change network connectivity. Such sprawling permissions meant that the servers of DJI or Weibo, both located in a country known for its government-sponsored espionage hacking, had almost full control over users’ devices, the researchers said.

Both research teams said they saw no evidence the app installer was ever actually used, but they did see the automatic update mechanism trigger and download a new version from the DJI server and install it. The download URLs for both features are dynamically generated, meaning they are provided by a remote server and can be changed at any time.

The researchers from both firms conducted experiments that showed how both mechanisms could be used to install arbitrary apps. While the programs were delivered automatically, the researchers still had to click their approval before the programs could be installed.

Both research reports stopped short of saying the app actually targeted individuals, and both noted that the collection of IMSIs and other data had ended with the release of current version 4.3.36. The teams, however, didn’t rule out the possibility of nefarious uses. Grimm researchers wrote:

In the best case scenario, these features are only used to install legitimate versions of applications that may be of interest to the user, such as suggesting additional DJI or Weibo applications. In this case, the much more common technique is to display the additional application in the Google Play Store app by linking to it from within your application. Then, if the user chooses to, they can install the application directly from the Google Play Store. Similarly, the self-updating components may only be used to provide users with the most up-to-date version of the application. However, this can be more easily accomplished through the Google Play Store.

In the worst case, these features can be used to target specific users with malicious updates or applications that could be used to exploit the user’s phone. Given the amount of user’s information retrieved from their device, DJI or Weibo would easily be able to identify specific targets of interest. The next step in exploiting these targets would be to suggest a new application (via the Weibo SDK) or update the DJI application with a customized version built specifically to exploit their device. Once their device has been exploited, it could be used to gather additional information from the phone, track the user via the phone’s various sensors, or be used as a springboard to attack other devices on the phone’s WiFi network. This targeting system would allow an attacker to be much stealthier with their exploitation, rather than much noisier techniques, such as exploiting all devices visiting a website.

DJI responds

DJI officials have published an exhaustive and vigorous response that said that all the features and components detailed in the reports either served legitimate purposes or were unilaterally removed and weren’t used maliciously.

“We design our systems so DJI customers have full control over how or whether to share their photos, videos and flight logs, and we support the creation of industry standards for drone data security that will provide protection and confidence for all drone users,” the statement said. It provided the following point-by-point discussion:

  • When our systems detect that a DJI app is not the official version – for example, if it has been modified to remove critical flight safety features like geofencing or altitude restrictions – we notify the user and require them to download the most recent official version of the app from our website. In future versions, users will also be able to download the official version from Google Play if it is available in their country. If users do not consent to doing so, their unauthorized (hacked) version of the app will be disabled for safety reasons.
  • Unauthorized modifications to DJI control apps have raised concerns in the past, and this technique is designed to help ensure that our comprehensive airspace safety measures are applied consistently.
  • Because our recreational customers often want to share their photos and videos with friends and family on social media, DJI integrates our consumer apps with the leading social media sites via their native SDKs. We must direct questions about the security of these SDKs to their respective social media services. However, please note that the SDK is only used when our users proactively turn it on.
  • DJI GO 4 is not able to restart itself without input from the user, and we are investigating why these researchers claim it did so. We have not been able to replicate this behavior in our tests so far.
  • The hypothetical vulnerabilities outlined in these reports are best characterized as potential bugs, which we have proactively tried to identify through our Bug Bounty Program, where security researchers responsibly disclose security issues they discover in exchange for payments of up to $30,000. Since all DJI flight control apps are designed to work in any country, we have been able to improve our software thanks to contributions from researchers all over the world, as seen on this list.
  • The MobTech and Bugly components identified in these reports were previously removed from DJI flight control apps after earlier researchers identified potential security flaws in them. Again, there is no evidence they were ever exploited, and they were not used in DJI’s flight control systems for government and professional customers.
  • The DJI GO4 app is primarily used to control our recreational drone products. DJI’s drone products designed for government agencies do not transmit data to DJI and are compatible only with a non-commercially available version of the DJI Pilot app. The software for these drones is only updated via an offline process, meaning this report is irrelevant to drones intended for sensitive government use. A recent security report from Booz Allen Hamilton audited these systems and found no evidence that the data or information collected by these drones is being transmitted to DJI, China, or any other unexpected party.
  • This is only the latest independent validation of the security of DJI products following reviews by the U.S. National Oceanic and Atmospheric Administration, U.S. cybersecurity firm Kivu Consulting, the U.S. Department of Interior and the U.S. Department of Homeland Security.
  • DJI has long called for the creation of industry standards for drone data security, a process which we hope will continue to provide appropriate protections for drone users with security concerns. If this type of feature, intended to assure safety, is a concern, it should be addressed in objective standards that can be specified by customers. DJI is committed to protecting drone user data, which is why we design our systems so drone users have control of whether they share any data with us. We also are committed to safety, trying to contribute technology solutions to keep the airspace safe.

Don’t forget the Android app mess

The research and DJI’s response underscore the disarray of Google’s current app procurement system. Ineffective vetting, the lack of permission granularity in older versions of Android, and the openness of the operating system make it easy to publish malicious apps in the Play Store. Those same things also make it easy to mistake legitimate functions for malicious ones.

People who have DJI Go 4 for Android installed may want to remove it at least until Google announces the results of its investigation (the reported automatic restart behavior means it’s not sufficient to simply curtail use of the app for the time being). Ultimately, users of the app find themselves in a similar position as that of TikTok, which has also aroused suspicions, both because of some behavior considered sketchy by some and because of its ownership by China-based ByteDance.

There’s little doubt that plenty of Android apps with no ties to China commit similar or worse infractions than those attributed to DJI Go 4 and TikTok. People who want to err on the side of security should steer clear of a large majority of them.

How secure is your web browser?

NSS Labs released the results of its web browser security test after evaluating Google Chrome, Microsoft Edge, Mozilla Firefox, and Opera for phishing protection and malware protection.

Key takeaways

  • Phishing protection rates ranged from 79.2% to 95.5%
  • For malware, the highest block rate was 98.5% and the lowest block rate was 5.6%
  • Protection improved over time; the most consistent products provided the best protection against phishing and malware.

Email, instant messages, SMS messages and links on social networking sites are used by criminals to lure victims to download and install malware disguised as legitimate software (a.k.a. socially engineered malware). Once the malware is installed, victims are subjected to identity theft, bank account compromise, and other devastating consequences.

Those same techniques are also used for phishing attacks, where victims are lured to websites impersonating banking, social media, charity, payroll, and other legitimate websites; victims are then tricked into providing passwords, credit card and bank account numbers, and other private information.

In addition, malicious landing pages (URLs) on phishing websites are another way attackers exploit victims’ computers and silently install malicious software.

Protecting against malware and phishing

The ability to warn potential victims that they are about to stray onto a malicious website puts web browsers in a unique position to combat phishing, malware, and other criminal attacks.

To protect against malware and phishing attacks, browsers use cloud-based reputation systems that scour the internet for malicious websites and categorize content accordingly, either by adding it to blocklists or allowlists, or by assigning it a score.
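
As a rough illustration of how such a reputation check might be wired up, the Python sketch below consults a local allowlist and blocklist and falls back to a scored lookup. Every name in it, including the fetch_reputation_score stub and its 0-to-1 scale, is a hypothetical placeholder rather than any browser vendor's actual implementation.

```python
# Minimal sketch of a reputation-based URL check, loosely modelled on the
# blocklist/allowlist/score approach described above. All names and the
# 0.0-1.0 scoring scale are hypothetical, not any browser's real API.
from urllib.parse import urlparse

ALLOWLIST = {"example.com", "intranet.corp.example"}
BLOCKLIST = {"phish-bank-login.example", "free-gift-cards.example"}

def fetch_reputation_score(host: str) -> float:
    """Placeholder for a cloud reputation lookup (0.0 = benign, 1.0 = malicious)."""
    # A real browser would query a cloud service here; we return a stub value.
    return 0.9 if "login" in host else 0.1

def should_block(url: str, threshold: float = 0.7) -> bool:
    host = urlparse(url).hostname or ""
    if host in ALLOWLIST:
        return False
    if host in BLOCKLIST:
        return True
    return fetch_reputation_score(host) >= threshold

print(should_block("https://phish-bank-login.example/verify"))  # True
print(should_block("https://example.com/"))                     # False
```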

“As a result of the COVID-19 pandemic, employees have been forced to work from home and now have unprecedented remote access to corporate resources. Threat actors are shifting tactics to target these remote employees who may not benefit from corporate protection. This makes the protection offered by web browsers more important than ever,” said Vikram Phatak, founder of NSS Labs.

Tested browsers

  • Google Chrome – version 81.0.4044.113 – 81.0.4044.138
  • Microsoft Edge – version 83.0.478.10 – 84.0.516.1
  • Mozilla Firefox – version 75.0 – 76.0.1
  • Opera – version 67.0.3575.137 – 68.0.3618.125

New technique keeps your online photos safe from facial recognition algorithms

In one second, the human eye can only scan through a few photographs. Computers, on the other hand, are capable of performing billions of calculations in the same amount of time. With the explosion of social media, images have become the new social currency on the internet.

An AI algorithm will identify a cat in the picture on the left but will not detect a cat in the picture on the right

Today, Facebook and Instagram can automatically tag a user in photos, while Google Photos can group one’s photos together via the people present in those photos using Google’s own image recognition technology.

Dealing with threats to digital privacy today therefore extends beyond stopping humans from seeing photos; it also means preventing machines from harvesting personal data from images. The frontiers of privacy protection now need to be extended to include machines.

Safeguarding sensitive information in photos

Led by Professor Mohan Kankanhalli, Dean of the School of Computing at the National University of Singapore (NUS), the research team from the School’s Department of Computer Science has developed a technique that safeguards sensitive information in photos by making subtle changes that are almost imperceptible to humans but render selected features undetectable by known algorithms.

Applying visual distortion with currently available technologies ruins the aesthetics of a photograph, because the image needs to be heavily altered to fool the machines. To overcome this limitation, the research team developed a “human sensitivity map” that quantifies how humans react to visual distortion in different parts of an image across a wide variety of scenes.

The development process started with a study involving 234 participants and a set of 860 images. Participants were shown two copies of the same image and they had to pick out the copy that was visually distorted.

After analysing the results, the research team found that human sensitivity is influenced by multiple factors, including illumination, texture, object sentiment and semantics.

Applying visual distortion with minimal disruption

Using this “human sensitivity map”, the team fine-tuned their technique to apply visual distortion with minimal disruption to image aesthetics by injecting it into areas with low human sensitivity, as the sketch below illustrates.
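
A minimal sketch of that weighting idea, assuming a grayscale image and a precomputed sensitivity map held as NumPy arrays, follows. The random noise stands in for whatever perturbation the actual method computes; only the “more distortion where humans notice less” principle is illustrated, not the NUS team's algorithm.

```python
# Sketch: scale a perturbation by (1 - human sensitivity) so that most of the
# distortion lands in regions where viewers are least likely to notice it.
# The noise here is a stand-in for the real method's perturbation; only the
# weighting idea is illustrated.
import numpy as np

def apply_weighted_distortion(image: np.ndarray,
                              sensitivity_map: np.ndarray,
                              strength: float = 8.0,
                              seed: int = 0) -> np.ndarray:
    """image: HxW float array in [0, 255]; sensitivity_map: HxW in [0, 1]."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(image.shape)   # stand-in perturbation
    weight = 1.0 - sensitivity_map             # low sensitivity -> more distortion
    distorted = image + strength * weight * noise
    return np.clip(distorted, 0, 255)

# Toy usage: a flat image with one highly sensitive (e.g., textured) region.
img = np.full((64, 64), 128.0)
sens = np.zeros((64, 64))
sens[16:48, 16:48] = 0.9
out = apply_weighted_distortion(img, sens)
print(float(abs(out - img)[0, 0]), float(abs(out - img)[32, 32]))
```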

It took the NUS team six months of research to develop this novel technique.

“It is too late to stop people from posting photos on social media in the interest of digital privacy. However, the reliance on AI is something we can target as the threat from human stalkers pales in comparison to the might of machines. Our solution enables the best of both worlds as users can still post their photos online safe from the prying eye of an algorithm,” said Prof Kankanhalli.

End users can use this technology to help mask vital attributes in their photos before posting them online, and social media platforms could also integrate it into their systems by default. This would introduce an additional layer of privacy protection and peace of mind.

The team also plans to extend this technology to videos, which is another prominent type of media frequently shared on social media platforms.

New technique protects consumers from voice spoofing attacks

Researchers from CSIRO’s Data61 have developed a new technique to protect consumers from voice spoofing attacks.

Fraudsters can record a person’s voice commands to voice assistants like Amazon Alexa or Google Assistant and replay them to impersonate that individual. They can also stitch samples together to mimic a person’s voice in order to spoof, or trick, third parties.

Detecting when hackers are attempting to spoof a system

The new solution, called Void (Voice liveness detection), can be embedded in a smartphone or voice assistant software and works by identifying the differences in spectral power between a live human voice and a voice replayed through a speaker, in order to detect when hackers are attempting to spoof a system.

Consumers use voice assistants to shop online, make phone calls, send messages, control smart home appliances and access banking services.

Muhammad Ejaz Ahmed, Cybersecurity Research Scientist at CSIRO’s Data61, said privacy preserving technologies are becoming increasingly important in enhancing consumer privacy and security as voice technologies become part of daily life.

“Voice spoofing attacks can be used to make purchases using a victim’s credit card details, control Internet of Things connected devices like smart appliances and give hackers unsolicited access to personal consumer data such as financial information, home addresses and more,” Mr Ahmed said.

“Although voice spoofing is known as one of the easiest attacks to perform, as it simply involves a recording of the victim’s voice, it is incredibly difficult to detect because the recorded voice has characteristics similar to the victim’s live voice. Void is game-changing technology that allows for more efficient and accurate detection, helping to prevent people’s voice commands from being misused.”

Relying on insights from spectrograms

Unlike existing voice spoofing detection techniques, which typically rely on deep learning models, Void was designed around insights from spectrograms (visual representations of a signal’s spectrum of frequencies as it varies with time) to detect the ‘liveness’ of a voice.
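
To make the spectral-power idea concrete, here is a small Python sketch that computes a spectrogram with SciPy and derives one simple cue: the fraction of total power below a cutoff frequency. The cutoff, the decision band, and the feature itself are illustrative assumptions, not Void's published feature set.

```python
# Sketch: one spectral-power feature over a spectrogram, in the spirit of
# (but not equivalent to) Void's liveness analysis. The cutoff frequency and
# the decision band are placeholders; a real detector would combine many
# calibrated cues.
import numpy as np
from scipy.signal import spectrogram

def low_freq_power_ratio(audio: np.ndarray, sample_rate: int,
                         cutoff_hz: float = 1000.0) -> float:
    """Fraction of total spectral power at or below cutoff_hz."""
    freqs, _, sxx = spectrogram(audio, fs=sample_rate)
    total = sxx.sum()
    if total == 0:
        return 0.0
    return float(sxx[freqs <= cutoff_hz].sum() / total)

def classify(audio: np.ndarray, sample_rate: int,
             live_band: tuple = (0.2, 0.8)) -> str:
    # Placeholder decision rule: loudspeakers reshape how power is distributed
    # across frequencies, so a real system would calibrate this band on
    # labelled live vs. replayed recordings.
    ratio = low_freq_power_ratio(audio, sample_rate)
    return "live" if live_band[0] <= ratio <= live_band[1] else "possible replay"

# Toy usage with synthetic audio (1 second at 16 kHz).
sr = 16000
t = np.linspace(0, 1, sr, endpoint=False)
voice_like = np.sin(2 * np.pi * 220 * t) + 0.3 * np.sin(2 * np.pi * 440 * t)
print(low_freq_power_ratio(voice_like, sr))
print(classify(voice_like, sr))
```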

This technique provides a highly accurate outcome, detecting attacks eight times faster than deep learning methods, and uses 153 times less memory, making it a viable and lightweight solution that could be incorporated into smart devices.

Void has been tested using datasets from Samsung and the Automatic Speaker Verification Spoofing and Countermeasures challenges, achieving accuracies of 99 per cent and 94 per cent, respectively.

Research estimates that by 2023, as many as 275 million voice assistant devices will be used to control homes across the globe — a growth of 1000 percent since 2018.

How to protect data when using voice assistants

Dr Adnene Guabtni, Senior Research Scientist at CSIRO’s Data61, shares tips for consumers on how to protect their data when using voice assistants:

  • Always change your voice assistant settings to only activate the assistant using a physical action, such as pressing a button.
  • On mobile devices, make sure the voice assistant can only activate when the device is unlocked.
  • Turn off all home voice assistants before you leave your house, to reduce the risk of successful voice spoofing while you are out of the house.
  • Voice spoofing requires hackers to get samples of your voice. Make sure you regularly delete any voice data that Google, Apple or Amazon store.
  • Try to limit the use of voice assistants to commands that do not involve online purchases or authorizations – hackers or people around you might record you issuing payment commands and replay them at a later stage.

Things to keep in mind when downloading apps from G Suite Marketplace

Security researchers have tested nearly 1,000 enterprise apps offered on Google’s G Suite Marketplace and discovered that many ask for permission to access user data via Google APIs, as well as to communicate with (sometimes undisclosed) external services.

“The request to ‘Connect to an external service’ is notable, as it indicates apps can communicate with other online APIs that neither Google nor the app developer might control,” they pointed out.

They also noted that the app authorization prompt only discloses whether an app can connect to external services; it neither names those external services nor explains what the app uses those APIs for.

“While some developers do elaborate on this in their apps’ Marketplace listings or external privacy policies, a cursory spot check on a selection of these 481 apps shows this is not always the case,” they added, meaning that users often don’t know what other services might receive their private user information.

About the G Suite Marketplace

The G Suite Marketplace is an online “app store” from which enterprise applications that are integrated with G Suite can be added to an entire domain or to individual G Suite accounts.

Applications installed from the Marketplace can be launched directly from within the various G Suite products (Gmail, Drive, Docs, etc.).

Both end users and G Suite administrators can discover and install new apps from the Marketplace. The latter can find, install, and authorize apps for some or all of their users via the G Suite Admin console.

App verification

“In order to curb potential abuse of users’ private data, Google’s policy requires app developers to submit their products for review if they call API functions that ‘allow access to Google User Data’. This review takes 3 to 5 days for apps that use ‘sensitive’ API calls, or 4 to 8 weeks for apps that use the subset of ‘restricted’ API calls specifically concerning Gmail or Google Drive data,” the researchers additionally noted.

So, this unverified status can last for a while and, in the meantime, Google ostensibly prevents more than 100 users from installing the app (while warning them to install it only if they know and trust the developer).

This use limit is also not strictly enforced, researchers Irwin Reyes and Michael Lack with Two Six Labs found.

“One of these still-unverified apps drew our attention in particular: ezShared Contacts. This app gained over 1,000 users between the two times when we scraped the Marketplace. Among its disclosed authorizations are ‘Read, compose, send, and permanently delete all your email from Gmail,’ ‘See, edit, download, and permanently delete your contacts,’ and ‘Connect to an external service.’”
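
As a rough way to reason about such disclosures, the sketch below sorts an app's requested OAuth scopes into the “restricted” (Gmail/Drive) and “sensitive” tiers mentioned in the review process above. The scope URLs are commonly used Google OAuth scopes, but the classification lists and the triage logic are illustrative assumptions, not Google's official registry.

```python
# Sketch: triage an app's requested OAuth scopes into the review tiers
# discussed above. The scope lists are illustrative; Google maintains the
# authoritative classification of sensitive vs. restricted scopes.
RESTRICTED_SCOPES = {
    "https://mail.google.com/",               # full Gmail access
    "https://www.googleapis.com/auth/drive",  # full Drive access
}
SENSITIVE_SCOPES = {
    "https://www.googleapis.com/auth/contacts",
    "https://www.googleapis.com/auth/calendar",
}

def triage_scopes(requested: list[str]) -> dict[str, list[str]]:
    buckets = {"restricted": [], "sensitive": [], "other": []}
    for scope in requested:
        if scope in RESTRICTED_SCOPES:
            buckets["restricted"].append(scope)
        elif scope in SENSITIVE_SCOPES:
            buckets["sensitive"].append(scope)
        else:
            buckets["other"].append(scope)
    return buckets

# Example: scopes resembling the disclosures quoted above (illustrative only).
print(triage_scopes([
    "https://mail.google.com/",
    "https://www.googleapis.com/auth/contacts",
    "https://www.googleapis.com/auth/script.external_request",  # "connect to an external service"
]))
```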

Google performs the aforementioned review, and it also offers app developers the option to receive a trust-inducing badge on their G Suite Marketplace listing once they pass a security assessment by a third-party security firm. Even so, users should keep in mind that Google does not accept responsibility for any compromise or loss of data that may result from using a G Suite Marketplace app, so they should evaluate the potential risks themselves by reviewing the app’s permissions request shown at install time.

Google could make that decision easier by providing information about the external services that might gain indirect access to users’ sensitive Google account data if the requested permissions are granted.

The researchers also suggested that Google consider showing the permissions request when an app is first run or when specific functionality is first used (instead of at install time), as users are more likely to understand the request and make an informed decision at that point.

Google fixes Android flaws that allow code execution with high system rights

Google has shipped security patches for dozens of vulnerabilities in its Android mobile operating system, two of which could allow hackers to remotely execute malicious code with extremely high system rights.

In some cases, the malicious code could run with highly elevated privileges, a possibility that raises the severity of the bugs. That’s because the bugs, located in the Android System component, could enable a specially crafted transmission to execute arbitrary code within the context of a privileged process. In all, Google released patches for at least 34 security flaws, although some of the vulnerabilities were present only in devices that use components from chipmaker Qualcomm.

Anyone with an Android device should check whether fixes are available for it. Methods differ by device model, but a common one is to check the notification shade or go to Settings > Security > Security update. Unfortunately, patches aren’t available for many devices.
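
For readers who would rather check from a workstation, the sketch below assumes the Android Debug Bridge (adb) is installed and a single device is connected with USB debugging enabled; it reads the device's reported Android version and security patch level from standard build properties.

```python
# Sketch: read the Android version and security patch level over adb.
# Assumes adb is on PATH and exactly one device is connected with USB
# debugging enabled; the property names are standard Android build properties.
import subprocess

def getprop(name: str) -> str:
    out = subprocess.run(["adb", "shell", "getprop", name],
                         capture_output=True, text=True, check=True)
    return out.stdout.strip()

if __name__ == "__main__":
    print("Android version:     ", getprop("ro.build.version.release"))
    print("Security patch level:", getprop("ro.build.version.security_patch"))
    # A patch level of 2020-06-05 or later includes the fixes from the
    # June 2020 bulletin discussed here.
```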

Two vulnerabilities ranked as critical in Google’s June security bulletin are indexed as CVE-2020-0117 and CVE-2020-8597. They’re among four flaws in the Android System component; the other two are rated high severity. The critical vulnerabilities reside in Android versions 8 through the most recent release of 11.

“These vulnerabilities could be exploited through multiple methods such as email, web browsing, and MMS when processing media files,” an advisory from the Department of Homeland Security-funded Multi-State-Information Sharing and Analysis Center said. “Depending on the privileges associated with the application, an attacker could then install programs; view, change, or delete data; or create new accounts with full user rights.”

Vulnerabilities with a severity rating of high affected the Android media framework, the Android framework, and the Android kernel. Other vulnerabilities were contained in Qualcomm components shipped in devices. The two Qualcomm-specific critical flaws reside in closed-source components; the remaining Qualcomm flaws were rated as high severity.