Research Highlights Danger of Insecure Firmware in Line of Coffee Machines
Avast infected the Smarter Coffee machine with ransomware. (Source: Avast)
An internet-connected coffee machine is the latest IoT device to show security problems. The security firm Avast infected the Smarter Coffee machine with ransomware that causes uncontrollable spinning of its grinder and dispensing of hot water. The only option to stop it? Unplug the machine.
The research reinforces longstanding warnings about IoT: Device manufacturers pay little attention to security, rush devices to market, and may not provide support for very long (see Not the Cat’s Meow: Petnet and the Perils of Consumer IoT).
Avast Senior Researcher Martin Hron describes his reverse-engineering adventure with the second version of the Smarter Coffee machine, made by Smarter Applications Ltd.
Hron found that he could tamper with the firmware without touching the actual device. He could also rig the device to cause its grinder to run uncontrollably and dispense hot water when a user tries to connect it to the home network.
The issues stem from the device’s firmware, which can be replaced without any authorization or authentication. The bug, CVE-2020-15501, affects Smarter Coffee machines before the second generation, which are no longer produced.
“Even if we were to contact the vendor, we would likely get no response,” Hron writes in a blog post. “According to their website, this generation of coffee maker is no longer supported. So users should not expect a fix.”
Smarter Applications did not immediately reply to a request for comment.
Firmware Not Encrypted
The Smarter Coffee machines come with a mobile app that can be used to remotely trigger the process to make coffee.
That convenience is where the problems start. The Smarter Coffee machine creates its own local Wi-Fi network using its ESP8266 chip, and communication over that network uses a very simple protocol, according to Avast.
“As expected, it’s a simple binary protocol with hardly any encryption, authorization or authentication,” Hron writes. “Communication with machines takes place on TCP port 2081.”
Anyone with access to the network can communicate with the Smarter Coffee machine; no security mechanism prevents anyone who can reach the machine’s IP address from communicating with it, Hron writes. What’s more, anyone within range of the machine can talk to it even before it has been connected to the local Wi-Fi network.
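This is easy to illustrate with a short sketch. The “device” below is a stand-in running on localhost (the command bytes are invented for illustration); per Avast, the real machine listens on TCP port 2081 and would accept a connection from anyone in range just as readily:

```python
import socket
import threading

def fake_device(server_sock):
    # Stand-in for the coffee maker: accepts any connection, performs no
    # authentication whatsoever, and acts on whatever bytes arrive.
    conn, _ = server_sock.accept()
    command = conn.recv(16)
    conn.sendall(b"OK " + command)   # no credential check before obeying
    conn.close()

# The real machine listens on TCP port 2081; an ephemeral local port stands in.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
t = threading.Thread(target=fake_device, args=(server,))
t.start()

# "Attacker" on the same network: just open a socket and send command bytes.
client = socket.socket()
client.connect(server.getsockname())
client.sendall(b"\x01BREW")          # hypothetical command bytes
reply = client.recv(32)
print(reply)                         # → b'OK \x01BREW'
client.close()
t.join()
server.close()
```

With no handshake, password, or pairing step, “can reach the port” is the only precondition for control.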
Hron found that the firmware is stored within the mobile app. The firmware isn’t encrypted, and the plaintext firmware is uploaded to the flash memory of the device, he says.
Hron found the firmware for the coffee machine as well as another Smarter product within the mobile app. (Source: Avast)
“What is so surprising here is that the update procedure doesn’t use any encryption or signature of the firmware,” Hron writes. “Everything is transmitted in plaintext over an unsecured Wi-Fi connection. The only check is CRC at the end.”
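Hron doesn’t specify which CRC variant the updater uses, so zlib’s CRC-32 stands in below; the point is that a CRC only catches accidental corruption, because an attacker can simply recompute it over a tampered image:

```python
import zlib

# Hypothetical firmware image; the exact CRC variant Smarter uses isn't
# stated, so zlib's CRC-32 stands in here.
firmware = b"ORIGINAL FIRMWARE CODE"
original_crc = zlib.crc32(firmware)

# An attacker who modifies the image just recomputes the checksum --
# a CRC detects accidental corruption, not deliberate tampering.
tampered = firmware + b" + MALICIOUS PAYLOAD"
tampered_crc = zlib.crc32(tampered)

# Both images "pass" the only check the device performs:
print(zlib.crc32(firmware) == original_crc)   # True
print(zlib.crc32(tampered) == tampered_crc)   # True
```

This is why a cryptographic signature, verified against a key the attacker doesn’t hold, is generally considered the minimum bar for firmware updates.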
Avast’s initial goal was to infect the coffee machine with a cryptocurrency miner. But its processor, an ARM Cortex-M0 running at 8MHz, makes successful mining of virtual currency unlikely.
“We decided to turn the coffee maker into a ransomware machine where a certain trigger initiates the ransom message,” Hron writes. “It looks completely innocent and operates normally until the trigger is hit by an attacker, making it even more surprising.”
Unused memory at the end of the firmware provided a place to put malicious code. Using an ARM assembler, the researchers wrote ransomware that would be triggered when someone tries to connect a machine to the local network.
The Smarter Coffee machine then delivers a surprise. Hot water begins spewing from the machine and the bean grinder starts turning. It also begins beeping while flashing an image of a devil’s head, and a bit.ly URL leads to the ransom message, according to a demo video in the blog post.
A message no coffee machine should ever display (Source: Avast)
“We thought this would be enough to freak any user out and make it a very stressful experience,” Hron writes. “The only thing the user can do at that point is unplug the coffee maker from the power socket.”
There are a couple of minor differences between Smarter Coffee machine versions. Older firmware doesn’t require any interaction from the user to update; newer versions require someone to push the start button. But that barrier could likely be overcome with social engineering, Hron writes.
Hron writes that devices such as this coffee machine may still work once they’re no longer getting security updates, but there are long-term impacts (see Smart Devices: How Long Will Security Updates Be Issued?).
“We are creating an army of abandoned vulnerable devices that can be misused for nefarious purposes such as network breaches, data leaks, ransomware attack and DDoS,” he writes.
No one likes a heart-stopping AWS bill shock, so now there’s a machine learning tool to help detect cost anomalies
AWS has introduced Cost Anomaly Detection, a new feature now in beta driven by machine learning that pledges to notify admins of “unexpected or unusual spend”.
Bill shock is a problem suffered, on occasion, by small and big AWS customers alike. At the small end, there are cases like that of Chris Short, who used AWS for the Content Delivery Network (CDN) scaling his website at a cost of around $23.00 per month. One morning he woke up to a bill of $2,657.68, thanks to sharing a 13.7GB file that proved unexpectedly popular. At the other end, the typical organisation is over its cloud budget by an average of 23 per cent, as we reported here.
While puzzling out what specific cloud services will cost can be a challenge, the big cloud providers are pretty good at showing you where your money has gone with them individually. AWS has a Cost Management service which includes reports, budgets and recommendations, to which the company is now adding Cost Anomaly Detection.
This is configured by adding a Cost Monitor to an AWS account. There are four types of monitor. The generic AWS Services monitor is fully automated. The Linked Account monitor is specific to other AWS accounts linked to an organisation. A Cost Category monitor evaluates spend for a specific category as labelled by the administrator. Finally, a similar Cost Allocation Tag monitor evaluates spend for services with a specific cost tag.
Why cloud costs get out of control: Too much lift and shift, and pricing that is ‘screwy and broken’
Once the type of Cost Monitor is selected, admins set an alert threshold, which is the minimum size of anomaly that will trigger a notification; the frequency of alerts, from individual alerts as they arise to daily or weekly summaries; and a Simple Notification Service (SNS) topic through which individual alerts are sent. Daily or weekly summaries are sent by email to up to 10 recipients, but admins who opt for individual alerts must go through SNS. This is inexpensive: the first 1,000 emails, or 100 SMS messages, per month are free. Each AWS account can create one AWS Services monitor and up to 101 additional cost monitors.
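The alert-threshold rule itself is simple to illustrate. In this toy sketch (figures invented, apart from Short’s $2,657.68 day; AWS’s real detector uses a machine learning baseline rather than a fixed one), only anomalies at least as large as the threshold get flagged:

```python
def flag_anomalies(daily_spend, baseline, threshold):
    """Flag days whose spend exceeds the baseline by at least `threshold`.

    A toy stand-in for the alerting rule: anomalies smaller than the
    configured threshold are simply not notified.
    """
    return [(day, spend)
            for day, spend in daily_spend
            if spend - baseline >= threshold]

spend = [("Mon", 24.10), ("Tue", 23.80), ("Wed", 2657.68), ("Thu", 25.00)]
print(flag_anomalies(spend, baseline=23.00, threshold=100.0))
# → [('Wed', 2657.68)]
```

Set the threshold too high and a slow, steady overspend never triggers; too low and every blip pages someone — which is the trade-off the ML baseline is meant to ease.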
How good is the anomaly detection? That is the key question, and one only customers will be able to answer. The detection engine runs around three times a day, after billing data is processed, which means there is some (potentially expensive) delay. It is driven by a machine learning model, indicating scope for both under- and over-reporting, but the service is expected to improve as the model is refined.
People may wonder whether it is really in the interests of AWS to provide a service that helps customers spend less money. It is true that, like every company, the cloud providers are always trying to persuade their users to adopt new services or add premium features. That said, none of the experts we have spoken to think that there is deliberate confusion marketing – where pricing is deliberately complex so that customers spend more than they intend – or that providers like having their users waste money. The counter argument holds more sway, that the providers want satisfied customers. The current high demand for cloud services makes this position an easy one to hold.
Anomaly detection is not the complete answer to overpaying. After all, if an organisation paid more than it needed to last month, it is not an anomaly if it does so again. ®
Despite rolling a homegrown translation app with iOS 14, Apple resorts to freebie tool for Dutch Ts-and-Cs waffle
Apple is apparently so skint that it has had to resort to freebie versions of machine-based translation services for its Dutch legalese.
Spotted by a Register reader browsing the small print behind the company’s services, the text “Vertaald met www.DeepL.com/Translator (gratis versie)” can be found lurking just above the “DEFINITIE VAN APPLE” section in the terms-and-conditions doc.
For those not versed in Dutch, that means something like “Translated with www.DeepL.com/Translator (free version)”.
We’d give you our own definition of Apple, but that’s possibly how we wound up on the company’s naughty step.
DeepL Translator [PDF] was first released in August 2017, with the free service (presumably the one that has found favour within Cupertino) being supplemented by DeepL Pro in March 2018. An application to integrate with Windows and macOS arrived in September 2019.
Sadly, it appears that for Apple, maker of the $999 iPhone 11 Pro and $5,999 Mac Pro, the €39.99 per month/per user of DeepL’s “Ultimate” tier subscription is just a little too much. Even the €5.99 tier is a step too far.
Odd, because by opting for the freebie incarnation, Apple has elected to skip DeepL’s maximum data security level for its translation (with end-to-end encryption and immediate text deletion). Stranger still, the recent iOS 14 update added an Apple-built translation app to iPhones. Maybe they should have used that.
The Register contacted Apple to find out why a link to a free translation service had found its way into its terms and conditions. We’re shocked, shocked, to report that the company has yet to respond.
DeepL has also not responded to our request for comment.
A quick look at old versions of the page shows that the translation credit appeared relatively recently, presumably when the likes of Fitness+ were crowbarred in. Apple Fitness+ costs $9.99 a month, a little more than DeepL asks per month for its basic tier.
It could be worse. The translation-based snafu could have ended up on a road sign.
As for our reader? He noted: “The translation is rather poor, I didn’t expect this kind of work from Apple.”
If the quality of recent software from the fruity branded biz is anything to go by, this is exactly the sort of thing we’ve come to expect. ®
Uber allowed to continue operating in English capital after winning appeal against Transport for London
Uber has won an appeal against Transport for London’s decision not to renew the ride-hailing app biz’s licence for the English capital, ending a three-year tussle between the pair.
The ruling is the culmination of a hearing at Westminster Magistrates’ Court that ran a fortnight ago from 14 to 17 September, with deputy chief magistrate Tan Ikram declaring today: “Despite [Uber’s] historical failings, I find them, now, to be a fit and proper person to hold a London PHV (private hire vehicle) operator’s licence.”
He added that he did, however, “wish to hear from the advocates on conditions and on my determination as to the length of a licence”.
Uber had itself wanted a five-year licence but, as things stand, has an 18-month licence.
The company has yet to comment publicly.
The head-to-head between Uber and TfL kicked off in September 2017 when the authority said private operators needed to meet “rigorous regulations” that are “designed to ensure passenger safety”, and Uber had fallen short of these.
This related to a hole in Uber’s systems that allowed unauthorised drivers to upload their pictures to official drivers’ accounts. Thousands of passenger journeys were undertaken in which the passenger thought their driver was someone else and that driver was not insured. Trips were also made by drivers TfL had previously banned as another hole in the system allowed them to create new accounts with Uber.
The way enhanced Disclosure and Barring Service checks were carried out had also concerned TfL, and the authority claimed Uber had failed to detail its use of Greyball software that could block regulatory bodies from getting full access to its app.
Tim Ward, QC for Uber London Ltd, said during the court proceedings that the company had made technical improvements, including to its governance and document systems.
Judge Ikram said that Uber had presented “no real challenge to the facts as presented by TfL” but he reckoned Uber “challenged the suggestion that breaches were not taken seriously and any suggestion of bad faith on their part”.
He added that in respect of document and insurance fraud, “Uber now seem to be at the forefront of tackling an industry-wide challenge.”
Unsurprisingly, the Licensed Taxi Drivers Association (LTDA) branded today’s decision a “disaster”.
“Uber has demonstrated time and time again that it simply can’t be trusted to put the safety of Londoners, its driver and other road users above profit. Sadly, it seems that Uber is too big to regulate effectively, but too big to fail.
“By holding up their hands and finally accepting some responsibility, Uber has managed to pull the wool over the eyes of the court and create the false impressions that it has changed for the better. A leopard doesn’t change its spots and we are clear that Uber’s underlying culture remains as toxic as it has ever been.”
The App Drivers and Couriers Union (ADCU) claimed the decision had secured the jobs of 43,000 drivers employed by Uber, but it wants to see Uber and TfL learn lessons from the case.
“Uber drivers pay the company 25 per cent of every fare and in return are entitled to expect the company to operate the business in a safe and compliant manner,” said Yaseen Aslam, ADCU president. “Instead Uber has put profit first and placed the livelihood of 43,000 workers at risk.
“It is time for the Mayor of London to break up the Uber monopoly by limiting the number of drivers allowed to register on the Uber platform. The reduced scale will give both Uber and Transport for London the breathing space necessary to ensure all compliance obligations – including workers’ rights – are met in the future.”
Following TfL’s 2017 rejection of Uber’s operator’s licence applications, the biz was granted a 15-month provisional licence in June 2018, and a further two months were granted in September 2019 before TfL decided in November last year that Uber shouldn’t get that licence back.
Updated on 29 September at 12.13 BST to add:
Following publication of this article, Uber sent us a statement:
Jamie Heywood, Uber regional general manager for Northern & Eastern Europe, said: “This decision is a recognition of Uber’s commitment to safety and we will continue to work constructively with TfL. There is nothing more important than the safety of the people who use the Uber app as we work together to keep London moving.” ®
On Executive Order 12333
Mark Jaycox has written a long article on the US Executive Order 12333: “No Oversight, No Limits, No Worries: A Primer on Presidential Spying and Executive Order 12,333”:
Abstract: Executive Order 12,333 (“EO 12333”) is a 1980s Executive Order signed by President Ronald Reagan that, among other things, establishes an overarching policy framework for the Executive Branch’s spying powers. Although electronic surveillance programs authorized by EO 12333 generally target foreign intelligence from foreign targets, its permissive targeting standards allow for the substantial collection of Americans’ communications containing little to no foreign intelligence value. This fact alone necessitates closer inspection.
This working draft conducts such an inspection by collecting and coalescing the various declassifications, disclosures, legislative investigations, and news reports concerning EO 12333 electronic surveillance programs in order to provide a better understanding of how the Executive Branch implements the order and the surveillance programs it authorizes. The Article pays particular attention to EO 12333’s designation of the National Security Agency as primarily responsible for conducting signals intelligence, which includes the installation of malware, the analysis of internet traffic traversing the telecommunications backbone, the hacking of U.S.-based companies like Yahoo and Google, and the analysis of Americans’ communications, contact lists, text messages, geolocation data, and other information.
After exploring the electronic surveillance programs authorized by EO 12333, this Article proposes reforms to the existing policy framework, including narrowing the aperture of authorized surveillance, increasing privacy standards for the retention of data, and requiring greater transparency and accountability.
Feds warn foreign disinformation will be spamming US voters well after the November election to sow discord and doubt
In Brief Foreign-backed disinformation campaigns will spread fake news about the results of the upcoming US election in an effort to sow doubt and outrage among the American public.
This is according to an alert issued by the FBI and Department of Homeland Security this week. The two agencies believe that in the immediate aftermath of the presidential election on November 3, Americans will be bombarded with false stories about the vote tally, reports of voter fraud, and other issues that would stoke division as the country awaits official election results – a process that could take weeks.
Unlike the 2016 election, when most of the disinformation was sprayed out in the run-up to the vote, this cycle will even aim to make people question whether the results of the vote are valid, the alert states. People are urged to check their facts carefully with multiple sources and on official government websites.
“The increased use of mail-in ballots due to COVID-19 protocols could leave officials with incomplete results on election night,” the agencies warned.
“Foreign actors and cybercriminals could exploit the time required to certify and announce elections’ results by disseminating disinformation that includes reports of voter suppression, cyberattacks targeting election infrastructure, voter or ballot fraud, and other problems intended to convince the public of the elections’ illegitimacy.”
ATM skimming crew busted
The DOJ has indicted nine people it says operated a string of ATM skimmer operations netting more than $100,000 in theft.
The crew, it is said, placed “skimmer” devices over the card readers of ATMs and collected the card information of people who used the kiosks. They would then yank the skimmers and encode the data onto blank cards which they could use or sell to others.
This was done between March 2019 and June 2020 across a string of states in the southeastern US: Florida, Louisiana, Georgia, and Mississippi, as well as in New York state.
Each of the nine have now been indicted on one federal count of conspiracy to commit device fraud. Police have also reportedly arrested other suspected members of the gang.
You’re never going to believe this, but Cisco has patched some bugs
The latest patch bundle from Switchzilla is a hefty one, containing a total of 42 CVE-listed vulnerabilities across various networking gear.
Fortunately, none of the fixes are for issues deemed to be critical problems, but 29 are considered high risk and should be patched as soon as possible.
These include a firewall denial of service bug, a code execution flaw, and an arbitrary file overwrite in IOS XE appliances, two denial of service bugs in Aironet Access Points, and denial of service in the Catalyst 9200 series switches.
Teen hacker bags $25K payout for Instagram bug find
A 14-year-old Brazilian developer has netted himself a nice payday from Facebook, thanks to a critical bug find in Instagram.
Andres Alonso says that he stumbled upon the cross-site scripting flaw by accident while he was working on his own mobile app.
While wading through some integration code with Instagram’s AR filter creator, he figured out that someone could redirect the URL a filter links to without the user getting any notification. At the time, though, he couldn’t quite get a proof-of-concept to work and show it was a complete XSS vulnerability.
Still, Alonso reported the issue to Facebook, whose security team confirmed that it was indeed a bug that would allow for dangerous cross-site scripting, and decided to award the teen a tidy $25,000 bounty. Facebook’s crew said the dodgy code could be used in an XSS attack against Instagram but reckoned it hadn’t been used in the wild.
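Instagram’s actual filter code isn’t public, but the general class of bug — a redirect target that nobody validates — can be sketched generically. A `javascript:` URL is one classic way an unvalidated redirect becomes script execution:

```python
from urllib.parse import urlparse

def is_safe_redirect(url):
    # A redirect target should be restricted to http(s); otherwise a
    # "javascript:" URL can turn an unvalidated redirect into script
    # execution in the victim's browser -- the essence of this class of XSS.
    return urlparse(url).scheme in ("http", "https")

print(is_safe_redirect("https://example.com/filter-target"))   # True
print(is_safe_redirect("javascript:alert(document.cookie)"))   # False
```

Allow-listing schemes (rather than block-listing known-bad ones) is the usual defensive choice, since attackers keep finding new dangerous schemes.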
“I have to thank Facebook for making a little push in my report escalating to an XSS,” he said.
It’s 2020, and we’re still trying Silk Road cases
It has been more than five years since Silk Road boss Ross Ulbricht was sent to prison for a double life sentence plus 40 years without the possibility of parole, and US authorities are still trying people tied to the notorious drugs market.
This time, it’s programmer Michael Weigand, who pled guilty to lying to federal investigators about his role in the market.
Specifically, Weigand admitted that he was actually involved in helping suss out potential security holes in the site and that he worked with both Ulbricht and Silk Road advisor Roger Thomas Clark.
Additionally, Weigand admitted to flying to London to meet one of Clark’s friends under the guise of starting a marijuana seed business, but instead going to Clark’s London residence to destroy evidence.
“When Weigand was questioned by law enforcement in 2019, he falsely claimed not to have done anything at all for Silk Road,” said US Attorney Audrey Strauss. “For his various false statements, Weigand now faces potential prison time.” ®
Around 40 per cent of staff in British and American corporations have access to sensitive data that they don’t need to complete their jobs, according to recent research.
In a survey commissioned by IT security firm Forcepoint of just under 900 IT professionals, 40 per cent of commercial sector respondents and 36 per cent working in the public sector said they had privileged access to sensitive data through work.
Worryingly, of that number, about a third again (38 per cent public sector and 36 per cent private) said they had access privileges despite not needing them. Overall, out of more than 1,000 respondents, just 14 per cent from the private sector thought their org was fully aware of who had the keys to their employers’ digital kingdoms.
Carried out by the US Ponemon Institute, a research agency, the survey also found that about 23 per cent of IT pros across the board reckoned that privileged access to data and systems was handed out willy-nilly, or, as Forcepoint put it in a statement, “for no apparent reason”.
Access management is a critical topic for IT security bods, especially as COVID-19-induced remote working introduces challenges for the monitoring of data access and intra-org flows.
In a finding bound to shore up frontline workers’ opinions of each other, fully half of respondents (49 per cent public sector, 51 per cent private) expressed the view that users with elevated access privs would browse through data “because of their curiosity”, while just over 40 per cent thought their co-workers could be “pressured” to share login credentials.
More than half thought incident-based security tools yielded false positives as well as more data “than can be reviewed in a timely fashion”, revealing that workers think gotta-log-em-all security tools may be more of an obstacle to finding and plugging system breaches – or malicious people exfiltrating valuable data.
“To effectively understand the risk posed by insiders, it takes more than simply looking at logs and configuration changes,” said Nico Popp, chief product officer at Forcepoint, in a canned statement.
“Incident-based security tools yield too many false positives; instead IT leaders need to be able to correlate activity from multiple sources such as trouble tickets and badge records, review keystroke archives and video, and leverage user and entity behaviour analytics tools. Unfortunately, these are all areas where many organizations fall short.”
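The kind of cross-source correlation Popp describes can be sketched in miniature. Here a login while the user’s badge record shows them off-site gets flagged; the names and records are hypothetical, and real user-and-entity behaviour analytics correlates far more sources (tickets, keystroke archives, video):

```python
# Toy cross-source correlation: flag logins that occur while the user's
# badge record shows them off-site. All names and records are invented.
badge_on_site = {"alice": True, "bob": False}
logins = [("alice", "10:02"), ("bob", "10:17")]

suspicious = [(user, when) for user, when in logins
              if not badge_on_site.get(user, False)]
print(suspicious)  # → [('bob', '10:17')]
```

The value comes from the join: neither the login log nor the badge log looks anomalous on its own.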
The survey took responses from 755 UK and 1,128 American workers in the public and private sectors. ®
A month can be a long time in space exploration. Since we last spoke to Rocket Lab’s Peter Beck, scientists have published results that hint at life in the clouds of Venus, while Beck’s rocketeers popped a Photon demonstrator into Earth orbit.
CEO Beck talked with us about his plans for a privately funded Venus mission a month ago, but was careful with his words, laughing “otherwise I get headlines like ‘Pete’s searching for aliens'”.
The Royal Astronomical Society press briefing a few short weeks later generated far more lurid copy about what might be floating about in the atmosphere of Venus. Beck’s passion for the mission remains, however, undimmed.
With Flight 14 safely away, replete with a surprise demonstrator of Rocket Lab’s Photon spacecraft, Beck was happy to go into detail on what he planned to launch to Venus in May 2023.
The mission profile is deceptively simple. The interplanetary version of the Photon will undertake a voyage of around 160-180 days to Venus. The probe will detach and, as Photon performs a flyby of the planet, plunge into the atmosphere at approximately 11 kilometres per second, transmitting data as it goes.
This is where the real fun starts.
“We get around 300 seconds of really interesting time in the region that everybody’s interested in,” explained Beck, “and the real challenge right now is the instrument. We’re going there to look for signs of life, and in order to design the instrument you have to make some assumptions about what that life is.”
Beck’s 27kg probe will indeed carry a single instrument weighing in at around 3kg, rather than the multitude of gadgets seen on other spacecraft. Instead of a single, big mission every decade or so, Beck plans to send a multitude of lighter, cheaper spacecraft, iterating the payload as results come in.
“It’s a different way of doing planetary science,” he asserted. “The approach I personally prefer is: ‘Let’s do a bunch of missions for tens of millions of dollars, and let’s do them really, really regularly, and frequently’.
“The ability to kind of increment the science is what’s really interesting. You can go there with a hypothesis, test your hypothesis, and go, ‘Well, that was wrong,’ and go back again, with a new hypothesis and iterate.”
Beck speculated that it might take 10 missions or more to conclusively prove things one way or another: “If we follow the traditional model… I haven’t got 100 years to wait, I want that question answered now!”
Compared with the billion-dollar missions often seen heading for the stars, the numbers look good. Rocket Lab is charging NASA $10m for the lunar version of the Photon due to launch next year, including the Electron on which it will ride. The $30m-$50m for the more complex mission to Venus is, according to Beck, “a bargain”.
Of course, he would say that.
Beck expects to fly at least one more Photon before NASA’s lunar mission (“they’re just a drop-in for the kick stage,” he said, “as long as there is mass margin on the flight”) and more are planned in order to refine the design ahead of that first mission to Venus.
Having successfully returned to flight, Rocket Lab has a busy few months coming up. Its US launchpad will see its first mission once the paperwork for the abort system is complete, and the second New Zealand facility is nearing completion. Additional launchpads might feature in the future if the company is asked for inclinations below 37 degrees.
It also hopes to see the Electron of Flight 17 return to Earth safely via parachute.
But looking for evidence of life on Venus is the ultimate name of the game. “What galvanised my interest in space,” said Beck, “is: ‘Are we alone? Is life in the universe unique, or is it prolific?’
“The evidence we have now is that we are the only life in the universe. If you can find life [or evidence of life] on Venus then you fundamentally change that data point.
“And if you have the capability to go and try and answer that question, it’s just totally unacceptable to not try.” ®
A cryptocurrency exchange called KuCoin says it has been cracked, with over $100m of assets misappropriated.
The Register last covered KuCoin when it was mentioned by the Bitcoin-burgling cybercrooks who hacked a bunch of prominent Twitter users.
The Seychelles-based outfit, founded in 2017, proudly boasts of its venture capital backers who clearly admire its services facilitating trading of “numerous digital assets and cryptocurrencies”. And on Saturday it advised users that it “detected some large withdrawals since September 26, 2020 at 03:05:37 (UTC+8)” and that an internal security audit revealed “part of Bitcoin, ERC-20 and other tokens in KuCoin’s hot wallets were transferred out of the exchange, which contained few parts of our total assets holdings. The assets in our cold wallets are safe and unharmed, and hot wallets have been re-deployed.”
The company promised that any losses would be covered by insurance, but advised that deposit and withdrawal services would be suspended pending a security review.
A later update included an FAQ in which customers asked why some of the withdrawals continued even after the first incident notification was posted. KuCoin assured customers it conducted those transactions itself and advised that restoration of withdrawal functions could take a week. In the volatile world of cryptocurrency, a week can be the difference between a win and a bust.
A Monday update, the latest, revealed the scale of the hack as KuCoin identified over $130m of assets involved. It also described work with a number of crypto players to identify and freeze suspicious transactions, and even listed some addresses suspected of involvement in the heist.
“KuCoin has been in touch with a growing number of industry partners to take tangible actions, thanks to all of you for your support!” the statement concluded.
However, the latest statement does not offer any further information on the cause of the incident, remediation steps, or restoration times.
So there you have it, dear reader: a venture-backed startup, based in a tax haven, demonstrating the future of money in all its glory.
And in the background, China deciding that its own digital currency will be run only by its biggest banks with new payment players like Alibaba not allowed anywhere near its innermost workings. ®
Who, Me? The Register’s Who, Me? column dips a toe into the world of high finance and iffy numbers as a reader realises that freebies aren’t always A Good Thing.
“Tom”, for that is certainly not his name, was toiling away in the machinery of a large IT consultancy back at the start of the century, dealing with data migration and specialising in SAP systems.
“The company’s Random Project Allocator™ assigned me to a project at an internet provider,” he told us. For the purposes of this story, and for reasons that will become very clear, we’re going to call them “NaughtyCo”.
“The project,” Tom went on, “was to design, build, test, and install a new non-SAP-based billing and customer service system.”
A classic “large IT consultancy” project for sure, although this was the first time Tom and his team had laid hands on it. Another company had had an earlier crack at it, but the inevitable balls-up had resulted in a pulled go-live date.
“We had been called in to sort out the testing and data migration to get it over the line,” he explained. The heroes of the hour.
Tom was tasked with leading the data migration, performing a trial cutover to make sure things would work, and then pressing the big red button for the production environment. The data would then be reconciled.
Those expecting the usual oopsie involving mixing up test and production can relax. This wasn’t Tom’s first rodeo. He did, however, find something decidedly whiffy in NaughtyCo’s data.
Having worked out what the migration involved, Tom was dismayed to find that the previous effort was a mess of manual workarounds and what he charitably called “Excel-based wizardry” lurking behind the scenes. It was little wonder that things had been halted, and he and his team set about picking up the pieces.
It took a while, but eventually trials of the migration could be run. Tom produced a weekly report for management showing how many customers his system had successfully moved across, and how many had failed. The total came to several hundred thousand.
Unfortunately, Tom’s total was light by tens of thousands, according to a worried project manager. His figures must be wrong.
The customer count was significant since it was reported to the stock exchange and affected the value of the company. Perplexed, Tom went back to look at his figures in search of the missing customers.
“I checked the numbers and came to some interesting conclusions,” he said.
“Depending on your point of view we were both right. My figures represented the number of customers that paid money to NaughtyCo; the figure my boss was using represented the number of internet accounts. The difference was a surprisingly large number of staff accounts, test accounts, free accounts and inactive accounts.”
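The two "right answers" Tom describes amount to counting different things from the same data. A toy sketch (with made-up account records, purely illustrative) shows how paying customers and total accounts diverge:

```python
# Illustrative sketch with hypothetical data: Tom counted paying customers,
# his boss counted every account, including staff, test, free and inactive ones.
accounts = [
    {"id": 1, "type": "paying", "active": True},
    {"id": 2, "type": "staff", "active": True},
    {"id": 3, "type": "test", "active": True},
    {"id": 4, "type": "free", "active": True},
    {"id": 5, "type": "paying", "active": False},
]

total_accounts = len(accounts)  # the figure reported to the stock exchange
paying_customers = sum(
    1 for a in accounts if a["type"] == "paying" and a["active"]
)  # the figure Tom's migration reconciled
discrepancy = total_accounts - paying_customers

print(total_accounts, paying_customers, discrepancy)  # 5 1 4
```

At NaughtyCo's scale, that gap ran to tens of thousands of accounts.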
Well, this was a bit awkward. Feeding false information to the stock exchange smells a bit like fraud. Fortunately for Tom, “I had exposed the situation and not caused it.”
His boss was not happy and swore him to secrecy while she headed upstairs to discuss matters with the bigger bosses.
A plan was hatched. The company would spend the next few months gradually aligning the numbers reported to the stock exchange to match reality, but carefully so as to avoid frightening the horses. The tens of thousands of mystery accounts were shown the sharp end of the axe in the meantime.
As for Tom, “when we cutover to production the migration went smoothly and there were no more surprises regarding the number of customers.
“It just goes to prove the old adage that there are lies, damn lies, and statistics.”
Ever discovered an inconvenient truth? Did you nudge it under the carpet or sling it from the rooftops? Share all with an email to Who, Me? ®
Open-source software advocate Eric S Raymond has penned an argument that the triumph of Linux on the desktop is imminent because Microsoft will soon tire of Windows.
Raymond’s argument, posted to his blog late last week, kicked off with some frank admiration for Windows Subsystem For Linux, the tech that lets Linux binaries run under Windows. He noted that Microsoft is making kernel contributions just to improve WSL.
Raymond is also an admirer of software called “Proton”, a compatibility layer that allows Windows games distributed by Steam to run under Linux.
Raymond rated Proton as “not perfect yet, but it’s getting close”.
His next item of note was Microsoft’s imminent release of its Edge browser for Linux.
That collection of ingredients, he argued, will collide with the fact that Azure is now Microsoft’s cash cow while the declining PC market means that over time Microsoft will be less inclined to invest in Windows 10.
“Looked at from the point of view of cold-blooded profit maximization, this means continuing Windows development is a thing Microsoft would prefer not to be doing,” he wrote. “Instead, they’d do better putting more capital investment into Azure – which is widely rumored to be running more Linux instances than Windows these days.”
Raymond next imagined he was a Microsoft strategist seeking maximum future profits and came to the following conclusion:
Over time, Raymond reckoned, Windows emulation would only be present to handle “games and other legacy third-party software”. And eventually Microsoft will get so focused on Azure, and so uninterested in spending money on Windows, that it will ditch even the Windows emulation layer.
“Third-party software providers stop shipping Windows binaries in favor of ELF binaries with a pure Linux API … and Linux finally wins the desktop wars, not by displacing Windows but by co-opting it.”
The end. ®
Webcast Whether you’re into cybersecurity or application development, you probably also like lists, which means you probably love the OWASP Top 10.
The list was first posted by the security non-profit back in 2003, and has been updated every few years since, securing its reputation as the first step for developers towards more secure coding.
The latest update is due this autumn, and needless to say, it comes at a time of extraordinary change, both in the world of infosec and the wider world of tech and business.
Organizations are grappling with the existing challenges of digital transformation, the shift to the cloud and the continued industrialisation of cybercriminality and, more recently, the disruption caused by the sudden migration of large parts of the economy to home working.
Throw in the explosion in the use of open source components, the role of containers, and new approaches to integrating dev, sec and ops, and you’ve got a scarily wide attack surface for hackers to play with. So, before the results are read out, you really should join us on September 29, at 11am UK time for a webcast brought to you by F5 and dedicated to the OWASP Top 10.
El Reg’s broadcasting maestro Tim Phillips will be joined by F5’s senior threat research evangelist David Warburton, and together they’ll be chewing over the context of this year’s upcoming list.
Yes, there will be insight on what changes to expect and whether there’s likely a new number one, or whether injection flaws are set to be the cybersec equivalent of Bryan Adams, Queen…or Drake.
But Tim and David will also dig deeper into why the OWASP Top 10 remains essential to maintaining your security posture, and how you can use it effectively to stay ahead of the curve – and the hackers and cybercriminals looking to exploit that same list of vulns.
They’ll also be exploring what the changing IT landscape – particularly that open source and cloud native shift – means for both the Top 10 and your own efforts to protect your organisation.
This all happens right here on El Reg. All you need to do is register here, and we’ll serve up Tim and David, to a screen near you, whether it’s at work, at home or somewhere in between.
Maggie Jauregui’s introduction to hardware security is a fun story: she figured out how to wirelessly spark, smoke, and permanently disable GFCIs (Ground Fault Circuit Interrupters – the two-button protections on plugs/sockets that prevent you from accidentally electrocuting yourself with your hair dryer) with a walkie-talkie.
“I could also do this across walls with a directional antenna, and this also worked on AFCIs (Arc Fault Circuit Interrupters – part of the circuit breaker box in your garage), which meant you could drive by someone’s home and potentially turn off their lights,” she told Help Net Security.
Jauregui says she’s always been interested in hardware. She started out as an electrical engineering major but switched to computer science halfway through university, and ultimately applied to be an Intel intern in Mexico.
“After attending my first hackathon — where I actually met my husband — I’ve continued to explore my love for all things hardware, firmware, and security to this day, and have been a part of various research teams at Intel ever since,” she added. (She’s currently a member of the corporation’s Platform Armoring and Resilience team.)
What do we talk about when we talk about hardware security?
Computer systems – a category that these days includes everything from phones and laptops to wireless thermostats and other “smart” home appliances – combine many hardware components (a processor, memory, I/O peripherals, etc.) that, together with firmware and software, deliver services and enable the connected, data-centric world we live in.
Hardware-based security typically refers to the defenses that help protect against vulnerabilities targeting these devices, and its main focus is to make sure that the different hardware components working together are architected, implemented, and configured correctly.
“Hardware can sometimes be considered its own level of security because it often requires physical presence in order to access or modify specific fuses, jumpers, locks, etc,” Jauregui explained. This is why hardware is also used as a root of trust.
Hardware security challenges
But every hardware device has firmware – a tempting attack vector for many hackers. And though the industry has been making advancements in firmware security solutions, many organizations are still challenged by it and don’t know how to adequately protect their systems and data, she says.
She advises IT security specialists to be aware of firmware’s importance as an asset to their organization’s threat model, to make sure that the firmware on company devices is consistently updated, and to set up automated security validation tools that can scan for configuration anomalies within their platform and evaluate security-sensitive bits within their firmware.
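The "security-sensitive bits" Jauregui mentions can be checked mechanically. A hedged sketch of such a configuration scan follows; the register names and expected values here are hypothetical placeholders, since real validation tools check platform-specific registers:

```python
# Hedged sketch: scanning firmware configuration for security-sensitive bits.
# These names and baseline values are illustrative, not a real platform spec.
EXPECTED = {
    "flash_write_protect": 1,  # firmware flash region should be locked
    "bios_lock_enable": 1,     # firmware writes should require privileged mode
    "debug_interface": 0,      # hardware debug should be disabled in production
}

def scan_firmware_config(observed: dict) -> list[str]:
    """Return a list of configuration anomalies against the expected baseline."""
    anomalies = []
    for bit, expected in EXPECTED.items():
        actual = observed.get(bit)
        if actual != expected:
            anomalies.append(f"{bit}: expected {expected}, got {actual}")
    return anomalies

# Example: a platform with an unlocked flash region
findings = scan_firmware_config(
    {"flash_write_protect": 0, "bios_lock_enable": 1, "debug_interface": 0}
)
print(findings)  # ['flash_write_protect: expected 1, got 0']
```

Running such a scan on every build or boot is what turns "consistently updated firmware" from a policy statement into something automated tooling can enforce.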
“Additionally, Confidential Computing has emerged as a key strategy for helping to secure data in use,” she noted. “It uses hardware memory protections to better isolate sensitive data payloads. This represents a fundamental shift in how computation is done at the hardware level and will change how vendors can structure their application programs.”
Finally, the COVID-19 pandemic has somewhat disrupted the hardware supply chain and has brought to the fore another challenge.
“Because a computing system is typically composed of multiple components from different manufacturers, each with its own level of scrutiny in relation to potential supply chain attacks, it’s challenging to verify the integrity across all stages of its lifecycle,” Jauregui explained.
“This is why it is critical for companies to work together on a validation and attestation solution for hardware and firmware that can be conducted prior to integration into a larger system. If the industry as a whole comes together, we can create more measures to help protect a product through its entire lifecycle.”
Achieving security in low-end systems on chips
The proliferation of Internet of Things devices and embedded systems, and our reliance on them, makes the security of these systems extremely important.
As they commonly rely on systems on chips (SoCs) – integrated circuits that consolidate the components of a computer or other electronic system on a single microchip – securing these devices is a different proposition than securing “classic” computer systems, especially if they rely on low-end SoCs.
Jauregui says that there is no single blanket solution approach to implement security of embedded systems, and that while some of the general hardware security recommendations apply, many do not.
“I highly recommend readers check out the book Demystifying Internet of Things Security, written by Intel scientists and Principal Engineers. It’s an in-depth look at the threat model, secure boot, chain of trust, and the software stack leading up to defense-in-depth for embedded systems. It also examines the different security building blocks available in Intel Architecture (IA) based IoT platforms and breaks down some of the misconceptions of the Internet of Things,” she added.
“This book explores the challenges to secure these devices and provides suggestions to make them more immune to different threats originating from within and outside the network.”
For those security professionals who are interested in specializing in hardware security, she advises being curious about how things work and doing research, following folks doing interesting things on Twitter and asking them things, and watching hardware security conference talks and trying to reproduce the issues.
“Learn by doing. And if you want someone to lead you through it, go take a class! I recommend hardware security classes by Joe FitzPatrick and Joe Grand, as they are brilliant hardware researchers and excellent teachers,” she concluded.
Your brand is a valuable asset, but it’s also a great attack vector. Threat actors exploit the public’s trust of your brand when they phish under your name or when they counterfeit your products. The problem gets harder because you engage with the world across so many digital platforms – the web, social media, mobile apps. These engagements are obviously crucial to your business.
Something else should be obvious as well: guarding your digital trust – public confidence in your digital security – is make-or-break for your business, not just part of your compliance checklist.
COVID-19 has put a renewed spotlight on the importance of defending against cyberattacks and data breaches as more users are accessing data from remote or non-traditional locations. Crisis fuels cybercrime and we have seen that hacking has increased substantially as digital transformation initiatives have accelerated and many employees have been working from home without adequate firewalls and back-up protection.
The impact of cybersecurity breaches is no longer constrained to the IT department. The frequency and sophistication of ransomware, phishing schemes, and data breaches have the potential to destroy both brand health and financial viability. Organizations across industry verticals have seen their systems breached as cyber thieves have tried to take advantage of a crisis.
Good governance will be essential for handling the management of cyber issues. Strong cybersecurity will also be important to show customers that steps are being taken to avoid hackers and keep their data safe.
The COVID crisis has not changed the cybersecurity fundamentals. What will the new normal be like? While the COVID pandemic has turned business and society upside down, well-established cybersecurity practices – some known for decades – remain the best way to protect yourself.
1. Data must be governed
Data governance is the capability within an organization to provide and protect high-quality data throughout that data’s lifecycle. This includes data integrity, security, availability, and consistency. Data governance involves the people, processes, and technology that enable appropriate handling of data across the organization. Data governance program policies include:
- Delineating accountability for those responsible for data and data assets
- Assigning responsibility to appropriate levels in the organization for managing and protecting the data
- Determining who can take what actions, with what data, under what circumstances, using what methods
- Identifying safeguards to protect data
- Providing integrity controls to ensure the quality and accuracy of data
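The "who can take what actions, with what data" policy point above can be made concrete as an access check. This is a minimal sketch; the roles, actions, and data classes are illustrative, not a real governance framework:

```python
# Minimal sketch of a data governance access policy: which role may take
# which action on which class of data. All names here are illustrative.
POLICY = {
    ("analyst", "read"): {"public", "internal"},
    ("steward", "read"): {"public", "internal", "confidential"},
    ("steward", "write"): {"internal", "confidential"},
}

def is_allowed(role: str, action: str, data_class: str) -> bool:
    """Check an access request against the governance policy (deny by default)."""
    return data_class in POLICY.get((role, action), set())

print(is_allowed("analyst", "read", "internal"))       # True
print(is_allowed("analyst", "write", "internal"))      # False
print(is_allowed("steward", "write", "confidential"))  # True
```

The deny-by-default lookup is the important design choice: any role/action pair the policy doesn't explicitly list is refused.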
2. Patch management and vulnerability management: Two sides of a coin
Address threats with vulnerability management. Bad actors look to take advantage of discovered vulnerabilities in an attempt to infect a workstation or server. Managing threats is a reactive process where the threat must be actively present, whereas vulnerability management is proactive, seeking to close the security gaps that exist before they are taken advantage of.
It’s about more than just patching vulnerabilities. Formal vulnerability management doesn’t simply involve patching and reconfiguring insecure settings. It is a disciplined practice that requires an organizational mindset within IT that new vulnerabilities are found daily, requiring continual discovery and remediation.
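That continual discovery-and-remediation loop implies ranking what to fix first. A hedged sketch of one simple prioritization scheme follows (the CVE identifiers, scores, and ages are made-up sample data, and real programs weigh many more factors, such as exploitability and asset criticality):

```python
# Hedged sketch: rank open vulnerability findings so the riskiest are
# remediated first. Sample data is entirely made up.
from dataclasses import dataclass

@dataclass
class Finding:
    cve: str
    cvss: float      # severity score, 0.0-10.0
    days_open: int   # how long the gap has gone unremediated

def remediation_order(findings: list[Finding]) -> list[str]:
    """Highest severity first; break ties by how long the gap has existed."""
    ranked = sorted(findings, key=lambda f: (-f.cvss, -f.days_open))
    return [f.cve for f in ranked]

backlog = [
    Finding("CVE-2020-0001", 7.5, 10),
    Finding("CVE-2020-0002", 9.8, 3),
    Finding("CVE-2020-0003", 7.5, 40),
]
print(remediation_order(backlog))
# ['CVE-2020-0002', 'CVE-2020-0003', 'CVE-2020-0001']
```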
3. Not “if” but “when”: Assume you’re already hacked
If you build your operations and defenses with this premise in mind, your chances of detecting these types of attacks and preventing breaches are much greater than those of most organizations today.
The importance of incident response steps
A data breach should be viewed as a “when” not “if” occurrence, so be prepared for it. Under the pressure of a critical-level incident is no time to be figuring out your game plan. Your future self will thank you for the time and effort you invest on the front end.
Incident response is stressful when a critical asset is involved and you realize there’s an actual threat. Defined incident response steps help in these stressful, high-pressure situations by guiding you more quickly to successful containment and recovery. Response time is critical to minimizing damage, and with every second counting, having a plan already in place is the key to success.
4. Your size does not mean security maturity
It does not matter how big you are or the resources your team can access. As defenders, we always think, “If I only had enough money or people, I could solve this problem.” We need to change our thinking. It’s not how much you spend but rather, is that spend an effective use? Does it allow your team to disrupt attacks or just wait to be alerted (maybe)? No matter where an organization is on its journey toward security maturity, a risk assessment can prove invaluable in deciding where and when it needs most improvement.
For more mature organizations, the risk assessment process will focus less on discovering major controls gaps and more on finding subtler opportunities for continuously improving the program. An assessment of a less mature program is likely to find misalignments with business goals, inefficiencies in processes or architecture, and places where protections could be taken to another level of effectiveness.
5. Do more with less
Limited budgets, limited staff, limited time. Any security professional will have dealt with all of these repeatedly while trying to launch new initiatives or when completing day-to-day tasks. They are possibly the most severe and dangerous adversaries that many cybersecurity professionals will face. They affect every organization regardless of industry, size, or location and pose an existential threat to even the most prepared company. There is no easy way to contain them either, since no company has unlimited funding or time, and the lack of cybersecurity professionals makes filling roles incredibly tricky.
How can organizations cope with these natural limitations? The answer is resource prioritization, along with a healthy dose of operational improvements. By identifying areas where processes can be streamlined and understanding what the most significant risks are, organizations can begin to help protect their systems while staying within their constraints.
6. Rome wasn’t built in a day
An edict out of the IT department won’t get the job done. Building a security culture takes time and effort. What’s more, cybersecurity awareness training ought to be a regular occurrence — once a quarter at a minimum — where it’s an ongoing conversation with employees. One-and-done won’t suffice.
People have short memories, so repetition is altogether appropriate when it comes to a topic that’s so strategic to the organization. This also needs to be part of a broader top-down effort starting with senior management. Awareness training should be incorporated across all organizations, not just limited to governance, threat detection, and incident response plans. The campaign should involve more than serving up a dry set of rules, separate from the broader business reality.
Russia has taken the unusual step of posting a proposal for a new information security collaboration with the United States of America, including a no-hack pact applied to electoral affairs.
The document, titled “Statement by President of Russia Vladimir Putin on a comprehensive program of measures for restoring the Russia – US cooperation in the filed [sic] of international information security”, opens by saying “one of today’s major strategic challenges is the risk of a large-scale confrontation in the digital field” before adding: “A special responsibility for its prevention lies on the key players in the field of ensuring international information security (IIS).”
Russia therefore wants to reach agreement with the USA on “a comprehensive program of practical measures to reboot our relations in the field of security in the use of information and communication technologies (ICTs)”.
Putin suggested four actions could set the ball rolling:
- Resuming “regular full-scale bilateral interagency high-level dialogue on the key issues of ensuring IIS”.
- Establishing and maintaining “continuous and effective functioning of the communication channels between competent agencies of our States through Nuclear Risk Reduction Centers, Computer Emergency Readiness Teams and high-level officials in charge of the issues of IIS within the bodies involved in ensuring national security, including that of information”.
- Jointly developing “a bilateral intergovernmental agreement on preventing incidents in the information space similarly to the Soviet-American Agreement on the Prevention of Incidents On and Over the High Seas in force since 25 May 1972”. That agreement aimed to reduce the chance of a maritime incident between the then-USSR and the USA, and included de-escalation measures to stop an incident going nuclear.
- Exchanging “guarantees of non-intervention into internal affairs of each other, including into electoral processes, inter alia, by means of the ICTs and high-tech methods”.
Russia stands accused of interfering in the 2016 US presidential election with widespread use of fake social media accounts. The USA’s Federal Bureau of Investigation last week warned: “Foreign actors and cybercriminals could create new websites, change existing websites, and create or share corresponding social media content to spread false information in an attempt to discredit the electoral process and undermine confidence in US democratic institutions.” On 17 September FBI director Christopher Wray testified before the House Homeland Security Committee and named Russia as a nation already interfering in this year’s elections.
It is unclear if Russia’s document elicited a public response from the USA.
The two nations sought a cyber-détente in 2017, when Putin and Trump discussed a Cyber Security unit with unspecified functions and purposes.
Putin & I discussed forming an impenetrable Cyber Security unit so that election hacking, & many other negative things, will be guarded..
— Donald J. Trump (@realDonaldTrump) July 9, 2017
The effort was quickly explained away as a policy thought bubble that was floated without any accompanying detail. The idea deflated soon afterwards, leaving the two nations in their current state of uneasy enmity… ®
The fact that President Putin and I discussed a Cyber Security unit doesn’t mean I think it can happen. It can’t-but a ceasefire can,& did!
— Donald J. Trump (@realDonaldTrump) July 10, 2017
Determining the true impact of a cyber attack has always been, and will likely remain, one of the most challenging aspects of this technological age.
In an environment where very limited transparency on the root cause and the true impact is afforded, we are left with isolated examples to point to the direct cost of a security incident. For example, the 2010 attack on the Natanz nuclear facilities was (and in certain cases still is) used as the reference case study for why cybersecurity is imperative within an ICS environment, quite possibly now substituted with BlackEnergy.
For ransomware, it was the impact WannaCry had on healthcare, which will likely be replaced by the awful story in which a patient sadly lost their life following a ransomware attack.
What these cases provide is a degree of insight into impact, albeit limited in certain scenarios. Sadly, this approach all but excludes the multitude of earlier successful attacks whose impact was either unavailable or did not make the headlines.
It can of course be argued that such case studies are a useful vehicle to influence change, but there is equally the risk that they are such outliers that decision makers do not recognise their own vulnerabilities within the broader problem statement.
If we truly want to influence change, then a wider body of work establishing the broader economic and societal impact of the multitude of incidents is required. Whilst this is likely to be hugely subjective, it is imperative to understanding the true impact of cybersecurity. I recall a conversation a friend of mine had with someone who claimed they “are not concerned with malware because all it does is slow down their computer”. This, of course, is the wider challenge: to articulate the impact in a manner which will resonate.
Ask anybody the impact of car theft and this will be understood, ask the same question about any number of digital incidents and the reply will likely be less clear.
It can be argued that studies measuring the macro cost of such incidents do exist, but a problem statement of billions lost is so enormous that none of us can relate to it. A small business owner hearing how another small business had its records locked by ransomware, and the impact on that business, is likely to be more influential than an economic model explaining the financial cost of cybercrime (which is still imperative to policy makers, for example).
If such case studies are so imperative, and there exists a stigma around being open about breaches, what can be done? This of course is the largest challenge, with potential litigation governing every communication. To be entirely honest, as I sit here and try to conclude with concrete proposals, I am somewhat at a loss as to how to change the status quo.
The question is more an open one, what can be done? Can we leave fault at the door when we comment on security incidents? Perhaps encourage those that are victims to be more open? Of course this is only a start, and an area that deserves a wider discussion.
As the economic fallout of the COVID-19 crisis continues to unfold, research from Next Caller reveals the pervasive impact that COVID-related fraud has had on Americans, as well as emerging trends that threaten the security of contact centers, as we head towards what may be another wave of call activity.
The company’s latest report found that 55% of Americans believe they’ve been a victim of COVID-related fraud, up more than 20% from when the company conducted a similar study in April.
Perhaps even more worrisome is the fact that 59% of Americans claim they haven’t taken any additional precautions to protect themselves from these attacks.
“Even with massive amounts of PII circulating the dark web and so many new opportunities for criminals to exploit because of the pandemic, it’s still alarming that over half of the country thinks they’ve been targeted by COVID-related fraud,” said Ian Roncoroni, CEO, Next Caller.
“Compounding the problem is COVID’s unique ability to distract and disengage people from carefully monitoring their accounts. Criminals who are already well-equipped to bypass security can now operate longer without detection, worsening the impact exponentially.”
Data has shown the clear correlation between the economic fallout of the crisis – specifically stimulus related events – and the meteoric spikes in overall call volumes and the number of high-risk calls taking place inside contact centers across today’s biggest brands.
Fraudsters eager to replicate their initial success
A pending second stimulus package, combined with a clear urgency from Americans around receiving it, indicates that another wave of activity from customers and criminals is on the horizon.
In regards to the latest findings, Roncoroni said, “We have to prepare for a more sophisticated criminal strategy this time around. Rising reports of fraud activity signal not only that fraudsters are eager to replicate their initial success, but that some of those early schemes may just be getting started.
“The phony mailing address unceremoniously added to a bank account in April is likely just the trojan horse for a scheme ready to be set in motion under the cover of the next stimulus package.”
- 55% of Americans believe they’ve been targeted by COVID-related fraud
- Despite that, 59% of Americans claim they have not taken any additional precautions to protect themselves from attacks
- Almost 1-in-3 Americans are more worried about becoming a victim of fraud than they are about contracting the virus
- 56% believe brands are equally responsible for providing flexible and accommodating customer service and protecting personal information
- When asked about their view of the next stimulus checks, 41% of Americans said “I really need another check”
- 53% of Americans say that they have already sought out information related to the next round of checks
After several months of working from home, with no clear end in sight, financial risk and regulatory compliance professionals are struggling when it comes to collaborating with their teams – particularly as they manage increasingly complex global risk and regulatory reporting requirements.
According to a survey of major financial institutions conducted by AxiomSL, 41% of respondents said collaborating with teams remains a challenge while working remotely.
“Indeed, businesses might never return to the ‘old normal’, and that has made building data- and technology-driven resilience much more pressing than before the crisis. Our clients have been experiencing heightened regulatory pressures,” said AxiomSL’s Alex Tsigutkin.
“Throughout the crisis, we enabled them to respond rapidly to changes in reporting criteria, the onset of daily liquidity reporting, and the Federal Reserve’s emerging risk data collection (ERDC) initiative – that required FR Y–14 data on a weekly/monthly basis instead of quarterly.”
These data-intensive, high-frequency regulatory reporting requirements will continue in the ‘new normal.’ “To future-proof, organizations should continue to establish sustainable data architectures and analytics that enable connection and transparency between critical datasets,” Tsigutkin commented.
“And, as a priority, they should transition to our secure RegCloud to handle regulatory intensity efficiently, bolster business continuity, and strengthen their ability to collaborate remotely,” he concluded.
Key research findings
Remote collaboration is a top operational challenge for financial risk and regulatory pros: For all the talk of work-from-anywhere policies becoming the future of financial services, 41% of the risk and compliance professionals surveyed said collaborating with colleagues while working remotely has been their biggest challenge during the COVID-19 crisis.
This was the most frequently cited challenge, followed by accessing data from dispersed systems (18%), reliance on offshore resources (15%), and reliance on locally installed technology (15%).
Liquidity reporting expected to get harder: New capital and liquidity stress testing requirements are expected to present a much heavier burden on financial firms, with 18% of respondents citing increased capital and liquidity risk reporting as a major challenge they will face over the next two years.
Cloud adoption gets its catalyst: After years of resisting cloud adoption, many North American financial institutions are finally gearing up to make the move. When it comes to regulatory technology spending over the next two years, enhanced data analytics is the top area of focus among 29% of survey respondents. But cloud deployment rose to second place (23%) followed by data lakes (22%) and artificial intelligence and machine learning (20%).
Reduction of manual processes is an operational focus for the next two years: The top risk and regulatory compliance challenge firms see on the road ahead is continuing to eliminate manual processes (29%), followed by improving the transparency of data and processes (21%), and fully transitioning to a secure cloud (13%).
RegTech budgets largely intact heading into 2021: A total of 83% described their near-term projects as virtually unimpacted or mostly going forward. Similarly, 81% said their budgets for 2021 remain intact (70%) or will increase (11%).
Senior risk and compliance professionals within financial services companies lack confidence in the security data they are providing to regulators, according to Panaseer.
Results from a global external survey of more than 200 GRC leaders reveal concerns over data accuracy, request overload, resource-heavy processes, and a lack of end-to-end automation.
The results indicate a wider issue with cyber risk management. If GRC leaders don’t have confidence in the accuracy and timeliness of security data provided to regulators, then the same holds true for the confidence in their own ability to understand and combat cyber risks.
Only 41% of risk leaders feel ‘very confident’ that they can fulfill the security-related requests of a regulator in a timely manner, and just 27.5% are ‘very satisfied’ that their organization’s security reports align to regulatory compliance needs.
GRC leaders cited their top challenges in fulfilling regulator requests as:
- Getting access to accurate data (35%)
- The number of report requests (29%)
- The length of time it takes to get information from the security team (26%)
The limitations of traditional GRC tools
The issue has been perpetuated by the limitations of traditional GRC tools, which rely on qualitative questionnaires to provide evidence of compliance. This approach does not reflect the current challenges posed by cyber risk.
92% of senior risk and compliance professionals believe it would be valuable to have quantitative security controls assurance reporting (vs qualitative) and 93.5% believe it’s important to automate security risk and compliance reporting. However, only 11% state that their risk and compliance reporting is currently automated end to end.
96% said it is important to prioritize security risk remediation based on its impact on the business, but most can’t isolate risk to critical business processes composed of people, applications, and devices. Only 33.5% of respondents are ‘very confident’ in their ability to understand all their asset inventories.
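The difference between questionnaire-based and quantitative controls assurance can be shown with a toy sketch (all hostnames and inventories below are hypothetical, not Panaseer data): rather than recording a yes/no questionnaire answer about whether an endpoint control is deployed, coverage is computed directly from the asset and tool inventories, which also surfaces the specific gaps.

```python
# Hypothetical example of a quantitative controls-assurance metric:
# compare the full asset inventory against the hosts where a control
# (here, endpoint antivirus) is actually deployed.

asset_inventory = {"host-01", "host-02", "host-03", "host-04"}  # assumed data
av_deployments = {"host-01", "host-02", "host-04"}              # assumed data

covered = asset_inventory & av_deployments       # assets with the control
gaps = asset_inventory - av_deployments          # assets missing the control
coverage_pct = 100 * len(covered) / len(asset_inventory)

print(f"AV coverage: {coverage_pct:.1f}% ({len(gaps)} uncovered assets)")
# → AV coverage: 75.0% (1 uncovered assets)
```

A continuous controls monitoring approach would, in effect, recompute metrics like this automatically across many controls and data sources instead of sampling data by hand.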
Charaka Goonatilake, CTO, Panaseer: “Faced with increasing requests from regulators, GRC leaders have resorted to throwing a lot of people at time-sensitive requests. These manual processes, combined with a lack of GRC tool scalability, necessitate data sampling, which means they cannot have complete visibility or full confidence in the data they are providing.
“The challenge is being exacerbated by new risks introduced by IoT sensors and endpoints, which rarely consider security a core requirement and therefore introduce greater risk and increase the importance of controls and mitigations to address them.”
Andreas Wuchner, Panaseer Advisory Board member: “Facing the new reality of cyber threats and regulatory pressures requires many organizations to fundamentally rethink traditional tools and defences.
“GRC leaders can enhance their confidence to accurately and quickly meet stakeholder needs by implementing Continuous Controls Monitoring, an emerging category of security and risk, which has just been recognised in the 2020 Gartner Risk Management Hype Cycle.”
Made-in-China social video app TikTok has convinced a US judge it should remain in American app stores for the foreseeable future – dodging a ban that would have seen it expelled from Google Play and Apple’s app store from midnight on Sunday US time.
A Sunday order [PDF] by justice Carl J Nichols of the United States District Court for the District of Columbia granted an injunction sought by TikTok to keep its software available for new downloads or updates.
Downloads and updates to existing installations are among the “prohibited transactions” that the Trump administration says no US business will be allowed to conduct with ByteDance’s TikTok and Tencent’s WeChat messaging service. Other prohibitions would prevent US carriers from carrying traffic to and from the apps.
TikTok last week sought an injunction against the ban on grounds that it violates constitutional rights to free speech and to petition the US government. WeChat already secured a stay of execution on similar grounds.
Justice Nichols’ order doesn’t explain why he decided to grant the injunction and his reasons for doing so are in a separate document that is currently sealed. His order therefore calls on the administration and TikTok to meet on Monday, US time, to read his reasoning and decide if it can be unsealed and released to the public.
The parties were also ordered to meet by Wednesday, 30 September, and “file a Joint Status Report proposing a schedule for further proceedings” and “address any other issues that they believe will be helpful to the Court”.
The order isn’t a huge win for TikTok because it could still lose in another court, and it still faces likely expulsion from the USA if the administration doesn’t sign off on its deal to be acquired by Oracle, Walmart and others.
US president Donald Trump has offered not-entirely-consistent views on whether the deal should be allowed to proceed, and is now rather busy trying to secure an unusually speedy confirmation of a Supreme Court Justice, preparing for the first of three presidential debates and fending off a bombshell report of systematic tax evasion – all while managing the most severe public health crisis in a century. ®
Emerging technologies have created amazing new organizational capabilities. But they also bring new complexities, interconnections and vulnerability points. The need for strong cybersecurity is clear. Your defenses need to be stronger.
The Role of (ISC)²
(ISC)² is the world’s largest nonprofit membership association of certified cybersecurity professionals. More than 150,000 members strong, we help train, certify and educate the front lines – the professionals organizations count on to protect their critical assets and mitigate cyber risks.
CISSP – The World’s Premier Cybersecurity Certification
You may know (ISC)² for our CISSP credential – five letters that inspire confidence for businesses around the globe. Like all (ISC)² certifications, the CISSP is accredited and vendor-neutral. It stands out as the premier credential for information security leaders, identifying those who possess the advanced skills required to design, implement and manage a best-in-class cybersecurity program.
Our latest white papers examine the expanding threat landscape and how cybersecurity can drive business growth with the right experts in place. Download the resource that speaks to you as a professional or team leader ready to secure the future.