We reveal what’s inside Microsoft’s Azure Govt Secret regions… wait, is that a black helico–

Redmond hopes to lure Uncle Sam’s spy agencies, military away from Amazon

Microsoft has set up two new Azure cloud regions in the US – dubbed Azure Government Secret regions – to store data involving American national security. The services are in private preview, and are pending official government accreditation.

The Windows giant hopes the pair of regions will obtain a Dept of Defense Impact Level 6 badge, which would allow it to store and process information classified as secret. It is also looking for Intelligence Community Directive (ICD 503) accreditation.

Each region consists of at least two availability zones, and each availability zone lives on its own individual server farm.

The Azure Government Secret data centers are so secret Microsoft doesn’t disclose their location, only stating on Thursday that they are located more than 500 miles apart.

The new regions join Microsoft’s six existing Azure Government regions, which have now been certified at IL5, meaning they are suitable for controlled unclassified information.

“With our focus on innovating to meet the needs of our mission-critical customers, we continue to provide more PaaS features and services to the DoD at IL5 than any other cloud provider,” said Lily Kim, general manager for Azure.

The tech titan claims its cloud services are used by nearly 10 million people toiling for Uncle Sam, across more than 7,000 government agencies.


So what makes a data center fit for restricted and secret government info? Microsoft said it’s down to secure, native connections to classified networks, hardware encryption and storage of cryptographic keys, storage and compute isolation capability – with every virtual machine sitting on its own physical node – and personnel consisting of security-cleared US citizens, among other things.

The announcement this week comes at a time when the US government is working hard to consolidate and modernize its IT footprint, in line with the requirements of the Federal Technology Acquisition Reform Act (FITARA) and its extension, the Data Center Optimization Initiative (DCOI).

Since 2014, these initiatives have helped 24 federal agencies close 6,250 data centers – although the definition of a data center, in this case, is any room with at least one server in it.

More recently, the 2018 “Federal Cloud Computing Strategy – Cloud Smart”, the first cloud policy update in seven years, promoted public cloud as a more than adequate alternative to on-premises data centers run by government agencies.

“To keep up with the country’s current pace of innovation, President Trump has placed a significant emphasis on modernizing the federal government,” said Suzette Kent, federal CIO.

“By updating an outdated policy, Cloud Smart embraces best practices from both the federal government and the private sector, ensuring agencies have capability to leverage leading solutions to better serve agency mission, drive improved citizen services and increase cyber security.”

Another cloud vendor competing for government secrets is AWS: Microsoft beat its rival to the punch when it became the first hyperscale cloud vendor to obtain Impact Level 5 provisional authorization, but it was AWS that managed to open the first cloud data centers with provisional IL6 authorization in late 2017.

Both cloud behemoths are competing for JEDI, the controversial ten-year contract to provide cloud services to the Pentagon, worth up to $10bn and designed for just one vendor. Amazon has been widely seen as the front-runner in the race, while IBM and Oracle both complained that the contract was anti-competitive; Oracle even challenged it in a federal court. ®

Sponsored: Becoming a Pragmatic Security Leader

Huawei thanks US for ‘raising 5G awareness’ by banning firm’s wares

It’s like talking to my children, sighs marketing bigwig

Huawei top brass took to the stage in Shenzhen this week to insist that everything was fine and dandy in the company’s world, despite the shrieking from US lawmakers.

In front of an audience of 750, deputy chairman Ken Hu described 2018 as an “eventful” year for the company and thanked the assembled media for “paying so much attention” to the Chinese outfit.


If 2018 was eventful, it’ll be interesting to see how Hu describes 2019. The US has ramped up the rhetoric by threatening its pals with a withdrawal of security cooperation if they buy kit from the company, and the UK’s Huawei Cyber Security Evaluation Centre (HCSEC) gave the company a good kicking over some decidedly whiffy coding practices.

On the plus side for Hu, at least the likes of Germany have managed to resist the increasingly shrill demands from the US to ditch the company’s gear.

Still, Hu was happy with how the whole 5G thing was going and flung up slides showing that in the first year of the technology, there were 100,000+ 5G base stations and 40+ phones. Except there aren’t.

Handsets remain a scarce commodity. Huawei’s own 5G flagship, the foldable Mate X, was conspicuous by its absence (although as things have turned out, that might not be such a bad thing). Catherine Chen, president of the company’s Public Affairs and Communications Department, explained that Hu was really talking about contracts signed with telcos, and told The Register that Huawei had actually shipped 70,000 base stations, with deployment up to the telcos concerned.

Exaggeration and hyperbole about 5G? Say it ain’t so!

The orange elephant in the room did, however, have to be addressed, and Hu stated the company believes that “trust or distrust should be based on fact”, pointing to the company’s new transparency centre in Brussels and the multibillion-dollar transformation plan to deal with its dodgy code. He also congratulated the European Union on its privacy efforts while pointedly ignoring the US.

Let’s talk security turkey

John Suffolk, Huawei’s security boss and former UK government IT bigwig, was a little more blunt.

While he accepted that Huawei’s code contained a lot of “clutter” that had built up over the years, he felt the company was being singled out for special attention “because we’re a Chinese company, the spotlight will always be on us”.

He also promised that the transformation plan, details of which have been infuriatingly limited up to now, would be presented in the coming months.

Suffolk, of course, has the final veto on Huawei’s products from a security standpoint, with the company’s Independent Cyber Security Lab (ICSL) reporting to him with data from internal testing.

The company also allows its code to be inspected (as by HCSEC), although as for open-sourcing the whole lot and being done with it, Suffolk scoffed: “Do you honestly expect we’re going to open-source our crown jewels?”

Though Suffolk insisted the company will comply with every certification requirement and standard set by its customers, and that the company would be “as open as possible”, he said: “Some people you’re never going to convince.” He went on to say that countries such as the US were doing their citizens a “disservice” by barring Huawei from the marketplace.

As well as putting an America-sized dent in the giant’s revenues.

Certainly, the US government presents a challenge. Chief marketing officer Peter Zhou described explaining the technology to officials as similar to how he would explain it to his children, resorting to PlayStation metaphors to get the point across.

Zhou also pointed out that the barring of Huawei from the US marketplace would not make the country a leader in 5G. The spectrum allocation, for one thing, will make international roaming a tad tricky without phones becoming more complicated (and expensive).

However, the furore generated by the US, which Chen said after a decade of rumbling turned “radical” during the Trump presidency, has brought some benefits. She reckoned that the controversy had done much to publicise and raise awareness of 5G and increase the size of the market.

While Washington’s shenanigans were “not the biggest problem” faced by the company over its 30-year history, Chen said Huawei would still very much like in on the US market, even though many telco contracts have now been signed with other 5G providers.

The company is therefore putting its faith in the lumbering US judicial system, which Chen described, without a trace of irony, as “fair, just and transparent”.

In the meantime, with regard to continuing accusations of Chinese government interference, the company continues to trot out its corporate line, Jerry Maguire-style: “Show us the evidence.” ®


How to tame tech’s terrifying Fragmented Data Monster – the Cohesity way

As files pile up, customer numbers grow, storage systems spread, it’s only going to get worse

Sponsored One customer, one customer order, right? Wrong.

Sales will have a copy of the original, as will shipping, who have probably copied it to a desktop. That’s two or three, right there. Credit control will receive a copy via email, which might get stored on a network drive, with a copy then sent to accounts receivable. Then the backup and archival processes kick in.

Repeat this, every day, and even the smallest company is soon swimming in copies of the same document.

Welcome to the world of mass data fragmentation: data copied, sliced, diced and then stored in a multitude of locations – something that’s termed “secondary data.” That is, the data that lives outside the transaction systems in production databases. We’re thinking about data like backups, file and object storage, non-production test and development files, search and analytics. Archived data, too.

Why should you care? One reason is the hidden cost of storing those duplicates. If, and when, it comes to consolidating, you won’t know where to begin. And consolidate you should, for how can you be sure that everybody has the absolute latest and definitive view or understanding of the customer?

Stuart Gilks, systems engineering manager at data management company Cohesity, reckons mass data fragmentation is a function of data volume, infrastructure complexity, the number of physical data locations and cloud adoption. And guess what? They’re all growing at an astounding rate.

IT systems aren’t becoming any less complex, thanks to a combination of organic and inorganic IT growth. A succession of different project owners and IT teams layer different IT systems atop each other over the years, each of which contains secondary data and few of which talk to each other easily.

As far as data volume goes, enterprises are producing data more quickly than they can manage it. Last year, Cohesity surveyed 900 senior decision makers from companies across six countries, with 1,000 employees or more. Ninety eight per cent of them said their secondary storage had increased over the prior 18 months, and most said that they couldn’t manage it with existing IT tools.

Not content with generating more data than they can handle, companies are starting to fling it around more. They started by storing it with single cloud providers, but quickly gravitated to hybrid cloud and multi-cloud systems. Eighty five per cent are using multi-cloud environments, says IBM.

These multi-cloud environments spread data over different domains, each of which usually has its own data management tool. Oh, joy.

“You’ve got a proliferation of locations and you’ve got a proliferation of silos that have a specific purpose. You’re almost generating a problem in three dimensions,” Gilks says. “This makes it difficult to manage systems, risk and efficiency and deliver business value, especially at a time when budgets aren’t going up.”

Not all this data duplication is haphazard, mind. Organisational and legal drivers sometimes force companies to fragment their secondary data. Compliance or security concerns may make it necessary to draw hard lines between different departments or customers, serving each with different copies of the same data.

Multi-tenancy is a good example. You may provide a service to one company or department but be forced to isolate their data completely from company B in the same computing environment, even if some of it is identical. Other reasons may stem from office politics. Server huggers lurk in every department. We said there might be well-understood reasons for creating data silos, but we didn’t say they were all good.

Impact

This mass data fragmentation problem creates several impacts that can cripple a business.

The first is a lack of visibility. This secondary data is valuable because there is a wealth of corporate value locked up inside it. Analytics systems thrive on data ranging from call centre metadata to historical sales information. If data is the new oil, then carving it up into different silos chokes off your fuel supply.

The second is data insecurity. Much of that secondary data will be sensitive, including personally-identifiable customer information. Someone who stumbles on the right piece of secondary data in your organization could find and target members of your skunkworks product research team, or customer list, or email everyone in your company with a list of senior management salaries. None of these outcomes are good.

The third, linked impact is compliance. GDPR was a game-changing regulation that made it mandatory to know where your data is. When a customer demands that you reproduce all the data you hold on them, you’d better be able to find it. If it’s smeared across a dozen corporate systems and difficult to identify let alone retrieve, you’re in trouble.

Bloat and drag

Then, there’s the effect on business agility. Developing new systems invariably means supporting and drawing on secondary data sources. The Cohesity survey found that 63 per cent of respondents had between four and 15 copies of the same data, while 10 per cent had 11 copies or more. Those files aren’t just located on a company’s premises; they’re also stored off-site.

Developing new systems while ensuring integrity across all of those file copies might feel like pulling a garbage dump up a mountain. It would not only affect IT’s agility to support business requirements, but would bloat development budgets too.

The numbers bear this out. Forty eight per cent of those answering Cohesity’s survey spent at least 30 per cent (and up to 100 per cent) of their time managing secondary data and apps. The average IT department spent four months of the working year grappling with fragmented secondary data.

This leaves IT employees feeling overworked and underappreciated. Over half of all survey respondents said that staff were working 10 hours of overtime or more to deal with mass data fragmentation issues. Thirty eight per cent are worried about “massive turnover” on the IT team.

Mass data fragmentation also affects a company’s immediate ability to do business. Ninety one per cent fretted about the level of visibility that the IT team had into secondary data across all sources. That translates directly into customer blindness. If the IT team can’t pull together customer data from different silos, then how can they draw on it for operations like CRM or customer analytics?

Taming in action

So much for the data fragmentation problem. Now, how do you solve it?

The least drastic version involves point systems that manage secondary data for specific workloads. Dedicated backup or email archiving systems are one example. They do one thing really well, although you may well end up needing more than one of them to cope with different departmental silos. In any case, according to Gilks, they don’t handle all of the workloads you might want to apply to secondary data. Instead, you need different software for different things.

Another option is a middleware or integration platform that makes the data accessible at a lower level, for consumption by a variety of applications. These products allow architects to create mappings between different systems. They can program those mappings to extract, transform and filter data from one location before loading it into another.

Gilks still sees problems. “Even if it’s completely successful, I still have 11 copies of my data,” he says. “At best, middleware is an effective Band-Aid.”

Ideally, he says, you’d want to consolidate those 11 copies down to a smaller number, whittling away those that weren’t there purely for security and compliance reasons.

“I’d probably want two or three resilient copies at most,” he continues. “I might think about a copy for my primary data centre, a copy for my secondary data centre, and a copy for the cloud.”

Some companies have had success with hyperconvergence in their primary systems. This approach simplifies IT infrastructure by merging computing, storage, and networking components. It uses software-defined management to coordinate commodity compute, storage and network components in single nodes that scale out.

Squeezing data into a collapsed compute-storage-network fabric has its pros and cons. While the hyperconverged kit has no internal silos, it might become its own silo, presenting barriers to the non-hyperconverged infrastructure in the rest of the server room.

Hyperconverged infrastructure also often needs you to scale compute and storage together, and it is typically difficult for non-virtualised legacy applications to access the virtual storage on these boxes.

Perhaps most importantly in this context, you’re unlikely to store the bulk of your secondary data on these systems, especially the archived stuff.

Cohesity applied the hyperconvergence approach to secondary data management, using software-defined nodes that can run on hardware appliances, or on virtualised machines on customer premises or in the cloud. It slurps data and then deduplicates, compresses and encrypts it to produce a smaller, more efficient dataset.
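The dedupe-then-compress step can be illustrated with a minimal sketch. This is a generic content-hash approach, not Cohesity's actual implementation; the function names and fixed-size chunking are illustrative assumptions:

```python
import hashlib
import zlib

def store_chunks(data: bytes, store: dict, chunk_size: int = 4096) -> list:
    """Split data into fixed-size chunks, deduplicate by content hash,
    and keep each unique chunk compressed in the store."""
    refs = []
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in store:          # only previously unseen content costs space
            store[digest] = zlib.compress(chunk)
        refs.append(digest)              # the file becomes a list of chunk references
    return refs

def restore(refs: list, store: dict) -> bytes:
    """Reassemble the original bytes from chunk references."""
    return b"".join(zlib.decompress(store[d]) for d in refs)
```

Storing a second copy of the same file adds references but no new chunks, which is where the space savings come from. Production systems typically use variable-size, content-defined chunking so a small edit doesn't shift every subsequent chunk boundary.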

The company then offers scale-out storage for access via standard interfaces including NFS, SMB and S3, and provides a range of services via the platform ranging from anti-ransomware and backup/recovery through to an analytics workbench and App Marketplace.

Whichever approach you choose, fighting the multi-headed data beast now will save you budgetary woes later and free your IT department up to be more sprightly in future developments. If you can’t entirely slay the monster, then at least try to tame it a little.

Sponsored by Cohesity


Facebook is not going to Like this: Brit watchdog proposes crackdown on hoovering up kids’ info

In the UK, it seems, someone is trying to think of the children

Analysis The famous “Like” button may be on the way out if a new code for social media companies, published by the UK’s Information Commissioner’s Office (ICO), has its way.

Among the 16 rules in the consultation document [PDF] is a proposed ban on the use of so-called “nudge techniques” – where user interfaces and software are specifically designed to encourage frequent, daily use – as well as gather information that can then be sold on.

“Do not use nudge techniques to lead or encourage children to provide unnecessary personal data, weaken or turn off their privacy protections, or extend their use,” the code states, among a range of other measures that social media giants like Facebook, Twitter and Snapchat are going to hate.

The code is specifically all about the children, with the head of the ICO, Elizabeth Denham, saying in a statement: “We shouldn’t have to prevent our children from being able to use [the internet], but we must demand that they are protected when they do. This code does that.”

Many of the changes would require companies like Facebook to make adjustments to their software and back-end systems to work. And some would directly impact social media companies’ bottom line as they would cut off access to vast quantities of personal information, which the companies repackage and sell to advertisers.

The code is just the latest push by the UK government – also reflected across Europe – to bring social media companies in line with what have been long-established norms and make them more responsible for removing damaging and illegal content, as well as limit the amount of personal data they compile.

Big push

It comes a week after the UK government published a White Paper on “Online harms” that argued for new, restrictive laws on social media and amid a global sense among lawmakers that the era of self-regulation in the internet space is over. The code has also been published just before a new law that requires adult content websites to verify the age of UK consumers before providing them with access to their material comes into effect.

Baroness Beeban Kidron, the film director and children’s rights campaigner who was one of the key drivers behind the new code, said in a statement: “For too long we have failed to recognize children’s rights and needs online, with tragic outcomes.”

She went on: “I firmly believe in the power of technology to transform lives, be a force for good and rise to the challenge of promoting the rights and safety of our children. But in order to fulfill that role it must consider the best interests of children, not simply its own commercial interests.”

Some of the rules are general to the point of vagueness – such as the first requirement that a social media company make “the best interests of the child a primary consideration.”

But others are firm and threaten to have a significant impact on not just the design but also the business model used by such companies. The code makes it plain that unless the companies enact age-verification systems, the UK government expects them to extend all the changes to all users, regardless of age.

One key change is for default settings to be set to “high privacy” – something that Facebook famously gets around by constantly changing its own systems and forcing users to rediscover and reapply content controls. A high-privacy default would significantly limit the amount of personal information that can be automatically gathered through such a service.

Another is the key concept of “data minimization” – which is present in Europe’s GDPR data privacy legislation – where companies are expected to only gather the information they need to provide their service and no more.

And the code says that location tracking should be turned off by default and there should be “an obvious sign” if it is turned on. It also says that making user location visible to others “must default back to off at the end of each session.”
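In code terms, the “high privacy by default” and end-of-session requirements amount to something like the following sketch (the class and field names are hypothetical, for illustration only):

```python
from dataclasses import dataclass

@dataclass
class ChildAccountSettings:
    # "High privacy" defaults: everything off unless the user deliberately enables it
    personalised_ads: bool = False
    location_tracking: bool = False         # off by default; an obvious sign when on
    location_visible_to_others: bool = False

    def end_session(self) -> None:
        # Visible location "must default back to off at the end of each session"
        self.location_visible_to_others = False
```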

Clear and concise? What madness is this?

And in a clear poke in the eye to Facebook, the code insists that users are provided with “‘bite-sized’ explanations about how you use personal data at the point that use is activated” and that those explanations be “concise, prominent and in clear language suited to the age of the child.”

The code is quite clearly aimed at banning all the questionable practices that social media companies have introduced in order to gain access to as much personal data as possible, and uses the fact that different laws exist around the protection of children and their information to push the changes.


Somewhat predictably, those companies are not happy although they are currently treading a diplomatic line – in public at least. In its response [PDF] to the ICO’s initial call for feedback, Facebook made it plain that it was not happy with the direction they were going and basically claimed that it was already doing enough.

It even strongly implied that the regulator was patronizing kids by insisting on such controls when “we know that teenagers are some of the most safety and privacy conscious users of the internet.” It adds that “age is an imperfect measure of maturity” and the proposals risk “dumbing down” controls for children “who are often highly capable in using digital services.”

And in a word-perfect summary of Facebook and its culture, the organization notes that when it comes to its systems “the design journey is never over” and that it is “highly committed to improving people’s experience of its own services.”

The code is out for public review and comment until May 31. ®


US-Cert alert! Thanks to a massive bug, VPN now stands for “Vigorously Pwned Nodes”

Multiple providers leaving storage cookies up for grabs

US-CERT is raising alarms following the disclosure of a serious vulnerability affecting multiple VPN services.

A warning from the DHS cyber security team references the CMU CERT Coordination Center’s bulletin on the failure of some VPN providers to encrypt the cookie files they place onto the machines of customers.

Ideally, a VPN service would encrypt the session cookies that are created when a user logs in to access the secure traffic service, keeping them away from the prying eyes of malware or network attackers. According to the alert, however, those cookies were sometimes kept unencrypted, either in memory or in log files, allowing them to be freely copied and reused.


“If an attacker has persistent access to a VPN user’s endpoint or exfiltrates the cookie using other methods, they can replay the session and bypass other authentication methods,” the post explains. “An attacker would then have access to the same applications that the user does through their VPN session.”

To be clear, the vulnerable cookies are on the user’s end, not on the server itself. We’re not talking about a takeover of the VPN service, but rather an individual customer’s account. The malware would also need to know exactly where to look on the machine in order to get the cookies.
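Why an unencrypted cookie is enough is easy to demonstrate. In the sketch below (illustrative only, not any vendor's actual scheme), the server recognises a session purely by the cookie's contents, so a byte-for-byte copy lifted from a log file is indistinguishable from the real thing:

```python
import hashlib
import hmac
import secrets

SERVER_KEY = secrets.token_bytes(32)  # known only to the VPN server

def issue_session(user: str) -> str:
    """Server signs the session so it can recognise it later."""
    tag = hmac.new(SERVER_KEY, user.encode(), hashlib.sha256).hexdigest()
    return f"{user}:{tag}"

def validate(cookie: str) -> bool:
    """Server-side check: anything bearing a valid tag is accepted."""
    user, tag = cookie.split(":", 1)
    expected = hmac.new(SERVER_KEY, user.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, expected)

# If the client writes this cookie to disk unencrypted, anything that can
# read the file can replay it -- the server cannot tell the difference.
stolen = issue_session("alice")   # attacker copies this from a log file
assert validate(stolen)           # replay succeeds: same cookie, same access
```

Keeping the cookie encrypted at rest, as Check Point and pfSense say they do, is what denies a file-reading attacker this easy win.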

So far, vulnerable parties include Palo Alto Networks GlobalProtect Agent 4.1.0 for Windows and GlobalProtect Agent 4.1.10 and earlier for macOS, Pulse Secure Connect Secure prior to 8.1R14, 8.2, 8.3R6, and 9.0R2, and Cisco AnyConnect 4.7.x and prior. Palo Alto has already released a patch.

Check Point and pfSense, meanwhile, have confirmed they do encrypt the cookies in question.

Possibly dozens more vendors will be added to the list, however, as the practice is believed to be widespread. The advisory notes that over 200 apps have yet to confirm or deny whether their session cookies are left unencrypted.

“It is likely that this configuration is generic to additional VPN applications,” the notice explains. ®


Pregnancy and parenting club Bounty fined £400,000 for shady data sharing practices

ICO says case involving 34.4 million records ‘unprecedented’

Updated The Information Commissioner’s Office has fined commercial pregnancy and parenting club Bounty some £400,000 for illegally sharing personal details of more than 14 million people.

The organisation, which dishes out advice to expectant and inexperienced parents, has faced criticism over the tactics it uses to sign up new members and was the subject of a campaign to boot its reps from maternity wards.

Now Bounty’s data protection practices have fallen under the gaze of the ICO: a probe found it collated personal information to generate membership registration, via its website, mobile app, merchandise pack claim cards and from new mums at hospital bedsides. Nothing new there.

But the business had also worked as a data brokering service until April last year, distributing data to third parties to then pester unsuspecting folk with electronic direct marketing. By sharing this information and not being transparent about its uses while it was extracting the stuff, Bounty broke the Data Protection Act 1998.

Bounty shared roughly 34.4 million records from June 2017 to April 2018 with credit reference and marketing agencies. Acxiom, Equifax, Indicia and Sky were the four biggest of the 39 companies that Bounty told the ICO it sold stuff to.

This data included details not only of new mothers and mothers-to-be, but also very young children’s birth dates and gender.

“The number of personal records and people affected in this case is unprecedented in the history of the ICO’s investigations into the data brokering industry and organisations linked to this,” said the ICO’s director of investigations, Steve Eckersley.

He claimed Bounty was “not transparent” with the millions of people whose data it sold, saying the consent given by people was “clearly not informed”, and Bounty’s actions were “motivated by financial gain given that data sharing was an integral part of their business model at the time”.

“Such careless data sharing is likely to have caused distress to many people, since they did not know that their personal information was being shared multiple times with so many organisations, including information about their pregnancy status and their children,” Eckersley added.

Updated 12 April at 14.37BST.

Bounty managing director Jim Kelleher has sent us a statement:

“In the past we did not take a broad enough view of our responsibilities and as a result our data-sharing processes, specifically with regards to transparency, were not robust enough. This was not of the standard expected of us. However, the ICO has recognised that these are historical issues.”

He said the business overhauled internal processes a year ago “reducing the number of personal records we retain and for how long we keep them, ending relationships with the small number of data brokerage companies with whom we previously worked and implementing robust GDPR training for our staff.”

Of course, if the data sharing had been done since 25 May 2018, Bounty would be facing a far greater fine, up to 4 per cent of annual turnover or €20m, whichever is greater. ®


US: We’ll pull security co-operation if you lot buy from Huawei

America tries to keep the pressure on Chinese biz

A US official has repeated his country’s threats against its allies over Huawei – stating that the US’s goal is a process that leads “inevitably to the banning” of the Chinese company’s products.

“We have encouraged countries to adopt risk-based security frameworks,” said Robert Strayer, speaking on a call with the world’s press on Wednesday, expressing the hope that such frameworks would “lead inevitably” to bans on Huawei.

Strayer, who is the American foreign ministry’s deputy assistant secretary for Cyber and International Communications and Information Policy, told journalists that his country may withdraw some security co-operation with its allies if they install Huawei equipment on internet and phone networks.

“The most fundamental security standard, really, is that you cannot have this extrajudicial, non-rule of law-compliant process where a government can tell its companies to do something,” Strayer told the Bloomberg newswire. This appears to be a reference to China’s National Intelligence Law, which forces companies to co-operate with the nation’s spy agencies, which in substance is no different from Western laws mandating the same thing.

The US’s main fear appears to be that China will soon be in a position to exercise the same sort of global surveillance that the US does through its dominance of the worldwide tech sector, challenging American hegemony.

Bloomberg also reported that the French parliament is considering a bill that would, in effect, replicate Britain’s Huawei Cyber Security Evaluation Centre part-run by spies from eavesdropping agency GCHQ. HCSEC, also known as The Cell, inspects Huawei source code for evidence of state backdoors. The Chinese company has come under increasing fire from the British state for the pisspoor state of its software development processes.

America’s allies have varied in their responses to the country’s call for a ban. Australia, its closest Pacific ally, has enthusiastically taken up the cudgel. Germany, meanwhile, has pointedly chosen to do its own thing, taking the EU along with it on that path.

When is a phone not a phone? When it’s an Android security key

Google Cloud product deluge spans security, analytics and AI

People with suitably modern Android phones can now use their handsets as a hardware security key to safeguard both their Google Accounts and Google Cloud accounts.

The ads and compute-time rental biz announced the change at Google Cloud Next ’19 in San Francisco, in conjunction with some hand waving about a variety of other security tools tied to the Google Cloud Platform.

“We’re essentially allowing multifactor authentication using your Android device as a security key, so you don’t need a separate device,” said Jennifer Lin, director of security for Google Cloud, at a press briefing on Tuesday.

Android phones can now serve as the second factor in two-factor authentication, where the first factor is something you know – a password – and the second is something you have – a hardware security key or apps that generate codes.
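The code-generating apps mentioned above typically implement TOTP, the time-based one-time password scheme from RFC 6238. Purely as an illustration of what the "something you have" factor computes – this is not Google's implementation – a minimal sketch, checked against an RFC test vector:

```python
import hashlib
import hmac
import struct
import time


def totp(secret, timestep=30, digits=6, t=None):
    """Minimal RFC 6238 TOTP: HMAC-SHA1 over a big-endian time counter,
    then dynamic truncation as specified in RFC 4226."""
    counter = int((time.time() if t is None else t) // timestep)
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # low nibble of last byte picks the window
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)


# RFC 6238 Appendix B test vector: ASCII secret "12345678901234567890",
# T = 59 seconds, 8 digits
print(totp(b"12345678901234567890", digits=8, t=59))  # prints 94287082
```

A hardware security key differs in that the secret never leaves the device and the browser challenge is origin-bound, which is what defeats phishing.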

To turn their devices into key conveyors, Google account holders need an Android 7.0+ phone, with Bluetooth active, and a Bluetooth-enabled ChromeOS, macOS or Windows 10 computer running a Chrome browser. Google also recommends having a second hardware security key as a backup, in case one gets lost, stolen or unexpectedly smashed to bits in a fit of rage.

Google has taken to referring to this as two-step verification, which is one element of the company’s Advanced Protection program for those at risk of being targeted by hackers. The Advanced Protection program relies on two-step verification with a physical security key instead of a code generated by an authenticator app or delivered to a device via SMS or email. In addition, it limits access to data by apps and imposes additional account recovery challenges.

The Chocolate Factory also unveiled several other security-focused initiatives for the Google Cloud Platform. Access Transparency, now available for G Suite Enterprise, provides “near real-time logs” when Google Cloud Platform administrators interact with G Suite data, because companies want to know such things for compliance and auditing. There’s also Access Approval, introduced in December, which lets customers grant permission for Google workers to access GCP data.

GCP’s Data Loss Prevention console has entered beta status, offering a way to find and redact sensitive data. The Cloud Security Command Center, which debuted last year, has matured to general availability. It provides security and risk management capabilities for various GCP services.
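Google's DLP service uses trained info-type detectors rather than bare regexes, but the find-and-redact idea it offers can be sketched in a few lines. The patterns and labels below are hypothetical stand-ins, not the service's real detectors:

```python
import re

# Hypothetical patterns illustrating the find-and-redact idea behind a
# DLP-style scan; real services ship far more robust detectors
# (checksums, context scoring, ML models).
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "US_SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}


def redact(text):
    """Replace each match with a bracketed info-type label."""
    for name, pattern in PATTERNS.items():
        text = pattern.sub("[{}]".format(name), text)
    return text


print(redact("Contact jane@example.com, SSN 123-45-6789"))
# Contact [EMAIL], SSN [US_SSN]
```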

Google also launched early versions of several threat identification services: Event Threat Detection, a log scanner, entered beta; Security Health Analytics, a scanner for open storage buckets, ports, and stale keys, among other things, entered alpha.

Cloud Security Scanner, which looks for cross-site scripting, clear-text passwords, and vulnerable code libraries in GCP apps, hit general availability for App Engine and beta for Google Kubernetes Engine (GKE) and Compute Engine; and GCP Marketplace added security vendor integrations from the likes of Capsule8, Cavirin, Chef, McAfee, Redlock, Stackrox, Tenable.io, and Twistlock.

OK, OK, we get it – it’s reasonably secure

Looking beyond security, Alphabet’s main money maker announced Cloud SQL for Microsoft SQL Server, a fully managed version of Microsoft SQL Server on GCP. This is in addition to self-service SQL Server deployment on Google Compute Engine, via an existing Microsoft license or one resold through Google.

Meanwhile, Google Cloud now has a speciality shop, Google Cloud for Retail, not to mention new partnerships with Accenture and Deloitte to help enterprises integrate Googly tech.

The biz teased various storage developments including “a new class of storage for data that’s ice cold,” which is to say an archive class for Cloud Storage that can’t be called Glacier or Deep Glacier because AWS got to those names first. Coming later this year, it will be available as an alternative to tape storage, at $0.0012 per GB per month ($1.23 per TB per month).

GCP’s data analytics offerings received attention with a slew of data migration, business intelligence, prediction and governance refinements. Among the more interesting is Cloud Data Fusion (beta), a managed data integration service for fetching data from various sources, combining everything and handing the wadge off to BigQuery for analysis. Also, Google made Sheets more interesting with connected sheets, a way to make its online spreadsheet serve as a front-end for BigQuery data sets.

Unavoidably, there was much fuss made over AI-oriented services, which pretty much every tech company today talks about ad nauseam. Google launched the beta version of an integrated AI platform, unexpectedly called AI Platform, that aims to help companies set up, build, run and manage machine learning projects.

It’s intended to complement the company’s existing AI Hub, which is more of a repository for AI components. No mention was made of AI Shoppe, AI Shack or AI Sluice, but perhaps next year.

Oops! Almost a year in and ICO staff haven’t been handed a GDPR privacy notice yet

Data watchdog: All our staffers are ‘aware’ of policies…

The UK’s data protection regulator has failed to follow its own advice, admitting a privacy notice for its own staffers – one of its key recommendations for GDPR compliance – remains “under construction”.

As part of the General Data Protection Regulation, individuals have the “right to be informed”, which means they should be told what personal data organisations process and why.

Guidance issued by the Information Commissioner’s Office on this states: “Individuals have the right to be informed about the collection and use of their personal data. This is a key transparency requirement under the GDPR.”

A key part of this advice (PDF) to ensure organisations are compliant is that they should provide a privacy notice that sets out, among other things, the lawful basis and purposes for data processing.

However, the ICO appears not to have eaten its own dog food as it is still drafting a privacy notice for employees, almost a year after GDPR came into force.

In a 5 April response to a Freedom of Information request – published on WhatDoTheyKnow – asking for a copy of the privacy notice containing information about the use of personal data of staff, the body said:

“I can confirm we do not currently hold the information you have requested. The privacy notice for ICO employees is currently under construction.”

Jon Baines, a data protection advisor at Mishcon de Reya, said on his personal blog that he was “well-and-truly-gobsmacked” at the admission the ICO hasn’t “prepared, let alone given, its own staff a GDPR privacy notice”.

Baines noted the ICO’s own guidance states that getting the right to be informed wrong “can leave you open to fines and lead to reputational damage”.

However, the ICO’s PR arm today played down the admission, saying that staff had been “made aware” of its personal data processing policies.

A spokeswoman told The Reg the ICO had “developed” a policy, but that this had had to be “updated” due to an increase in staff.

“The ICO workforce has increased by 40 per cent in the last 12 months and this has led to multiple updates to our employee policies and procedures, which in turn need to be reflected in our employee Privacy Notice,” a spokeswoman said.

“All ICO employees have been made aware of the policies and procedures which cover our processing of personal information as an employer.”

She said a “finalised” version of the employee privacy notice would be published on its website “in the coming days”. ®

Chinese hackers poke the Bayer, but German giant says it withstood attack

Pharmaceutical brand says no data lost in Winnti outbreak

German pharmaceuticals giant Bayer says it has been hit by malware, possibly from China, but that none of its intellectual property has been accessed.

On Thursday the aspirin-flingers issued a statement confirming a report from Reuters that the Winnti malware, a spyware tool associated with Chinese hacking groups, had been detected on some of its machines.

The malware was spotted on Bayer PCs in early 2018, with the company silently monitoring its behavior for more than a year before finally pulling the plug on the operation last month and notifying authorities.

“Our Cyber Defense Center detected indications of Winnti infections at the beginning of 2018 and initiated comprehensive analyses,” a Bayer spokesbod said in a statement to The Register.

“There is no evidence of data outflow. Our experts at the Cyber Defense Center have identified, analyzed and cleaned up the affected systems, working in close collaboration with the German Cyber Security Organization (DCSO) and the State Criminal Police Office of North Rhine-Westphalia. Investigations of the Public Prosecutor’s Office in Cologne are ongoing.”

The Winnti malware, which allows hackers a backdoor into the infected machine, has long been used by China-based hacking groups looking to lift trade secrets and other vital corporate information from foreign companies.

Researchers have traced the rogue code as far back as 2009, when Winnti was spotted ripping off digital certificates and source code from games developers.

The attack comes as researchers have warned of increases in hacking activities from Chinese groups looking to grab intellectual property on behalf of the government and local companies.

That Bayer would be targeted by hackers for its IP is hardly surprising. The German corporation – valued at more than $16bn following its recent acquisition of agriculture kingpin Monsanto – is one of the world’s largest drugmakers, and its network hosts highly valuable information on its products. ®

Brit Police Federation cops to ransomware attack on HQ systems

Sort-of union for bobbies has triggered criminal investigation

The Police Federation of England and Wales (PFEW), a sort-of trade union for police workers, has been battling to contain a ransomware strike on the group’s computer systems, it confessed this afternoon.

In a statement posted on Twitter, PFEW said it first noticed the attack infecting its systems on Saturday 9 March, “with cyber experts rapidly reacting to isolate the malware to stop it spreading to branches”. It informed the ICO and the NCSC two days after the infection.

It added the attack “was not targeted specifically at PFEW and was more likely to have been part of a wider campaign”, saying that so far it reckons the malware had only affected the organisation’s Surrey HQ. It does not believe any data was extracted from its systems, reinforcing the notion that the incident could be down to run-of-the-mill ransomware.

“There is no evidence at this stage that any data was extracted from the organisation’s systems, although this cannot be discounted and PFEW are taking precautions to notify individuals who may potentially be affected,” said the association, which includes 120,000 constables, sergeants, inspectors and chief inspectors across 43 territorial forces.

The PFEW added in an FAQ: “A number of databases and systems were affected. Back up data has been deleted and data has been encrypted and became inaccessible. Email services were disabled and files were inaccessible.”

The federation tweeted: “As a precaution we are contacting individuals who are potentially affected, including our members, and will be providing them with further helpful information, including as to how they can make enquiries.”

Police workers reacted negatively to the news, with one posting on Twitter: “Why has it taken over 11 days to inform your members?”

The usual canned apologies were also included in the customary statement, as was the insistence that PFEW took data security “very seriously” and had acted as soon as it was alerted to the malware.

BAE Systems’ Cyber Incident Response Division is the federation’s infosec firm. Perhaps unsurprisingly, police triggered a criminal investigation, having also involved GCHQ offshoot the National Cyber Security Centre and the National Crime Agency.

The federation carries out most of the functions of a trade union, inasmuch as it gives out advice to its members and engages with police managers on their behalf. However, there is one key difference: police constables are banned by law from going on strike. ®

Public disgrace: 82% of EU govt websites stalked by Google adtech cookies – report

Plus: UK health service sites contain commercial trackers

All but three of the European Union member states’ government websites are littered with undisclosed adtech trackers from Google and other firms, with many piggy-backing on third-party scripts, according to an analysis of almost 200,000 webpages.

The report (PDF), published today by Cookiebot in collaboration with civil rights association European Digital Rights (EDRi), scanned 184,683 EU government webpages on 11 and 12 March to assess the cookies on each.

It found that there were 112 companies slurping up information on EU citizens’ browsing habits on the webpages of the governments supposedly fighting the good fight against excess stalking of netizens.

Adtech trackers were found on 25 of the 28 member states’ sites, with only Spain, Germany and the Netherlands clean of commercial cookies. There were 52 companies identified on France’s government sites, 27 on Latvia’s and 19 on Belgium’s. Twenty cookies were identified on GOV.UK, of which 12 were marketing cookies – all belonging to a single company, Google.

Indeed, the search giant is described as the “kingpin of tracking” within the report, present on 82 per cent of all the sites and accounting for three of the top five trackers: YouTube, DoubleClick and Google.

The report authors said this was of “special concern” because Google can cross-reference trackers with its first-party account details via its widely used consumer services such as Mail, Search and Android apps.

Separately, the work assessed public health service sites, again finding that cookies were widespread, with 52 per cent of those tested having commercial trackers.

And again, Google was right up there, making up two of the top five, with the others being Adobe’s everesttech.net, AppNexus’ adnxs.com and MediaMath’s mathtag.com.

For this assessment, the researchers chose six EU countries and carried out 15 health-related search queries – such as “How do I know if I have HIV?”, “Signs of being an alcoholic” and “I want to terminate my pregnancy” – from IP addresses in each country to identify the relevant landing pages on each nation’s health service.

In the UK, some 60 per cent of these landing pages had such ad trackers, less only than Irish sites, where trackers appeared on 73 per cent of landing pages. A single German website about maternity leave was monitored by 63 companies, while a French page about abortion was tracked by 21 firms.

The group said this could be used to “infer sensitive facts about [users’] health condition and life situation” and be resold to target ads. “These citizens have no clear way to prevent this leakage, understand where their data is sent, or to correct or delete the data,” it said.

The extent of tracking on these sites is even more alarming, the report argued, because they don’t rely on ad revenue. In some cases, governments will want to use companies’ services, but in others the firms gained access to these non-commercial sites through “free” third-party JavaScript tech services, like share buttons or plugins.

“These scripts can act as Trojan horses, opening backdoors to the website code through which ad tech companies can silently insert their trackers,” the report said.

It urged website owners to be more careful when including third-party components on their sites; to make sure they had a detailed overview of the current trackers; and to remove any unwanted ones from the source code.
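Getting that overview of current trackers can start with something as small as enumerating the external script hosts a page pulls in. A toy sketch of such a first-pass audit – the domains here are made up, and this is not the methodology the report used:

```python
from html.parser import HTMLParser
from urllib.parse import urlparse


class ScriptAudit(HTMLParser):
    """Collect the hosts of external <script> tags so they can be checked
    against the domains a site actually intends to load code from."""

    def __init__(self, first_party):
        super().__init__()
        self.first_party = first_party
        self.third_party = set()

    def handle_starttag(self, tag, attrs):
        if tag == "script":
            src = dict(attrs).get("src", "")
            host = urlparse(src).hostname  # None for relative URLs
            if host and not host.endswith(self.first_party):
                self.third_party.add(host)


page = ('<script src="/app.js"></script>'
        '<script src="https://tracker.example-ads.net/t.js"></script>')
audit = ScriptAudit("gov.example")
audit.feed(page)
print(sorted(audit.third_party))  # ['tracker.example-ads.net']
```

A real audit would also need to catch scripts injected dynamically at runtime, which is exactly how the "Trojan horse" third-party components described above smuggle trackers in.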

Visitors should also be offered full transparency and control over trackers on the site – but it shouldn’t just be up to users to lock down their browsing habits. Stronger regulations need to be in force, and adhered to.

“How can any organisation live up to its [European General Data Protection Regulation] GDPR and ePrivacy obligations if it does not control unauthorised tracking actors accessing their website?” asked Cookiebot founder Daniel Johannsen.

“Public sector bodies now have the opportunity to lead by example – at a minimum by shutting down any digital rights infringements that they are facilitating on their own websites.”

Diego Naranjo at EDRi used the opportunity to lament the delay to the long-awaited ePrivacy Regulation, which was initially meant to be enforced as the yin to the GDPR’s yang, covering communications data rather than personal data.

However, it has been stuck in discussions between member states for more than a year, and privacy activists fear it is being watered down as a result of lobbying from adtech industry and concerns among member states.

If it does lose ground, Naranjo warned, it will “open a Pandora’s box of more and more sharing, merging and reselling of personal data in huge online commercial surveillance networks, in which citizens are being unwittingly tracked and micro-targeted with commercial and political manipulation.”

Their calls for progress echo those made by the European Data Protection Board last week. The group – made up of the bloc’s data protection watchdogs and EU data protection supervisor – issued a statement urging legislators to “intensify efforts” to adopt it.

“The future ePrivacy Regulation should under no circumstance lower the level of protection offered by the current ePrivacy Directive and should complement the GDPR by providing additional strong guarantees for all types of electronic communications,” it said. ®

Q&A: Crypto-guru Bruce Schneier on teaching tech to lawmakers, plus privacy failures – and a call to techies to act

‘Politicians are reluctant to disrupt the enormous wealth creation machine technology has turned out to be’

RSA Politicians are, by and large, clueless about technology, and it’s going to be up to engineers and other techies to rectify that, even if it means turning down big pay packets for a while.

This was the message computer security guru Bruce Schneier gave at last week’s RSA Conference in San Francisco, during a keynote address, and it appeared to strike a chord with listeners. Schneier pointed out that, for lawyers, doing pro bono work was expected and a route to career success. The same could be true for the technology industry, he opined.

We sat down with Schneier to have a chat after he had finished autographing copies of his latest book Click Here to Kill Everybody: Security and Survival in a Hyper-connected World, to go over the ideas in more detail, and to get his views on where governments are going to take us in the future. Below, our questions are in bold, and Schneier’s responses are not.

Q. Your RSAC keynote highlighted the growing mismatch between public policy and technological development. Why are lawmakers having such problems with the technology sector?

A. Tech is new. Tech is specialized and hard to understand. Tech moves fast, and is constantly changing. All of that serves to make the tech sector difficult to legislate. And legislators don’t have the expertise on staff to counter industry statements or positions. On top of that, tech is incredibly valuable.

Lawmakers are reluctant to disrupt the enormous wealth creation machine that technology has turned out to be. They’re more likely to acquiesce to the industry’s demands to leave them alone and unregulated, to innovate as they see fit.

And finally, some of the very features we might expect government to regulate – such as the rampant surveillance capitalism that has companies collecting so much of our data in order to manipulate us into buying products from their advertisers – are ones that they themselves use when election season rolls around.

Q. With technology evolving so rapidly, can any government hope to keep up on a legislative level? Or are there core values in law that can be applied?

A. Technology has reached the point where it moves faster than policy. A hundred years ago, someone could invent the telephone and give legislators and courts decades to work out the laws affecting it before the devices became pervasive.

Today, technology moves much faster. Drones, for example, became common faster than our legislators could react to their possibility. Our only hope is to either write laws that are technologically invariant, or write broad laws and leave it to the various government agencies to work out the details.

Q. You’ve called for public-interest technologists to help bridge the impasse between policy and government. How would that work exactly?

A. We need technologists in all aspects of policy: at government agencies, on legislative staffs, working with the courts, in non-government organizations, as part of the press. We need technologists to understand policy, and to help – and in some cases become – policymakers. We need this because we will never get sensible tech policy if those in charge of policy don’t understand the tech.

There are many ways to do this. Some technologists will go into policy full time. Some will do it as a sabbatical in their otherwise more conventional career. Some will do it part time on their own, or part time as part of the “personal projects” some companies allow them to have.

Q. Why would tech companies go for this? What’s in it for them?

A. Largely, the tech companies won’t go for it. The last thing they want are smart legislators, judges, and regulators. They would rather be able to spin their own stories unopposed. But I don’t need the tech companies to do anything; this is a call to tech employees.

And technologists need to understand how much power they actually have. Even the large tech monopolies that don’t compete with any other company – that treat their users as commodities to be sold – compete with each other for talent.

As employees, technologists wield enormous power. They can force the companies they work for to abandon lucrative US military contracts, or efforts to assist with censorship in China. If employees start to routinely demand the companies they work for behave more morally, the change would be both swift and dramatic.

But in the end, tech companies will value the policy experience of people who have done a tour in a government agency, or worked on a government panel. It makes them more rounded. It gives them a perspective their peers will lack.

Q. And what about the concern that this could turn into a lobbying effort by the tech sector? Is there a way to keep this honest?

A. The tech sector is already lobbying. This is the way to keep them honest, by having tech experts on the other side.

Q. The EU has instituted GDPR and the first effects are being felt. What effect do you think that’ll have globally?

A. It’s interesting to watch the global effects of GDPR. Because software tends to be write-once-sell-everywhere, it’s often easier to comply with regulations globally than it is to differentiate.

We see this most obviously in security regulations. Last year, California passed an IoT security law that, among other things, prohibits default passwords. When that law comes into force in 2020, companies won’t maintain two versions of their products: one for California and another for everyone else. They’ll update their software, and make that more secure version available globally.

Similarly, we’re already seeing many companies implement GDPR globally because it’s just easier to do that than it is to figure out who is an EU person and thus subject to the constraints of that law. The lesson is that restrictive laws in any reasonably large market are likely to have effects worldwide.

Q. Do you think the US will implement similar laws federally, or are we looking at a state-by-state basis?

A. We’re seeing two opposing trends in the US. The first is at the state level. Legislators, frustrated by the inaction in Congress, are starting to enact state privacy and security laws. California passed a comprehensive privacy law in 2018. Vermont took the first steps to regulate data brokers. New York is trying to regulate cryptocurrencies. Massachusetts and other states are also working on these issues. These are all important efforts, for the reasons I outlined above.

The other trend is that the big tech companies are starting to push for a mediocre federal privacy law that would preempt all state laws. This would be a major setback for security and privacy, of course, and I expect it to be one of the big battlegrounds in 2020.

Q. Globally, is this going to fracture or is there a broad consensus to be reached?

A. It’s already fracturing into three broad pieces. There’s the EU, which is the current regulatory superpower. There are totalitarian countries like China and Russia, which are using the Internet for social control.

And there’s the US, which is allowing the tech companies to create whatever world they find the most profitable. All are exporting their visions to receptive countries.

To me, the question is how severe this fracturing will be. ®

Public spending watchdog snipes at UK.gov’s £1.3bn infosec plan – but broadly nods it through

Less hiding behind ‘national security’ to hush up failures, please

Britain’s Cabinet Office (CO) hasn’t quite bungled the National Cyber Security Programme (NCSP) but it could certainly be doing things a lot better, the National Audit Office said today.

The NCSP is owned by the CO and is the government’s master plan for securing Blighty against ne’er-do-wells and hostile foreign states alike trying to hack and take down critical national infrastructure.

It is a £1.3bn taxpayer-funded programme, whose costs were originally pegged at £860m. Other government departments bid for a slice of that cash and spend it on their own infosec initiatives, under the Cabinet Office’s watchful eye.

“Lead departments are largely on track to deliver against their objectives, although funding for the remainder of the Programme is below the recommended level,” said the National Audit Office (NAO) this morning. It added that the CO had not properly planned how it would spend the cash when it originally secured the NCSP’s funding from the Treasury:

“The government used the Strategic Defence and Security Review and Spending Review in 2015 to establish the overall direction of cyber security expenditure and approve individual project business cases. However, when HM Treasury set the funding in 2015 the Department did not produce an overall Programme business case to systematically set out the requirement and bid for the appropriate resources.”

Of the £1.3bn total fund for the NCSP, £100m was added in a loan from the Treasury after the NCSP got under way, while £69m was cut and reallocated to anti-terror work. The NAO acidly commented:

Although these activities contributed to enhancing cyber and wider national security they were not originally intended to be funded by the Programme, and this delayed work on projects such as elements of work to understand the cyber threat.

One of its big successes, according to the NAO, was the creation of the National Cyber Security Centre in 2016, an offshoot of spy agency GCHQ. The NCSC was instrumental in helping the NHS clean up in the aftermath of the WannaCry malware outbreak of 2017.

The Cabinet Office told El Reg it was proud of what it had done so far, quietly glossing over the criticisms of its financial management of the NCSP.

“The UK is safer since the launch of our cyber strategy in 2015. We have set up the world leading National Cyber Security Centre, taken down 140,000 scam websites in the last year, and across government have helped over a million organisations become more secure,” a spokeswoman said. “We recognise that there is always more to do, and are pleased that the NAO has endorsed our plans for the future through their recommendations.”

Ominously, the NAO said: “The Department has ‘low confidence’ in the evidence supporting half of the Strategy’s strategic outcomes, and currently only expects to achieve one by 2021.”

It also added that it had been gagged from telling the public why the Cabinet Office won’t meet its own targets: “For security reasons we cannot report progress against any further strategic outcomes.”

The full report is on the NAO website. ®

Year 1 of GDPR: Over 200,000 cases reported, firms fined €56 meeelli… Oh, that’s mostly Google

2019 just a transition year, says French watchdog

European data protection agencies have issued fines totalling €56m for GDPR breaches since the law came into force last May, arising from more than 200,000 reported cases – but watchdogs have said they’re just warming up.

An assessment from the European Data Protection Board (EDPB), which is made up of regulators across the region, found that, in the first nine months, there were 206,326 cases reported under the new law from the supervisory authorities in the 31 countries in the European Economic Area.

Vivienne Artz, chief privacy officer of market data purveyor Refinitiv, cited the report (PDF), published at the end of February, at a panel event assessing the first year of GDPR at a data protection conference in London this week run by the International Association of Privacy Professionals.

About 65,000 were initiated on the basis of a data breach report by a data controller, while about 95,000 were complaints. Some 52 per cent of the overall cases have already been closed, with 1 per cent facing a challenge in national courts.

Artz said that the total fines came to €55.96m – which she observed seemed like a lot before you realise that almost all of it comes from French data watchdog CNIL’s €50m fine for Google.

Indeed, the figure emphasises the size of CNIL’s fine – which was the first it had handed out under GDPR – and the body’s director of the rights protection and sanctions directorate, Mathias Moulin, was on the panel to set out its reasoning.

He said the breach was “massive and highly intrusive”, and that the fine had been based on five factors. These included the type of violation, its scale – it was continuous, rather than a one-off, and affected lots of people and massive amounts of data – and the size of the company.

But given the huge range of potential fines – which has risen from “up to £500,000” (in the UK) to “up to” €20m or 4 per cent of annual turnover – the EDPB has also tasked data protection agencies with “harmonising” their approaches.

At the event, Stephen Eckersley from the UK Information Commissioner’s Office revealed that his organisation was working with the data protection agencies in the Netherlands and Norway to establish a “matrix” for calculating fines. This won’t be public-facing, he said, but will instead be a “toolkit” for watchdogs.

As for the ICO’s enforcement actions, he said that there were some GDPR cases in progress, but that the past year had been mostly focused on legacy investigations, with fines handed to Uber, Facebook and Equifax.

Even CNIL’s Moulin said that last year “should be considered a transition year” for GDPR, as national regulators had to focus on finalising their rules and approaches, and spent most of their time tying up probes under the previous regime.

One thing that did change immediately under GDPR, if not the fines, was the number of incident reports. This was particularly so for companies turning themselves in over data breaches.

Eckersley said there was a “massive increase” in reports of data breaches in the first month, at 1,700. This has levelled out a little, but there are still about 400 coming in each month. Overall, he expects the total to reach about 36,000 this year – up from 18,000 to 20,000 previously.

In order to deal with the increased demand – and organisations’ propensity to report “just in case” – the ICO has set up a dedicated team for personal data breaches, so data controllers have a single point of contact to help them assess whether to make a formal notification.

The panel also noted that, while data breaches are more likely to hit the headlines, there are many more complaints coming in about other aspects of privacy regulations. For instance, Eckersley said that about half of the complaints relate to the way subject access requests have been handled. ®

Sponsored: Becoming a Pragmatic Security Leader

Reg webinar: Tune in for some knowledge on how to become an effective leader in IT security

The benefits of pragmatism

Promo With companies of all sizes anxious to protect themselves from the growing danger of cyberattacks, what does it take to reach a leading role in the security field?

Tune in to our webinar on Thursday 21 March at 17:00 UTC to hear Scott King, an experienced systems engineer and former chief information security officer at Boston-based security firm Rapid7, share the wide-ranging knowledge he has gained over his long career in IT security.

People can take different paths to a top position in security: some go directly from analyst to leadership, others have a more technical background in general IT, or excellent tactical skills acquired in a consultancy or vendor role.

Communicating with business leaders about security risks and incident handling calls for a more pragmatic approach than the one you might take with technical teams. How do you make the mental shift?

Scott offers valuable hints and tips on how to balance strategy and tactics, how to deliver at every stage, and how to understand the benefits of pragmatism in your security role.

Sign up here.


UK Ministry of Fun seeks deputy director for IT as it edges away from Cabinet Office shared services

DCMS wants to quintuple in-house IT staffers… that’ll take it to 10

The Ministry of Fun is creating a position for a deputy director of IT to expand the department’s internal tech team and wean itself off the Cabinet Office’s shared services programme.

At present, the Department for Digital, Culture, Media and Sport’s organisation of techies comprises just two people, with delivery and development carried out by the Cabinet Office.

But DCMS as a whole has increased threefold in the past four years, now numbering about 1,200 staffers, and had digital slotted into its name back in 2017. It was then given responsibility for policymaking in digital and later – somewhat controversially – for data.

In a job ad for the newly created position, the department said it had made “good progress” towards “collaborative” and “agile” working while the Cabinet Office has been its IT delivery partner, but that it now plans to forge ahead alone.

“Due to the growth in the department and the increasing complexity of business needs, we are now building up the internal team to manage and develop IT with a range of delivery partners,” the ad for the role said.

The move is another signal that the Cabinet Office’s shared services plans have outlived their usefulness for government departments; the programme had already faced implementation difficulties as departments said a centralised solution didn’t meet their specific needs.

DCMS said “a number” of unspecified IT services would continue to be delivered by the Cabinet Office – but emphasised the department was making a complete change to its business model for IT delivery. The new team will work with a “wider range” of “delivery partners” to establish services and infrastructure, it said.

The new role – offered at a salary of £68,000 to £80,000 – involves overhauling the way DCMS delivers IT services, with the ad calling for “a programme of IT improvements that will transform the way we do business”.

DCMS said the successful candidate will work with new and existing IT providers and hire techies to swell the internal team to ten.

The deputy director for IT could also end up leading security at DCMS, although the ad states that responsibilities in this area are yet to be confirmed.

In addition to developing IT delivery for services and infrastructure within the department, the deputy director will be asked to consider how to develop leadership on IT functions across DCMS’ 45 arm’s-length bodies, which it said had been “limited” to date.

The job ad calls for someone with strong technical knowledge, excellent negotiation skills, experience managing IT operations in a complex multi-vendor environment, and a “significant track record of owning end-to-end in-house technology service delivery”.

The DCMS org chart data, last updated in December 2018, does not list a director for IT; but it does have a director of digital and technology policy, a director of cyber security, and a director of digital infrastructure.

The closing date for applications is 27 March, with interviews to take place on 16 May. ®


ICO, forgive me – it has been three weeks since I discovered my breach

Businesses slow to detect, report data leaks pre-GDPR

Businesses waited an average of three weeks after discovering a data breach to report it to the watchdog before GDPR came into force, with many waiting until the end of the week to ‘fess up.

According to an analysis of the 181 data breach reports submitted to the Information Commissioner’s Office in the year ended 5 April 2018, it took companies an average of 60 days to realise that they had suffered a data breach.

One company took 1,320 days – among 14 that didn’t notice for more than 100 days that their systems had been compromised. When broken down by sector, financial services and legal firms were quicker to report breaches to the ICO, averaging 16 and 20 days, respectively.

Businesses took, on average, 21 days to report the breach after they had identified it.

One firm didn’t tell the watchdog for 142 days – about 47 times longer than required under the GDPR, which states that breaches that pose a risk to the rights and freedoms of individuals must be reported within 72 hours.
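The “about 47 times longer” figure checks out once you express both periods in the same units:

```python
# 142 days expressed in hours, against the GDPR's 72-hour reporting deadline.
delay_hours = 142 * 24          # 3,408 hours
deadline_hours = 72
print(delay_hours / deadline_hours)  # roughly 47.3
```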

Another took 374 days, which – given that it was reported on 23 November 2017 – looks suspiciously like Uber as its breach hit the headlines the day before when the company ‘fessed up in the States.

The data, released under Freedom of Information laws, showed that nearly half of all breaches (87) were reported to the ICO on a Thursday or Friday.

Pen-testing firm Redscan, which requested the data, reckoned that the preference for end-of-week submissions could have been to head off negative PR.

“This might be overly cynical but I suspect that in many cases, breach disclosure on these days may have been a deliberate tactic to minimise negative publicity,” said cybersecurity director Mark Nicholls.

The FoIs also show that 91 per cent of reports didn’t include crucial information, like the impact of the breach, the recovery process or dates.

Some 93 per cent didn’t say what the impact of the breach was, or said that they didn’t know. Meanwhile, 21 per cent didn’t report an incident date to the ICO, and 25 per cent failed to report the date they discovered the incident.

Saturday was not only the most common day for businesses to suffer a data breach, with more than a quarter happening then, but also the most common day for breaches to be discovered, at about 30 per cent. ®


FBI warns of SIM-swap scams, IBM finds holes in visitor software, 13-year-old girl charged over JavaScript prank…

Tired: Booth babes. Wired: Floof babes. Expired: Conference hall carpets

Roundup This week we had an NSA reverse-engineering toolkit released at the RSA Conference, a buffer bashed aboard British Airways, big trouble brewing for Citrix, plus much more.

Along the way, a few other things happened:

Alarms raised over IP cameras

A new Internet of Things botnet could be in the works, as security outfit GreyNoise says it has seen a major uptick in machines scanning the public internet for a specific debug port used by surveillance cameras. Presumably the boxes are looking for devices to hijack via this debugging interface.

If true, this would suggest a fresh attempt to infect net-connected cameras for use in an IoT botnet – like Mirai, the massive collection of infected IoT equipment that has menaced the internet in various forms for years.

If you do run an IP-enabled camera, you would be wise to check for and install any available firmware updates, or firewall off TCP port 9527 just to be on the safe side.
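If you want to check your own kit before reaching for the firewall rules, a quick TCP probe will tell you whether anything answers on that port. A minimal sketch – the address below is a documentation placeholder, not a real camera:

```python
import socket

def port_is_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: does a camera on the LAN expose the debug port mentioned above?
# if port_is_open("192.0.2.10", 9527):
#     print("debug port reachable - firewall it off")
```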

FBI warns of SIM-swapping outbreak

Holding a substantial amount of crypto-currency? You may want to take a close look at your multi-factor authentication settings on your online accounts, particularly your email, and protections on your cellphone plan.

The FBI is warning of what it says is an uptick in SIM-swapping fraud incidents. Criminals call a target’s phone carrier’s customer support, and, through blagging and social engineering, request that their mark’s mobile phone number be switched to a SIM card in a device belonging to the crooks.

Should the transfer work, the thieves then attempt to reset the password on the victim’s email account, using the two-factor authentication code sent to the mark’s phone number, which is directed to the crim’s handset. From there, the miscreants can reset the password on the victim’s cloud-based crypto-coin wallets, and drain it of digital dosh.

Ideally, switch to physical hardware tokens to protect accounts, or at least authentication apps, and/or call your carrier and put SIM-transfer protections on your plan.

“The FBI has seen an increase in the use of SIM swapping by criminals to steal digital currency using information found on social media,” said Special Agent John Bennett from the FBI San Francisco Division.

“This includes personally identifying information or details about the victim’s digital currency accounts.

“The FBI wants to help individuals make themselves harder targets and, if they are victimized, to quickly regain control of their accounts to mitigate any potential harm.”

In brief… If you’re wondering how some iOS jailbreakers and other infosec researchers crack certain parts of Apple’s iPhone security so fast when a new device comes out, it’s probably because they obtain prototypes of the hardware that have security measures disabled, allowing them to poke around the firmware for vulnerabilities…

Google temporarily switched off Android TV photo-sharing after a privacy-busting bug caused hundreds of strangers’ pictures to start showing up in the “linked accounts” feature in people’s accounts…

Debt collectors and stalkers have been caught pretending to be cops to extract folks’ smartphone location data from telcos in the US…

Chelsea Manning was jailed on Friday for refusing to testify before a US grand jury probing WikiLeaks and its document dumps. The military whistleblower, or diplomatic cables leaker, depending on where you stand, will remain behind bars until she changes her mind, or the jury completes its investigation…

Finally, vulnerability hunter Victor Gevers detailed 18 MongoDB databases he found facing the public internet that appear to be part of China’s social-media-monitoring system that’s not terribly unlike the NSA’s PRISM program, processing 364 million online profiles and their chats and file transfers daily.

Security MadLibs! Hackers can steal your medical records by exploiting your ultrasound scan

Thanks to the terrible state of IT security in various medical facilities, here’s yet another example of patient records being put at risk by obsolete devices.

Researchers at Check Point stumbled upon an ultrasound machine that could be compromised to steal patient medical data. See the vid below for more…

Youtube Video

In this case, Check Point says, the ultrasound machines use Windows 2000, an OS so outdated that it is trivial for an attacker who has infiltrated a hospital IT network to crack open. As the bug-hunters note, this is not just a privacy risk for the patients, but also a legal liability for the hospitals, which could be on the hook for heavy fines and lawsuits should they allow patient records to fall into the wrong hands.

Japanese teen charged for JavaScript loop prank

A 13-year-old girl in Japan has been charged with computer crimes after she allegedly copied and shared a JavaScript infinite loop script as a prank.

Reportedly, the unnamed girl linked to the script on a message board, causing anyone who followed the link to see an alert dialog box that automatically, on some browsers, respawned itself every time the user clicked the “OK” button.

Hardly the Stuxnet worm, but apparently it was serious enough for the police in Kariya to charge the teen with distributing malicious computer code.

IBM says hospitality kiosks are being lousy hosts when it comes to security

Researchers with IBM are warning that some of the automatic desktop reception systems used to process building guests are rife with bugs.

Big Blue’s Red Team found that a number of popular visitor management systems (things like automated guest registration for offices) contain some basic security holes, like default admin credentials, enabled breakout keys that opened the Windows desktop, and data leakage bugs that would expose employee information.

This, says IBM, is particularly bad because these systems are, by design, left open to world + dog.

“Considering that these systems are intentionally physically exposed to outsiders and have a role in the security of an organization, they should be developed with security in mind throughout the product life cycle and should include physically present attackers in their threat model,” IBM says.

“However, our team has identified vulnerabilities in a number of visitor management system products that could prevent them from achieving that goal.”

Kittens and puppies put the “Awww!” in RSA Conference

Let’s face it, RSA Conference isn’t always a lot of fun. It’s crowded, the bathroom lines are long, the marketing bullshit is often turned up to 11, and this year the weather in its host city San Francisco was awful.

If you were lucky enough to wander over to one particular corner of the show, however, there were two booths that were sure to make your day a bit better, thanks to some furry friends in search of a home.

Two companies opted to supplement the usual crew of bored execs and chipper marketing folks with some shelter pets, or floof babes as we like to call them.

Tinfoil Security, a company specializing in security and vulnerability scanning tools for developer APIs, teamed up with the Humane Society of Silicon Valley to let convention-goers meet Grace and Hopper, a pair of foster-kittens picked because their easy-going and friendly nature left them unfazed by the hustle and bustle of the show floor.

Hopper the cat at RSA

Hopper, reflecting the mood of every RSA attendee by day 3

ThreatQuotient, a vulnerability management and intelligence platform, meanwhile brought in a handful of puppies from Finding a Best Friend Rescue to brighten everyone’s day. Those willing to use hand sanitizer and disinfecting spray were even able to get some quality snuggle time with the junior doggos.

Bruce the puppy at RSA

Cuddles with Bruce the pup: better than any booth swag

Playing with puppies and kittens was a nice respite from the expo floor and a great way for two of the smaller companies at RSA Conference to make themselves stand out, but more importantly, the two booths served as a reminder that there are many great cats and dogs looking for a home.

Hopefully a few attendees, upon returning home, will consider going over to their local shelter or rescue group and taking in a furry friend of their own. ®


Liz Warren: I’ll smash up Amazon, Google, and Facebook – if you elect me to the White House

‘They’ve bulldozed rivals, used our private info for profit’ … yes, yes, but could she actually tackle giants as prez?

Analysis US presidential contender Elizabeth Warren has vowed that if elected she would break up Amazon, Google, and Facebook, accusing the internet giants of abusing their market power.

“Today’s big tech companies have too much power,” Senator Warren (D-MA) wrote in an essay published on Friday. “Too much power over our economy, our society, and our democracy. They’ve bulldozed competition, used our private information for profit, and tilted the playing field against everyone else. And in the process, they have hurt small businesses and stifled innovation.”

The high-profile Dem, best known for her fierce criticism of banks in the aftermath of the economic crisis, is outlining her main policy ideas in a crowded field of Democratic 2020 presidential hopefuls.

Although Washington DC has discussed action against large tech companies for several months in the aftermath of a slew of scandals, Warren is the first person to directly advocate for breaking them up.

She offers the antitrust actions taken against Microsoft in the 1990s as a template for breaking up the next generation of tech giants, and argues that Facebook, Google and Amazon in particular are using their dominance in one market to carry out anti-competitive actions in others.

Her approach would see tech platforms designated “platform utilities” through legislation and then any arms of existing companies that operate on top of those platforms could be broken away from the main utility.

Platform utilities would not be allowed to transfer or share data with third parties, and the designation would be applied to any company that has a global revenue of $25bn or more and offers a public marketplace, exchange or platform.

So, for example, Amazon would be allowed to continue to own and run its dominant ecommerce platform – which accounts for an extraordinary 50 per cent of ecommerce in the US. But it would not be allowed to own its Amazon Basics and Amazon Marketplace products, which compete with others on the platform and, according to Warren, have a clear competitive advantage.

Likewise, Google Search would be designated a platform utility and so the company would be required to break off its Google ad exchange.

Mergers and acquisitions

A second related action would see mergers viewed as anti-competitive blocked and in some cases unwound. Warren lists some specific examples: Amazon and Whole Foods, Zappos; Facebook and WhatsApp, Instagram; Google and Waze, Nest and DoubleClick.

“I will appoint regulators who are committed to using existing tools to unwind anti-competitive mergers,” she wrote, arguing: “Unwinding these mergers will promote healthy competition in the market – which will put pressure on big tech companies to be more responsive to user concerns, including about privacy.”

And to make the actions stick, Warren proposed a European GDPR-style fine of five per cent of annual revenue (which is actually larger than the GDPR maximum of four per cent).
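A back-of-the-envelope comparison of the two ceilings, using a hypothetical company with €25bn in annual revenue (the figure is invented for illustration):

```python
def gdpr_cap(annual_turnover: float) -> float:
    """GDPR ceiling: the greater of EUR 20m or 4 per cent of annual turnover."""
    return max(20_000_000.0, 0.04 * annual_turnover)

def warren_cap(annual_revenue: float) -> float:
    """Warren's proposed ceiling: 5 per cent of annual revenue."""
    return 0.05 * annual_revenue

revenue = 25_000_000_000.0   # EUR 25bn, hypothetical
print(gdpr_cap(revenue))     # 1.0bn under GDPR
print(warren_cap(revenue))   # 1.25bn under the Warren proposal
```

Note the GDPR floor of €20m means that for small firms the two schemes diverge even more sharply, since 5 per cent of a modest turnover can fall well below €20m.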

It is a bold position – and one that marks Warren out from her competition. And, naturally, it has already picked up fierce opponents and defenders.

Think-tank the Competitive Enterprise Institute (CEI) called the idea a “doomed regulatory experiment” and claimed there were no current barriers to entry in the internet space.

Likewise, tech trade group NetChoice – which includes many tech giants amid its members – said that Warren’s ideas would “increase prices for consumers, make search and maps less useful, and raise costs to small businesses that advertise online.”

In defense of the plan there is Public Knowledge, which argued that “the time has come to engage in a serious debate about sector-specific regulation of digital platforms” and said legislation that focused on increasing competition on digital platforms was needed.

And internet advocacy group Demand Progress tweeted its support, arguing that “momentum is building to take back our democracy from the tech giants. This is a landmark proposal whose time has come.”

So… good idea, or not?

The two big questions surrounding Warren’s proposal are: is it needed? And is this the best way to achieve it?

There is little doubt that something needs to be done to pull back what have been clear abuses of market power. Facebook has become a law unto itself and has repeatedly abused its users’ trust when it comes to sharing personal data, yet feels sufficiently empowered to continue its behavior, misleading lawmakers and users and in some cases outright lying about its actions.


The reason Facebook is able to run roughshod over the clear concerns of users and legislators is its near total dominance of social media: there is no easy alternative to its service for connecting and sharing information online.

Facebook is not like Google search or Amazon’s marketplace, however: despite its determined efforts, Facebook is still not a place people go to buy products and so there is little chance for it to edge out third parties.

Warren claims that Facebook’s willingness to buy and take over any company that threatens its dominance in its market – WhatsApp and Instagram being the obvious examples – has resulted in people simply not bothering to create possible alternatives to Facebook.

For some reason, she doesn’t mention Snapchat – a viable competitor that has seen its most original ideas copied almost instantly by Facebook as a way of blocking its growth. And while VCs used to throw money at startups in the hope they would one day get bought by Google or Facebook, that approach does appear to have ended – or at least slowed considerably. Facebook is now more likely to copy and squash a competitor than take it over.

If nothing else, the fact that Facebook has been shown to be a morally bankrupt company and yet users constantly opine that they don’t really see an alternative is a sign that the market is distorted.

But the question is whether Warren’s approach is the right one. Privacy legislation that obliges Facebook to not abuse its position could be just as effective and wouldn’t rely on the federal government deciding how markets should be structured.


That marketing email database that exposed 809 million contact records? Maybe make that two-BILLION-plus

‘This is a gigantic amalgamation of data all in one place’ expert tells El Reg

An unprotected MongoDB database belonging to a marketing tech company exposed up to 809 million email addresses, phone numbers, business leads, and bits of personal information to the public internet, it emerged yesterday.

Today, however, it appears the scope of that security snafu was dramatically underestimated.

According to cybersecurity biz DynaRisk, there were four databases exposed to the internet – rather than just the one previously reported – bringing the total to more than two billion records weighing in at 196GB rather than 150GB. Anyone knowing where to look on the ’net would have been able to spot and siphon off the data, without any authentication.

“There was one server that was exposed to the web,” explained Andrew Martin, CEO and founder of DynaRisk, in an email to The Register on Friday. “On this server were four databases. The original discovery analysed records from mainEmailDatabase. The additional three databases were hosted on the same server, which is no longer accessible.”

Martin said he believes the original analysis may have been conducted with limited time or computing power, which would explain the lesser number of records found. “Our analysis was conducted over all four databases and extracted over two billion email addresses which is more than the 809 million first discussed,” he said.
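Rough arithmetic on those revised totals (assuming decimal gigabytes, and taking “more than two billion” as simply two billion) suggests each record is tiny – consistent with short contact entries rather than rich documents:

```python
total_bytes = 196 * 10**9       # 196GB, as reported
total_records = 2 * 10**9       # "more than two billion", approximated as 2bn
print(total_bytes / total_records)  # 98.0 bytes per record, on average
```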

The databases were operated by Verifications.io, which provides enterprise email validation – a way for marketers to check that email addresses on their mailing lists are valid and active before firing off pitches. The Verifications.io website is currently inaccessible.

The database first reported included the following data fields, some of which, such as date of birth, qualify as personal information under various data laws:

Email Records (emailrecords): a JSON object with the keys id, zip, visit_date, phone, city, site_url, state, gender, email, user_ip, dob, firstname, lastname, done, and email_lower_sha256.

Email With Phone (emailWithPhone): No example provided but presumably a JSON object with the two named attributes.

Business Leads (businessLeads): a JSON object with the keys id, email, sic_code, naics_code, company_name, title, address, city, state, country, phone, fax, company_website, revenue, employees, industry, desc, sic_code_description, firstname, lastname, and email_lower_sha256.
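To make the emailrecords shape concrete, here is a purely hypothetical entry: every value below is invented, and the lowercase-hash field is assumed (from its name) to be a SHA-256 digest of the lowercased address – a common way to join records without comparing addresses directly:

```python
import hashlib

# Hypothetical record in the shape described above; all values invented.
record = {
    "id": "0001",
    "firstname": "Jane",
    "lastname": "Doe",
    "email": "Jane.Doe@example.com",
    "zip": "94107",
    "city": "San Francisco",
    "state": "CA",
    "gender": "f",
    "dob": "1980-01-01",
}

# Assumed derivation of email_lower_sha256: hash the lowercased address.
record["email_lower_sha256"] = hashlib.sha256(
    record["email"].lower().encode("utf-8")
).hexdigest()

print(record["email_lower_sha256"][:16])
```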

The image below shows Verifications.io’s four MongoDB databases exposed to the internet, as identified by DynaRisk:

Image of exposed databases

Martin said the impact of the security blunder is less severe than might be feared because there are no credit card numbers, medical records, nor any other super-sensitive information involved.

“The issue here is this is a gigantic amalgamation of data all in one place,” he explained. “The leaking of this information may breach data protection regulations in various countries. The leak may also violate the privacy and security provisions between Verifications.io and its clients within their contracts.”

Bob Diachenko, a security researcher for consultancy Security Discovery, found the first Verifications.io database online, and said the marketing tech biz, based in Tallinn, Estonia, acknowledged the gaffe and hid the data silos from public view after he flagged it up.

Verifications.io told Diachenko that its company database was “built with public information, not client data.” This suggests at least some of the email addresses and other details in the company’s databases were downloaded or scraped from the internet.

Diachenko didn’t immediately respond to a request for comment.


Security researcher Troy Hunt, who maintains the HaveIBeenPwned database of email accounts that have been exposed in online data dumps, said about a third of the email addresses in the Verifications.io database are new to HaveIBeenPwned. The other two thirds presumably were culled from the same online sources that supplied Hunt’s archives.

Martin said Verifications.io’s claim that its data came from public sources is open to interpretation. “These data sources might have been public at one time in the past and then not public at a later time,” he said. “It would be interesting to know if the company had a process of continuous compliance where they would validate if they were still allowed to store the data over time.”

Dtex, a security biz that focuses on the dangers of rogue or slipshod employees within businesses, said in its recent 2019 Insider Threat Intelligence Report that 98 per cent of incidents involving data left exposed in the cloud can be attributed to human error.

MongoDB versions prior to 2.6.0, released in 2014, were network accessible by default. Reversing that default setting hasn’t persuaded people to securely configure their MongoDB installations, though. Out of the box, MongoDB requires no authentication to access, a detail a lot of folks appear to overlook. ®
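Locking a deployment down is a two-line job in the server config: bind mongod to loopback (the modern packaged default) and require authentication. A minimal mongod.conf sketch:

```yaml
# mongod.conf - listen on loopback only and require login
net:
  bindIp: 127.0.0.1        # never expose an unauthenticated mongod publicly
security:
  authorization: enabled   # clients must authenticate before reading data
```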
