Data cops order Ireland to delete 3.2m records after ID card wheeze ruled to be ‘unlawful’

Splash one for GDPR

Ireland’s Data Protection Commission (DPC) has ordered the country to delete 3.2 million people’s personal data after ruling that its national ID card scheme was “unlawful from a data-processing point of view”.

Speaking to the Irish Times, data protection commissioner Helen Dixon described the scheme as “unlawful” and ordered Ireland’s Department of Social Protection to stop collecting and processing people’s personal data for the project.

Laws underpinning the ID card, Dixon said, had been misinterpreted by the Irish state to give it total freedom to do as it pleased with the data it hoovered up when that was not the case. In a statement about the Public Services Card, the DPC said: “In practical terms, a person’s capacity to access public services both offline and online is now contingent, in an ever-increasing range of contexts, on obtaining and producing a PSC [Public Services Card].”

The Republic of Ireland’s total population is around 4.8 million, meaning around two-thirds of the Emerald Isle’s inhabitants had signed up to the scheme. It was used for everything from “the issuing of driver’s licences or passports, to decisions to grant or suspend payments or benefits under the social protection code, to the filing of appeals against decisions about the provision of school transport.”

But the DPC found that the state was effectively acting as if data protection laws didn’t apply to it at all.

The Department’s blanket and indefinite retention of underlying documents and information provided by persons applying for a PSC contravenes Section 2(1)(c)(iv) of the [Irish] Data Protection Acts, 1988 and 2003 because such data is being retained for periods longer than is necessary for the purposes for which it was collected.

However, in the detail the DPC did admit that data collected by the Department of Social Protection could be used for its intended purpose – just not for other government departments.

“Ultimately, we were struck by the extent to which the scheme, as implemented in practice, is far-removed from its original concept,” thundered the DPC. “Instead, the card has been reduced to a limited form of photo-ID, for which alternative uses have then had to be found.”

The scrapping of the scheme has close parallels with UK attempts at a national ID card, an idea enthusiastically promoted by Tony Blair’s New Labour government in the 2000s and promptly scrapped by the Conservative-Lib Dem coalition that came to power in 2010.

Concerningly, Conservative-leaning think tanks have forgotten the £300m wasted on UK ID cards and have begun reheating calls to bring them back as some kind of technological wand that will magically solve all government administration woes. ®


‘Deeply concerned’ UK privacy watchdog thrusts probe into King’s Cross face-recognizing snoop cam brouhaha

ICO wants to know if AI surveillance systems in central London are legal

The UK’s privacy watchdog last night launched a probe into the use of facial-recognition technology in the busy King’s Cross corner of central London.

It emerged earlier this week that hundreds of thousands of Britons passing through the 67-acre area were being secretly spied on by face-recognizing systems. King’s Cross includes Google’s UK HQ, Central Saint Martins college, shops and schools, as well as the bustling eponymous railway station.

“I remain deeply concerned about the growing use of facial recognition technology in public spaces, not only by law enforcement agencies but also increasingly by the private sector,” said Information Commissioner Elizabeth Denham in a statement on Thursday.

“We have launched an investigation following concerns reported in the media regarding the use of live facial recognition in the King’s Cross area of central London, which thousands of people pass through every day.”

The commissioner added her watchdog will look into whether the AI systems in use at King’s Cross are on the right side of Blighty’s data protection rules, and whether the law as a whole has kept up with the pace of change in surveillance technology. She highlighted that “scanning” people’s faces as they go about their daily business is a “potential threat to privacy that should concern us all. That is especially the case if it is done without people’s knowledge or understanding.”

Earlier this week, technology lawyer Neil Brown of decoded.legal told us that businesses must have a legal basis under GDPR to deploy the cameras as it involves the processing of personal data. And given the nature of encoding biometric data – someone’s face – a business must also have satisfied additional conditions for processing special category data.

“Put simply, any organisations wanting to use facial recognition technology must comply with the law – and they must do so in a fair, transparent and accountable way,” Denham continued.

“They must have documented how and why they believe their use of the technology is legal, proportionate and justified. We support keeping people safe but new technologies and new uses of sensitive personal data must always be balanced against people’s legal rights.”

This comes after London Mayor Sadiq Khan demanded more information on the use of the camera systems, and rights warriors at Liberty branded the deployment “a disturbing expansion of mass surveillance.”

Argent, the developer that installed the CCTV cameras, admitted it uses the tech, and insisted it is there to “ensure public safety.” It is not exactly clear how or why the consortium is using facial recognition, though.

A Parliamentary body, the Science and Technology Select Committee, called in mid-July for a “moratorium on the current use of facial recognition” tech and “no further trials” until there is a legal framework in place. And privacy campaign groups have tried to disrupt police trials. The effectiveness of those trials has also proved dubious.

Early last month, researchers from the Human Rights, Big Data & Technology Project at the University of Essex Human Rights Centre found that use of the creepy cams is likely illegal, and that their success rates are highly dubious.

Use of the technology in the US has also been contentious, and on Wednesday this week, the American Civil Liberties Union said tests showed Amazon’s Rekognition system incorrectly matched one in five California politicians against a database of 25,000 criminal mugshots. ®


You’re all set for your long summer vacation. Suddenly a text arrives. It’s the CEO. ‘Data strategy by Friday plz’

Fret not. Here’s a gentle guide to drawing up a plan to take the pain away from your info management

Go back 15 years and big data as a concept was only just beginning.

We were still shivering in an AI winter because people hadn’t yet figured out how to funnel large piles of data through fast graphics processors to train machine-learning software. Applications that gulped down data by the terabyte were thin on the ground. Today, these gargantuan projects are driving the reuse of data within organizations, enabling staff to access a central pool of information from multiple systems. The aim of the game is to give your organization a competitive advantage over its rivals.

Capturing, managing, and sharing data brings challenges involving everything from privacy and security to quality control, ownership, and storage. To surmount those challenges, you need what some folks call a “data strategy.” And you’re probably wondering, rightly, what on Earth does that actually mean?

First, why do you need one?

A strategy at the top coordinates things at the bottom. Without a top-level data strategy, you may end up with multiple projects overlapping each other, creating data sets that are too similar or otherwise in conflict with each other. A data strategy should define a single source of truth in your organization, thus avoiding any inconsistencies and duplication. This approach also reduces your storage needs by making people draw from the same data pool and reuse the same information.

It also helps with compliance. Take, for example, GDPR and similar privacy regulations that give people the right to request and delete data you hold on them. How will your organization painlessly and reliably find these requested records if your piles of data are not properly classified and managed centrally somehow? Your data strategy should set out how you classify and organize your internal information.
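
To make that concrete, here is a minimal sketch in Python of the kind of central catalogue a data strategy might mandate. Every dataset, field, and owner below is invented for illustration; the point is that a subject access or erasure request becomes a catalogue query rather than an organization-wide scavenger hunt.

```python
from dataclasses import dataclass

# Hypothetical catalogue entry: each dataset registered under the data
# strategy carries standard metadata (location, owner, sensitivity).
@dataclass
class CatalogEntry:
    dataset: str           # logical name, e.g. "crm.contacts"
    location: str          # physical store, e.g. "s3://lake/crm/contacts"
    owner: str             # accountable team
    sensitivity: str       # "public", "internal", "personal" or "special"
    subject_id_field: str  # column identifying the data subject, if any

CATALOG = [
    CatalogEntry("crm.contacts", "s3://lake/crm/contacts", "sales-ops", "personal", "email"),
    CatalogEntry("web.clickstream", "s3://lake/web/clicks", "analytics", "personal", "user_id"),
    CatalogEntry("finance.ledger", "s3://lake/fin/ledger", "finance", "internal", ""),
]

def datasets_for_subject_request() -> list:
    """Every dataset that must be searched to answer a GDPR access or
    erasure request: anything classified as holding personal data."""
    return [e for e in CATALOG if e.sensitivity in ("personal", "special")]

for entry in datasets_for_subject_request():
    print(f"search {entry.location} on {entry.subject_id_field} (owner: {entry.owner})")
```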

A data strategy can also lower response times for data requests. If you know what kinds of data you have, where it is all stored and how it is all organized, and that it is clean and easily accessible, as demanded by your strategy, you can answer queries fast. This is something McKinsey reckons can help reduce your IT costs and investments by 30 to 40 per cent. Finally, a data strategy should define how you determine and maintain the provenance of your enterprise data so that you always know exactly where it came from.

Inside a data strategy

Back to the crucial question at hand: what exactly is a data strategy? According to SAS [PDF], it should cover five components: identification, storage, governance, provisioning, and process. We’ll summarize them here:

Identification
The strategy should insist data is consistently classified and labeled, and define how to do exactly that, so that information can be shared smoothly and easily across systems and teams. This must include a standard definition for metadata, such as the type of each information record or object, its sensitivity, and who owns it.

This is something you should implement as soon as possible: for example, slipshod front-office data entry today will wreck the quality of your back-end databases tomorrow. Applications should be chosen and configured so that users enter information and metadata consistently, clearly, and accurately, and only enter relevant information.
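
As a gentle illustration of what enforcing such a standard at the point of entry can look like, here is a small Python sketch. The field names and allowed values are ours, not any published schema; the idea is simply that records failing classification never reach the back end.

```python
# Illustrative metadata standard: these allowed values are invented for
# the sketch, not taken from any real schema.
ALLOWED_TYPES = {"customer", "order", "document"}
ALLOWED_SENSITIVITY = {"public", "internal", "personal", "special"}

def validate_metadata(meta: dict) -> list:
    """Return a list of problems; an empty list means the record may be stored."""
    problems = []
    if meta.get("type") not in ALLOWED_TYPES:
        problems.append(f"unknown type: {meta.get('type')!r}")
    if meta.get("sensitivity") not in ALLOWED_SENSITIVITY:
        problems.append(f"unknown sensitivity: {meta.get('sensitivity')!r}")
    if not meta.get("owner"):
        problems.append("missing owner")
    return problems

# A well-formed record passes; a sloppy one is rejected with reasons.
assert validate_metadata({"type": "customer", "sensitivity": "personal", "owner": "sales-ops"}) == []
print(validate_metadata({"type": "invoice", "sensitivity": "secret"}))
```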

Storage
This data needs to live somewhere, and your strategy should document where information is held and how it is backed up. Whether it is structured or unstructured, the data should at least be stored in its original raw format to avoid any loss in quality. You could even consider placing it all in what is now called a data lake. To give you an idea of what we mean by this, Goldman Sachs, in its attempt to become the Google of Wall Street, shunted more than 13PB of material into a data lake built on top of Hadoop. This deep pool holds all sorts of material, from research notes to emails and phone call recordings, and serves a variety of folks including analysts, salespeople, equities traders, and sales bots. The investment bank also uses these extensive archives for machine learning.

Data lakes are often built separately from core IT systems, and each one maintains a centralized index for the data it holds. They can coexist with enterprise data warehouses (EDWs), acting as test-and-learn environments for large cross-enterprise projects while EDWs handle intensive and important transactional tasks.

Physically storing and backing up this data in a reliable and resilient manner is a challenge. As the amount of information that can be hoarded increases, factors like scale-out, distributed storage, and the provisioning of sufficient compute capacity to process it all must feature in your data strategy. You must also specify the usual table stakes of redundancy, failovers, and so on, to keep systems operational in the event of disaster.

For on-premises storage, you can use distributed nodes in the form of just-a-bunch-of-disks (JBOD) boxes sitting close to your compute nodes. This is something Hadoop uses a lot, and it’s good for massively parallel data processing. The downside is that JBOD gear typically needs third-party management software to be of any real use. An alternative approach is to use scale-out network-attached storage (NAS) boxes that include their own management functionality and provide things like tiered storage. On that subject, consider all-flash storage or even in-memory setups involving SAP HANA for high-performance data-crunching.

That’s fine, you say, but what about servers with direct-attached storage containing legacy application data? You don’t necessarily need to ditch all that older kit and bytes right away. With data repositories now so large, it’s difficult to store them in one place, and with data living in so many areas of an organization, you may find that data virtualization is an option. Data virtualization can be used to create a software layer in front of these legacy data sources, allowing new systems to interface with the old.
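
Here is a rough sketch of that idea in Python, with invented class names: a thin routing layer presents one interface to new systems while adapters hide whether the bytes actually live in a legacy application or the lake.

```python
from abc import ABC, abstractmethod

class DataSource(ABC):
    """Common interface that new systems program against."""
    @abstractmethod
    def fetch(self, key: str) -> dict: ...

class LegacyAppSource(DataSource):
    """Adapter over, say, a legacy app's direct-attached storage."""
    def fetch(self, key: str) -> dict:
        # In reality: query the old database or parse its files.
        return {"key": key, "origin": "legacy-app"}

class LakeSource(DataSource):
    """Adapter over the central data lake."""
    def fetch(self, key: str) -> dict:
        return {"key": key, "origin": "data-lake"}

class VirtualLayer:
    """Routes requests to whichever backing store holds the data."""
    def __init__(self, routes: dict):
        self.routes = routes  # key prefix -> backing source

    def fetch(self, key: str) -> dict:
        for prefix, source in self.routes.items():
            if key.startswith(prefix):
                return source.fetch(key)
        raise KeyError(key)

layer = VirtualLayer({"crm/": LegacyAppSource(), "web/": LakeSource()})
print(layer.fetch("crm/customer-42"))  # served by the legacy adapter
```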

You don’t need to do it all on-premises, of course. Cloud-based data lakes are a thing, though you may need to compose them from various services. For example, Amazon uses CloudFormation templates to bolt together separate services such as Lambda, S3, DynamoDB, and ElasticSearch to create a cloud-based data lake. Goldman Sachs uses a combination of AWS and Google Cloud Platform as part of its data strategy.
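
For a flavour of what bolting those pieces together involves, here is the imperative equivalent of a fragment of such a template, sketched in Python with boto3. The bucket and table names are invented, it assumes valid AWS credentials, and in practice you would let CloudFormation manage these resources declaratively.

```python
import boto3

s3 = boto3.client("s3", region_name="eu-west-1")
dynamodb = boto3.client("dynamodb", region_name="eu-west-1")

# Raw-data landing zone for the lake (hypothetical bucket name).
s3.create_bucket(
    Bucket="example-corp-data-lake-raw",
    CreateBucketConfiguration={"LocationConstraint": "eu-west-1"},
)

# A catalogue/index table, one of the lake's supporting services.
dynamodb.create_table(
    TableName="data-lake-catalog",
    AttributeDefinitions=[{"AttributeName": "dataset", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "dataset", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",
)
```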

Finally, you need to include forward plans for your organization: you may not need, say, real-time analytics right now, though you may need to add this capability and others like it in future. Give yourself room in your data strategy to expand and adapt, using gap analysis to guide your decisions.

Governance
With all sorts of information flowing in, you don’t want to end up with a cesspool of unmanaged and unclean records, owned and accounted for by no one. Experts call this a data swamp, and it has little value. Avoid this with the next part of your data strategy: governance.

Governance is a huge topic and with so many elements to it, the Data Management Body of Knowledge [PDF], aka the DAMA DMBOK, is a good place to start when drafting your strategy. McKinsey, meanwhile, advises making people accountable for data by grouping information into domains – such as demographics, regulatory data, and so on – and putting someone in charge of each area. Then, have them all sit under a central governance unit that dictates things such as policies, tools, processes, security, and compliance.

Process
Your data strategy should not focus on just importing, organizing, storing, and retrieving information. You must document how you will process your pool of raw data into something useful for customers and staff: how will your precious information be transformed, presented, combined and assembled, or whatever else is needed to turn it into a product. The aim here is to plan an approach that does not involve any overlapping efforts or duplicated code, applications, or processes. Just as you strive to reuse data across your organization without duplication, you should develop a strategy that ensures the same applies to your processes: well-defined pipelines or assembly lines that turn raw data into polished output.
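
As a toy sketch of what such a pipeline can look like in Python (our own example, not a prescribed pattern): each step is a small reusable function and the pipeline is just their composition, so no two teams need to duplicate the cleaning or enrichment logic.

```python
from functools import reduce
from typing import Callable

Step = Callable[[dict], dict]

def clean(record: dict) -> dict:
    """Strip stray whitespace from every string field."""
    return {k: v.strip() if isinstance(v, str) else v for k, v in record.items()}

def enrich(record: dict) -> dict:
    """Derive a new field from an existing one (here: email domain)."""
    return {**record, "domain": record.get("email", "").split("@")[-1]}

def pipeline(*steps: Step) -> Step:
    """Compose steps left to right into a single reusable transformation."""
    return lambda record: reduce(lambda r, step: step(r), steps, record)

to_product = pipeline(clean, enrich)
print(to_product({"email": "  jane@example.com "}))
# {'email': 'jane@example.com', 'domain': 'example.com'}
```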

Provisioning
Finally, you need to think about getting the data where it is needed. This involves not just defining sets of policies and processes on how data will be used, but also – potentially – changes to your IT infrastructure to accommodate them. Goldman Sachs, for example, published a set of APIs that allowed customers and partners to see and access data in addition to internal users.
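
A minimal sketch of the provisioning idea, assuming a Python shop and an entirely invented dataset name: serve governed data through an API rather than handing out database credentials.

```python
from flask import Flask, abort, jsonify

app = Flask(__name__)

# Stand-in for the governed data pool; the dataset name is made up.
DATASETS = {"fx-rates": [{"pair": "GBP/USD", "rate": 1.21}]}

@app.route("/v1/datasets/<name>")
def get_dataset(name: str):
    """Return a dataset by name, or 404 if it isn't provisioned."""
    if name not in DATASETS:
        abort(404)
    return jsonify(DATASETS[name])

if __name__ == "__main__":
    app.run(port=8080)
```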

Writing it all up

It’s one thing to have aspirations about strategy. Now you have to write it down and stick to it. Don’t be afraid to keep it simple. Be realistic. Make it crystal clear so that staff reading it know exactly what needs to be done. Grandiose and poorly defined initiatives are costly and difficult to implement. With vague goals and big price tags, you can quickly run into trouble with budgets and deadlines.

Break your data strategy and accompanying infrastructure changes into discrete goals, such as providing new reporting services, reaching a particular level of data quality, and setting up pilot projects. Identify the domains of data you intend to create, and develop a roll-out plan for each domain. Dedicate individual teams to each domain: they own it, they clean and maintain it, they govern it, and they provide it.

Your data lake doesn’t have to start off as some vast ocean. It can grow in size and functionality over time, starting off as a resource of raw data that data scientists can experiment with. Later, as it matures, you can integrate it with EDWs and perhaps even use it to replace some operational data stores. There’s nothing wrong with a medium-sized data pond to start with. Goldman Sachs’ data lake contained just 5PB back in 2013.

An effective data strategy will mean tweaking your organization, your governance process, and your data gathering and management processes. More than that, though, it will mean taking a long, hard look at your infrastructure. It isn’t feasible for most companies to wipe their entire IT portfolio clean and start again, though you can modernize parts of it as your strategy evolves. ®


US still ‘not prepared’ in event of a serious cyber attack and Congress can’t help if it happens

Politicians appeal to hackers to take up the fight

DEF CON Despite some progress, the US is still massively underprepared for a serious cyber attack and the current administration isn’t helping matters, according to politicians visiting the DEF CON hacking conference.

In an opening keynote, representatives Ted Lieu (D-CA) and James Langevin (D-RI) were joined by hackers Cris Thomas, aka Space Rogue, and Jen Ellis (Infosecjen) to discuss the current state of play in government preparedness.

“No, we are not prepared,” said Lieu, one of only four trained computer scientists in Congress. “When a crisis hits, it’s too late for Congress to act. We are very weak on a federal level, nearly 20 years after Space Rogue warned us we’re still there.”

Thomas testified before Congress 20 years ago about the dangers that the internet could pose if proper steps weren’t taken. At today’s conference he said there was much still to be done but that he was cautiously optimistic for the future, as long as hackers put aside their issues with legislators and worked with them.

“As hackers we want things done now,” he said. “But Congress doesn’t work that way; it doesn’t work at the ‘speed of hack’. If you’re going to engage with it, you need to recognise this is an incremental journey and try not to be so absolutist.”

Three no Trump

He pointed out that the current administration was actually moving backwards, having placed less of a priority on IT security than past administrations. The session’s moderator, former Representative for California Jane Harman, was more blunt, saying that US president Donald Trump had fired his homeland security advisor, Tom Bossert, one of the most respected men in cybersecurity (Bossert actually resigned), and abolished his position.

Representative Langevin noted that the situation was improving. The US had been totally unprepared for Russian interference in 2016, he said, but the situation had improved by the 2018 elections and the intelligence agencies were ready for the 2020 election cycle.

“[Former US president Barack] Obama laid out a framework for a national incident response team,” he said. “That policy is in place, but as to whether it can be executed then we have to hope for the best, but we need to practice it, that’s the key thing.”

Langevin, a repeat visitor to DEF CON, appealed to the assembled security workers to get involved in helping to educate politicians and make them understand technical issues. It is a problem also close to Ellis’s heart.


Ellis, a Brit by birth, came to the US, identified the committees dealing with cybersecurity and started offering advisory services. She found that politicians were willing to listen.

“When I did this, people asked you in to talk,” she said. “They were crying out for people who could talk about cybersecurity. There is interest. It’s hard… but do your research.”

It’s not enough to sit on the sidelines and moan, she told the crowd. Instead it’s time for the community to get out there and make a difference.

Lieu also said he was hopeful that hackers would take up the torch and warned attendees not to give up, because change could come in surprising ways.

“In politics everything seems impossible until it happens,” he joked. “10 years ago if you’d told me people in some states would be smoking legal weed I’d never thought it would happen. And yet here we are.” ®


Plot twist: Google’s not spying on King’s Cross with facial recognition tech, but its landlord is

More unregulated creepycams blight London

Updated Britons working for Google at its London HQ are being secretly spied on by creepy facial recognition cameras – but these ones aren’t operated by the ad-tech company.

Instead it’s the private landlord for most of the King’s Cross area doing the snooping, according to today’s Financial Times.

“The 67-acre King’s Cross area, which has been recently redeveloped and houses several office buildings including Google’s UK headquarters, Central Saint Martins college, schools and a range of retailers, has multiple cameras set up to surveil visitors,” reported the Pink ‘Un.

King’s Cross is no longer just a London railway terminus and notoriously seedy neighbourhood. The area around the station, once infamous for the types of activities featured in Irvine Welsh novels, has been extensively redeveloped – with tenants now including Google (and YouTube), various other trendy offices, eateries and so on, to the point where it apparently has its own unique postcode.

None of this, however, excuses the reported deployment of creepycams by the developers. They told the FT (paywalled): “These cameras use a number of detection and tracking methods, including facial recognition, but also have sophisticated systems in place to protect the privacy of the general public.”

The Register has contacted the King’s Cross developers’ PR tentacle separately and will update this article if their promised response to our detailed questions is forthcoming.

Tech lawyer Neil Brown of decoded.legal told El Reg that any company running facial recognition cameras “needs to have not only a lawful basis under the GDPR, as is required for any processing of personal data, but also to have met one of the additional conditions for the processing of ‘special category’ data,” basing this, he said, on the assumption that creepycams’ encoding of people’s faces would probably count as biometric data under current data protection laws.

Broadly, he told us, whoever’s running the King’s Cross creepycams needs to be certain their use is legal under section 10 of the Data Protection Act 2018, which permits non-consensual data processing for the “prevention or detection of an unlawful act”.


Indiscriminate use of facial recognition technology in the UK is largely believed to be illegal, though nobody’s quite sure. So far the public conversation has focused upon the antics of police forces, which are very eager to deploy creepycams against the public, arguably in lieu of doing actual policing work. London’s Metropolitan Police deployed a system that was extremely inaccurate, not that it stopped them indiscriminately arresting people based on dodgy matches anyway.

Even though cross-party committees of MPs have called for creepycams to be banned until the risks and pitfalls are properly examined, British police forces have decided that Parliament can be safely ignored without any consequences, with the public forced to rely on controls and safeguards designed in the paper-and-ink days of the 1980s.

Rights group Privacy International commented: “The use of facial recognition technology can function as a panopticon, where no one can know whether, when, where and how they are under surveillance.

“In London the creep of pseudo-public spaces, owned by corporations who can deploy facial recognition in what we believe are public streets, squares and parks, presents a new challenge to the ability of individuals to go about their daily lives without fear that they are being watched.

“The police are subject to increasing scrutiny about the legality of their deployment of facial recognition, but the use in the commercial and retail sector has received insufficient attention and scrutiny.”

It added: “There is a lack of transparency not only about the use of this technology, but why it is being done, when it is being done, what happens to the images of people going to work, travelling to see family and generally going about their daily lives… These privacy intrusions are problematic regardless of whether or not you believe you have nothing to hide.”

So far, private sector use of creepycams hasn’t been part of the British national conversation about facial recognition tech. Thus, the developers of King’s Cross are about to become the first test case in the court of UK public opinion.

San Francisco became the first major city in the world to ban facial recog, back in May this year. ®

Updated to add at 1547 UTC, 12 August

When The Register asked how many cameras there were, who supplies them and exactly what the safeguards were, a spokesperson for King’s Cross instead decided to emit this quote: “In the interest of public safety and to ensure everyone who visits King’s Cross has the best possible experience, we use cameras around the site, as do many other developments and shopping centres, as well as transport nodes, sports clubs and other areas where large numbers of people gather. These cameras use a number of detection and tracking methods, including facial recognition, but also have sophisticated systems in place to protect the privacy of the general public.”


Pentagon makes case for Return of the JEDI: There’s only one cloud biz that can do the job and it starts with an A (or rhymes with loft)

DoD daleks want to exterminate Oracle’s Vulcan mind-meld with White House

The US Department of Defense is pushing back against criticism of its proposed $10bn winner-takes-all cloud mega-deal, dubbed JEDI.

The Pentagon this week emitted a flurry of paperwork and presentations including a slide deck [PDF] on the project and an alleged fact sheet [PDF] addressing condemnation of the IT super-contract.

The response comes as JEDI finds itself in limbo pending a review of the deal by Secretary of Defense Dr Mark Esper at the order of The White House.

In the slide deck, the Pentagon addressed claims JEDI – aka the Joint Enterprise Defense Infrastructure program – was deliberately crafted to give AWS a sweetheart deal. The contract essentially calls for a single vendor to provide the Pentagon worldwide cloud services for a decade.

“The JEDI solicitation reflects the unique and critical needs of DoD, which operates on a global scale and in austere, disconnected environments,” the DoD said in explaining its one-provider specification. “It is important for a warfighter in Afghanistan to access the same information as an analyst in Washington, DC or a service member training in California.”

The document goes on to note that only a handful of companies on the planet can deliver cloud services on the scale Uncle Sam needs, and many of those are in China. Of the five non-Chinese providers deemed capable, four participated in JEDI bidding and two – Microsoft and AWS – made the cut for final consideration.


Another point of contention raised was the length of the contract. While JEDI is usually described as a 10-year, $10bn plan, the DoD noted that it has the option to cancel the deal after two years if it is not going well, and the only guarantee on spending is $1m, meaning if the contract is not going as planned, the Pentagon can get out with minimal hits to its budget.

Officials also responded to the allegations that Amazon used dirty tricks, including job offers to one of the government staffers deciding the terms of the deal, in order to get the criteria for the contract tailored to AWS’s strengths.

“This information was alleged in a filing before the US Court of Federal Claims, by a company that was deemed to be non-competitive,” the Pentagon said in its not-so-subtle dig at Oracle, which had hoped to scoop the JEDI gig but was snubbed by Uncle Sam.

“The US Court of Federal Claims did not sustain any of these complaints. Prior to the Court’s ruling, the Department of Defense conducted its own investigations and determined that the integrity of the acquisition remains intact.”

In the end, the DoD said, the JEDI requirements are in place not because they want to give Amazon billionaire Jeff Bezos a gift-wrapped deal, but because it needs a wide-ranging cloud system that has all the necessary security clearances, and can work immediately without the compatibility or redundancy issues that come with a multi-vendor plan.

Besides, the DoD says, there are plenty of other Pentagon cloud contracts to be had.

“The Federal Cloud Computing Strategy – Cloud Smart strategy does not direct agencies to obtain cloud services from multiple vendors,” the department argues.

“Rather, it states the following: ‘agencies will need to use a variety of approaches that leverage the strength of Federal Government’s bulk purchasing power, the shared knowledge of good acquisition principles, as well as relevant risk management practices’.” ®


So you can’t find enough cyber-security experts to join the team. Time to dial a managed security service provider?

The benefits of outsourcing your IT’s infosec – and what to look for. Here’s our gentle guide for you

Backgrounder Managed security services are – by revenue – the fastest expanding field of cyber security, according to IDC, which reckons they should grow at a compound annual growth rate of 14.2 per cent to 2022. Gartner says managed and subscription-based security services will account for half of all cyber-security spending by 2020.

Beyond the proliferation of cyber-security threats that companies have to combat daily, there are two major drivers.

One is the continuing global shortage of cyber-security professionals that makes skilled staff difficult to find and expensive to hire. In a survey conducted by ESG Group, 53 per cent of respondents reported a problematic shortage of cyber-security skills in 2018-2019, up from 51 per cent in 2017-2018 and 45 per cent in 2016-2017.

As we find it harder to employ security staff, so it becomes practical to outsource cyber-security to those who have managed to snag themselves some experts.

The second driver comes from the need to comply with more stringent data privacy legislation, notably the European Union General Data Protection Regulation (GDPR) that came into force in May 2018. Rather than go it alone, many have put their trust in managed security service providers (MSSPs), who they hope will have the knowledge and experience to help them avoid a costly data breach.

What’s on offer from MSSPs?

Thanks to the rising challenges and growth in cloud-hosting, managed security service providers have evolved to provide a broad range of tools over and above the provision and administration of firewall and intruder detection and prevention systems that defined managed security services (MSS) some years ago. MSSPs routinely deliver a wealth of common security functions that include antivirus and spyware detection, web and email content filtering, endpoint protection, identity access management, virtual private network connectivity, and data encryption, to name just a few. Combinations are bundled into subscriptions that integrate software licenses, hardware rentals and access to management portals.

Patch management and upgrades are staple features of managed security services, along with monitoring and alerting tools for threat detection and weekly security reports. These use data from security logs across networks, devices, applications, and other systems, and they scan for evidence of foiled attacks and proof of suspicious activity from internal and external players. Consulting is also part of the modern mix: identifying security vulnerabilities and assessing risk using, for example, penetration testing and/or red-team ethical hacking to test existing defenses.

What not to expect

And that’s what it says on the tin. Just don’t expect everything to be included as standard.

For example, services like remediation are not always built in to your managed security service as standard. Rather, the work of cleaning up identified security threats usually comes as a premium option, something that may be beyond the budget and requirements of the average SMB. In most cases, the MSSP will just inform you if it has discovered some vulnerabilities or if a cyber-attack is imminent or underway. It will then leave you to figure out how best to remediate the threat, often hoping to generate additional business should you need to call in the cavalry.

This can be a bit of cold-water shock. As Gartner noted in its Q2 2019 Magic Quadrant for managed security services, monitoring is one thing, remediation quite another. “For other organisations that have little-to-no security team and a lower security operations maturity, the expectations are that the MSSP will do more than just issue an alert and let the customer fend for itself,” it stated. “They need the MSSP to take an active role in analysing, triaging, and then disrupting or containing the threat, i.e. they need the MSS to act as a first-level incident responder for them.”

Access to qualified security analysts can also be minimal. Skilled staff are at a premium with MSSPs, too, so providers will limit the time they spend on the phone to ensure their precious people are held back to help only the most complex issues. Buyers should therefore check whether their service provision includes access to an actual analyst or if they are limited to automated reports delivered to their inboxes.

Similarly, don’t think that the MSSP’s engineers will come a-knocking when there’s a technical problem. Though an on-site visit can probably be arranged as part of a supplemental deal, and fee, the advantage of having all those security tools hosted centrally is that the MSSP doesn’t have to leave its own data center to apply patches and upgrades and reconfigure services. Everything can be done through remote access.

How to choose an MSSP

That’s the pitch, and you know what to beware. How, then, should you choose an MSSP?

It’s important to find an MSSP that is flexible enough to offer a customized service that can fit your budget. The thing to understand is that not all of the bigger providers will deliver MSS as a standalone service without requiring parallel spending by you on their accompanying security products. Such services often come courtesy of specialist hardware and software suppliers who have a portfolio of existing security products they want to “add value” to. It’s therefore worth noting that there exists a whole range of other MSSPs. Some of these have converted from generic managed service providers and value-added resellers, and are able to mix and match different services from different providers to offer a more flexible set of options according to scale, performance, and budget.

Product resale constitutes an important part of the revenue stream here, so you need to make a close assessment of whether your potential provider is offering something of genuine value, or simply trying to cut out the middleman so that you adopt their products faster.

Having navigated that, what are the characteristics to look for in a managed service?

Ease of use is vital. So look for a provider with a web portal that provides access to threat intelligence and activity reports presented in an easy-to-digest format and that will, ideally, also give assessments of compliance status. The availability of APIs between on- and off-premises tools means MSSPs can feed security monitoring information into other systems and compliance management applications. This is another plus.
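
To illustrate, with an entirely invented endpoint, token, and response shape, since every provider’s portal differs: the integration can be as simple as polling the MSSP’s alert feed and forwarding the results into your own tooling.

```python
import requests

MSSP_API = "https://portal.example-mssp.com/api/v1/alerts"  # hypothetical endpoint
TOKEN = "REPLACE_ME"  # issued by the provider

def fetch_new_alerts(since: str) -> list:
    """Pull alerts raised since the given ISO timestamp."""
    resp = requests.get(
        MSSP_API,
        params={"since": since},
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

# Forward each alert into your own ticketing or SIEM system.
for alert in fetch_new_alerts("2019-08-01T00:00:00Z"):
    print(alert.get("severity"), alert.get("summary"))
```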

Something else to look for is security incident and event management that offers network visibility, email security, threat detection, log management, alerting and compliance reporting under one service. The full package may be too complex and unwieldy for smaller companies, but lighter versions that replicate some of the same functionality will be easier to implement, to manage and to finance.

Not everybody will want or need access to security analysts, but some will like to have the option to occasionally discuss threats with a professional. In that case, make sure your MSSP has sufficient staff expertise, and has a security operations centre that can deliver round-the-clock monitoring and alerting. Make sure that the MSSP’s skills and professed knowledge match your systems architecture and regulatory obligations and, where needed, are tailored to the compliance requirements of specific verticals, such as finance.

The fine print of any contract will be critical, especially service level agreements that will commit the provider to defined response times to things like applying security patches to systems. It can also be a good idea to integrate some form of cyber-security insurance and make sure both sides understand where their respective responsibilities lie for any data breach, particularly when deciding where sensitive information is stored (by country or legal jurisdiction) and how it is processed.

MSSPs are a growing force in IT. While they are certainly expedient, choosing a supplier wisely is important given the complexity and risks involved. Before you enter any relationship, ensure you have a clear understanding of your own requirements, that you have fully vetted the supplier, and that you understand their service offering. Finally, set out the terms of the ongoing relationship. Do all that from the outset, and you will hopefully save yourself headaches down the line.

Supported by SonicWall.


Talk about unintended consequences: GDPR is an identity thief’s dream ticket to Europeans’ data

Revenge plan morphs into data leak discovery

Black Hat When Europe introduced the General Data Protection Regulation (GDPR) it was supposed to be a major step forward in data safety, but sloppy implementation and a little social engineering can make it heaven for identity thieves.

In a presentation at the Black Hat security conference in Las Vegas James Pavur, a PhD student at Oxford University who usually specialises in satellite hacking, explained how he was able to game the GDPR system to get all kinds of useful information on his fiancée, including credit card and social security numbers, passwords, and even her mother’s maiden name.

“Privacy laws, like any other infosecurity control, have exploitable vulnerabilities,” he said. “If we’d look at these vulnerabilities before the law was enacted, we could pick up on them.”

Pavur’s research started in an unlikely place – the departure lounge of a Polish airport. After the flight he and his fiancée were supposed to travel on was delayed, they joked about spamming the airline with GDPR requests to get revenge. They didn’t, but it sparked an idea to see what information you could get on other people and Pavur’s partner agreed to act as a guinea pig for the experiment.

For social engineering purposes, GDPR has a number of real benefits, Pavur said. Firstly, companies only have a month to reply to requests and face fines of up to 4 per cent of revenues if they don’t comply, so fear of failure and time are strong motivating factors.

In addition, the type of people who handle GDPR requests are usually admin or legal staff, not security people used to social engineering tactics. This makes information gathering much easier.

Over the space of two months Pavur sent out 150 GDPR requests in his fiancée’s name, asking for any and all data on her. In all, 72 per cent of companies replied, and 83 of them said that they had information on her.

Interestingly, 5 per cent of responses, mainly from large US companies, said that they weren’t liable to GDPR rules. They might be in for a rude shock if that comes before the courts.

Of the responses, 24 per cent simply accepted an email address and phone number as proof of identity and sent over any files they had on his fiancée. A further 16 per cent requested easily forged ID information and 3 per cent took the rather extreme step of simply deleting her accounts.

A lot of companies asked for her account login details as proof of identity, which is actually a pretty good idea, Pavur opined. But when one gaming company tried it, he simply said he’d forgotten the login and they sent it anyway.

The range of information the companies sent in is disturbing. An educational software company sent Pavur his fiancée’s social security number, date of birth and her mother’s maiden name. Another firm sent over 10 digits of her credit card number, the expiration date, card type and her postcode.

A threat intelligence company – not Have I been Pwned – sent over a list of her email addresses and passwords which had already been compromised in attacks. Several of these still worked on some accounts – Pavur said he has now set her up with a password manager to avoid repetition of this.


“An organisation she had never heard of, and never interacted with, had some of the most sensitive data about her,” he said. “GDPR provided a pretext for anyone in the world to collect that information.”

Fixing this issue is going to take action from both legislators and companies, Pavur said.

First off, lawmakers need to set a standard for what is a legitimate form of ID for GDPR requests. One rail company was happy to send out personal information, accepting a used envelope addressed to the fiancée as proof of identity.

He suggested requesting account login details was a good idea, but there’s always the possibility that such accounts have been pwned. A driver’s licence would also be a good alternative, although fake IDs are rife.

Companies should be prepared to refuse information requests unless proper proof is provided, he suggested. It may come to a court case, but being seen to protect the data of customers would be no bad thing. ®


Hack computers to steal someone’s identity in China? Why? You can just buy one from a bumpkin for, like, $3k

Exploit an 3l33t zero-day and reverse-shell that backend DB proxy server… or simply pay this farmer off

Black Hat Black Hat founder Jeff Moss opened this year’s shindig in Las Vegas with tales of quite how odd the hacking culture in China is.

You see, Moss also founded the DEF CON conference series, and has started running DEF CON events for nerds in China – which makes sense given the sizable reservoir of infosec and computer science talent in the Middle Kingdom. However, he said, when talking to folks over there, he realized quite how different black-hat culture is in Asia.

“I’d assumed internet crime in China was just like over here,” he said. “I was wrong.”

For a start, identity theft is virtually unknown, Moss said. There’s no point in hacking systems to steal strangers’ identities to use for nefarious purposes, because it’s easy to obtain a legitimate identity direct from someone, and assume their persona. Hackers and their ilk simply go into the farming belt and find someone willing to sell their identity, with a typical price around $3,000 per ID. That’s about the annual wage of a low-income person in China, which, don’t forget, is home to about 1,400,000,000 people.

This approach, however, has a few problems, mainly that the same person may sell their identity to multiple hackers. So the first thing anyone using a bought ID does is to check that the same credentials aren’t being used in that geographical locale.


Denial-of-service attacks within China also work slightly differently. Miscreants can bribe Chinese companies to send overwhelming amounts of network traffic to victims’ systems to knock them offline. These requests can be routed to go out through the nation’s Great Firewall, and back in again, obfuscating the source of the packets, apparently. Moss said this technique was surprisingly effective.

There’s also a small headache with content distribution in China, he said, besides, presumably, the government-mandated censorship. There are four ISPs in the country, two dominate the field, and the pair barely talk to each other’s networks. While there are small interconnects, neither internet provider feels the need to expand the bandwidth between them. This forces companies to set up data centers dedicated to each ISP so that all broadband subscribers, regardless of which ISP they want, can smoothly reach those companies’ websites and other online services. This extra gear increases the security and technical burden on system admins.

Admittedly, online organizations in America and other countries tend to spread out their content distribution over national and global networks for reliability, connectivity, and redundancy purposes, though in China it appears to be more of a minimum necessity rather than a luxury due to the lack of cooperation between ISPs.

Turning to IT security in general, Moss said if you want to get things done, you need more than just your boss and your boss’s boss or boss’s boss’s boss onboard – you need the highest level of the company to agree that defending computer networks is a critical must, and not a set of expensive bells and whistles. And that requires clear communication.

“Now we have management’s attention on security we need to know how to communicate with the board,” he said. “Communicate well, and you can get more budget. Do it badly, you could get fired. The quality of communications really matters for security.” ®


Apple: Ok, ok, we’ll stop listening in on your Siri conversations. For now, but maybe in the future

Just don’t ask us to stop recording and storing them, says tech privacy leader

Apple has hit pause on contractors listening into and making notes on recordings of people using its Siri digital assistant after the secretive practice was exposed.

Cook & Co has tried to differentiate itself from the rest of the tech industry by stressing its privacy credentials, but it has notably not ended the Siri listening program, saying only that it won’t start it up again until it has “reviewed” the current system.

Apple will also not stop recording or storing the voice interactions that its millions of users have with the digital assistant. But Apple has said it will, in the future, provide the option for users to opt out of the system under which contractors across the world are allowed to listen in to Siri recordings in order to “grade” them.

It’s not clear whether users will have to actively opt out of that process or if Apple will default to not listening to people’s requests. Probably the former.

The grading process is used by Apple to figure out if the Siri response was useful, whether the request itself was something Siri should be expected to answer, and, critically, whether the recording should have happened in the first place or was accidental.

A series of media reports into what companies like Amazon, Google and Apple are doing with the recordings of users interacting with their respective Alexa, Google Home and Siri services, has put the tech companies in a privacy spotlight.

They have all been less than transparent about what actually happens with the recordings, with third-party contractors revealing that they regularly listen to people in their homes doing stuff that clearly isn’t expected to be recorded, including arguing, carrying out business deals, talking to their doctor about medical issues, and having sex.

They’re all at it

Earlier today, Google agreed to stop listening in to recordings made through its Google Home device – in Europe – after the German authorities launched an investigation into the practice to see if it violated European privacy rules. Germany’s data protection commissioner ordered the online giant to stop manual reviews for three months.

Google and Amazon are also being sued in the United States for what the lawsuits claim is illegal recording of minors through these services.


When it comes to Siri and Apple’s contractors listening in, the company said in a statement: “While we conduct a thorough review, we are suspending Siri grading globally. Additionally, as part of a future software update, users will have the ability to choose to participate in grading.”

What is legal and illegal, reasonable and unreasonable, is not a settled issue, although it is notable that all the US tech giants have used that uncertainty to do the most they can. This includes both listening in to and transcribing recordings, even when they are certain a recording was “accidental” – made when the user did not intend to be listened to and the device misheard its “wake word.”

Despite knowing that many of these recordings should never have occurred, the tech companies have not deleted them, and have continued to store and transcribe their contents, presumably in the hope of improving the systems. Privacy advocates are furious.

According to some of the third-party contractors who spoke anonymously with reporters, when it comes to Apple’s system, the Apple Watch is the greatest producer of bad recordings. ®


German privacy probe orders Google to stop listening in on voice recordings for 3 months

Google bows to power of GDPR

Germany’s data protection commissioner in Hamburg has launched an investigation into Google over revelations that contracted workers were listening to recordings made via smart speakers.

Google has been ordered to cease manual reviews of audio snippets generated by its voice AI for three months while the investigation is under way.

In a blog post last month, Google admitted it works with experts to review and transcribe a small set of queries to help better understand certain languages.

That was after a bunch of Belgian investigative journalists discovered staff were listening in on people who use its voice-activated Google Assistant product.

David Monsees, Google’s product manager of search, wrote at the time: “We just learned that one of these language reviewers has violated our data security policies by leaking confidential Dutch audio data. Our Security and Privacy Response teams have been activated on this issue, are investigating, and we will take action. We are conducting a full review of our safeguards in this space to prevent misconduct like this from happening again.”

Johannes Caspar, the Hamburg Commissioner for Data Protection and Freedom of Information, said (PDF): “The use of speech assistance systems in the EU must comply with the data protection requirements of the GDPR. In the case of the Google Assistant, there are currently considerable doubts about this. The use of speech assistance systems must be transparent so that informed consent can be obtained from users.

“In particular, this involves sufficient and transparent information for those concerned about the processing of voice commands, but also about the frequency and risks of misactivation. Finally, due account must be taken of the need to protect third parties affected by voice recordings. As a first step, further questions about the functioning of the speech analysis system need to be answered. The data protection authorities will then have to decide on the final measures that are necessary for their data protection-compliant operation.”

A Google spokesman said: “We are in touch with the Hamburg data protection authority and are assessing how we conduct audio reviews and help our users understand how data is used.

“These reviews help make voice recognition systems more inclusive of different accents and dialects across languages. We don’t associate audio clips with user accounts during the review process, and only perform reviews for around 0.2% of all clips. Shortly after we learned about the leaking of confidential Dutch audio data, we paused language reviews of the Assistant to investigate.” ®


Lancaster Uni cordons off breached systems a week after thousands of folks’ data pinched

Educator, learn thyself. Prevention is better than cure

Lancaster University has started withdrawing non-business-critical access to a breached student database – more than a week after the apparent hack took place.

Following the breach, which affected somewhere between 12,000 and 20,000 people, the northwest England uni has begun pulling staff access to its LUSI (Lancaster University Student Information) records system, which was developed in-house and first went live around five years ago.

In an email sent yesterday and seen by The Register, Heather Knight, director of students, education and academic services, wrote to staff saying: “In response to the recent cyber incident, we are taking steps to enhance the security of all University systems. We are therefore in the process of limiting users’ access to data and functionality in LUSI.”


LUSI is the student and applicant records database that was targeted. A 25-year-old man from Bradford was arrested last week on suspicion of Computer Misuse Act crimes and released on police bail.

“In the first instance,” continued Knight’s email, “we are removing all users’ access to LUSI online, except for those staff as identified as needing access for critical business reasons.”

Around a thousand accounts had access to LUSI, a number which sources tell us has been slashed to around 100.

The university website explains that LUSI services include the Course Approvals and Information Tool (CAIT), which is the key system from which at least 12,500 applicants’ personal data were siphoned just under a fortnight ago.

External web access to the LUSI portal appeared to have been disabled when The Register clicked to access it from the above linked webpage. On the weekend immediately following the hack, which is said to have taken place before Friday 19 July, sources told us the staff VPN was also shut down.

A university helpdesk page explaining LUSI states: “LUSI is developed and managed by CIS Academic and the Student Registry in Lancaster and holds data on every student that has ever studied at Lancaster University.”

While this seems alarming on the face of it, a reasonable explanation would be that the university needs to keep a record of who it issued degrees to and when. In its initial statements about the breach, Lancaster said that only applicant data for the academic years 2019 and 2020 was stolen, along with some current students’ data.

In a statement, Lancaster told The Register: “As soon as the university became aware of the breach it took steps, on a risk-based approach, to secure all university systems.”

Despite the arrest of an apparent suspect, it seems strange for the university to take more than a week to revoke unnecessary access to what appears to be the hacked system.

Duncan Brown, chief EMEA security strategist at infosec biz Forcepoint, told The Register: “Phishing, no matter how sophisticated it is, comes down to people being tricked. If you’re not able to understand the normal interactions of people and data, then when a breach occurs with its corresponding anomalous behaviour, it’s very hard to spot and react to it.

“All companies must do better in quickly identifying issues and putting together a plan to address them. Faster incident response and breach handling is a necessity to appease regulatory bodies and maintain competitive advantage, regardless of your industry sector.”

Brown added: “I think 10 days to react isn’t unreasonable, as they don’t want to suspend accounts unnecessarily. But prevention is always going to be preferred to cure.” ®

Sponsored: Balancing consumerization and corporate control

Outraged Virgin slaps IP trolls over dirty movie download data demands

This is a lawsuit, you filthy-minded people

Virgin Media’s lawyers have seen off a group of IP trolls who were trying to force the British ISP to hand over the personal details of people downloading allegedly copyrighted smut flicks.

Mircom International Content Management & Consulting and Golden Eye International both tried to force Virgin to hand over the details of customers who, the two claimed, were “unlawfully” downloading what Mr Recorder Douglas Campbell QC described as “pornographic films”.

“This information is to be requested by the Applicants in batches of ‘no more than 5,000 IP addresses per fortnight’,” wrote the judge in his judgment, adding that Mircom and Golden Eye wanted to send “no more than 500 letters per week” to alleged copyright infringers.

To its credit, Virgin told the court that Mircom and Golden Eye “are companies whose entire business consists of obtaining disclosure orders of this kind, making threats of infringement and offering to settle for a fixed fee” – something neither firm disputed.

Back in 2012, Golden Eye pulled exactly the same trick but the courts agreed with the company and ordered O2 to hand over names, addresses and IP addresses of alleged infringers to Ben Dover Productions. This time round, however, the High Court told them to poke it.

From the thousands of IP addresses it obtained in 2012, Golden Eye sent letters to just 749 people, of whom 76 ‘fessed up, with a further 15 paying them off and settling out of court without accepting liability. Nobody was actually dragged into court that time.

As for the inevitable impact of the GDPR, barrister Robin Hopkins of 11 King’s Bench Walk chambers in London was rather surprised by the judge’s take on it. Writing on his chambers’ blog, Panopticon, he said: “The Court found that the GDPR had no bearing on its analysis. Its reasoning, however, was highly suspect, in my humble view.”

He added: “The Court concluded that, once they received those IP addresses (which they proposed to use to contact the underlying individuals and demand money), the applicants would not be ‘controllers’ of that data. Some submissions were made along the lines that ‘to be a controller, you have to be registered with the ICO’. More substantively, the Court’s conclusion was that the applicants would be mere ‘recipients’, which is a term defined under Article 4(9) GDPR… that conclusion (and the whole analysis behind it) is plainly wrong. ‘Controllers’ and ‘recipients’ are not mutually exclusive categories.”

Virgin acknowledged the judgment when The Register asked it to comment. ®

Sponsored: Balancing consumerization and corporate control

Dutch cheesed off at Microsoft, call for Rexit from Office Online, Mobile apps over Redmond data slurping

Cloggies less than chilled out over Windows telemetry

A report backed by the Dutch Ministry of Justice and Security is warning government institutions not to use Microsoft’s Office Online or mobile applications due to potential security and privacy risks.

A report from Privacy Company, which was commissioned by the ministry, concluded that Office Online and the Office mobile apps should be banned from government work because they do not comply with a set of privacy measures Redmond has agreed with the Dutch government.

The alert notes that in May of this year Microsoft and the government of the Netherlands agreed to new privacy terms after a 2018 report, also compiled by Privacy Company, found that Office 365 ProPlus was gathering personal information on some 300,000 workers via its telemetry features and storing it in the US. The data included such things as email addresses and translation requests.

While other Windows and Office apps have since been brought into compliance with those rules and no longer gather the user information, Privacy Company said that the mobile apps and Office Online are still gathering information about user activity, as are some of the features in Windows 10 Enterprise.

“Moreover, certain technical improvements that Microsoft has implemented in Office 365 ProPlus are not (yet) available in Office Online,” Privacy Company said,

Dutch cops collar fella accused of crafting and flogging Office macro nasties to cyber-crooks

READ MORE

“From at least three of the mobile apps on iOS, data about the use of the apps is sent to a US-American marketing company that specializes in predictive profiling.”

Noting that the Dutch government is still working with Microsoft to get those features removed, the alert advises that government institutions avoid Office Online and the Office mobile apps. Additionally, government offices are being advised to “opt for the lowest possible level of data collection in Windows 10, namely Security.”

Microsoft did not respond to a request for comment on the report and its recommendations.

The report is part of a larger battle Microsoft is waging in the EU in the aftermath of GDPR. The Redmond giant has been probed by the European Data Protection Supervisor over the way its telemetry tools (which help track errors and performance) gather data on users in Europe and then store it on servers based in the US.

Microsoft has maintained that it would work with customers and governments in the EU to bring all of its products into compliance. ®

Sponsored: Balancing consumerization and corporate control

It’s official: Deploying Facebook’s ‘Like’ button on your website makes you a joint data slurper

Using widgets probably not worth the GDPR minefield

Organisations that deploy Facebook’s ubiquitous “Like” button on their websites risk falling foul of the General Data Protection Regulation following a landmark ruling by the European Court of Justice.

The EU’s highest court has decided that website owners can be held liable for data collection when using the so-called “social sharing” widgets.

The ruling (PDF) states that employing such widgets would make the organisation a joint data controller, along with Facebook – and judging by its recent record, you don’t want to be anywhere near Zuckerberg’s antisocial network when privacy regulators come a-calling.

‘Purposes of data processing’

According to the court, website owners “must provide, at the time of their collection, certain information to those visitors such as, for example, its identity and the purposes of the [data] processing”.

By extension, the ECJ’s decision also applies to services like Twitter and LinkedIn.

Facebook’s “Like” is far from an innocent expression of affection for a brand or a message: its primary purpose is to track individuals across websites, and permit data collection even when they are not explicitly using any of Facebook’s products.

The case that brought social sharing widgets to the attention of the ECJ involved German fashion retailer Fashion ID, which placed Facebook’s big brother button on its website and was subsequently sued by consumer rights group Verbraucherzentrale NRW.

The org claimed the fact that Fashion ID’s website users were automatically surrendering their data – including IP address, browser identification string and a shedload of cookies – contravened the EU Data Protection Directive (DPR) of 1995, which has since been superseded by the much stricter General Data Protection Regulation (GDPR).

In 2016, Fashion ID lost in a Düsseldorf regional court, and appealed to a higher German court, with Facebook joining in the appeal. The case was then escalated to the ECJ, with the outcome closely watched by law and privacy experts.

On Monday, the ECJ ruled that Fashion ID could be considered a joint data controller “in respect of the collection and transmission to Facebook of the personal data of visitors to its website”.

The court added that it was not, in principle, “a controller in respect of the subsequent processing of those data carried out by Facebook alone”.

‘Consent’

“Thus, with regard to the case in which the data subject has given his or her consent, the Court holds that the operator of a website such as Fashion ID must obtain that prior consent (solely) in respect of operations for which it is the (joint) controller, namely the collection and transmission of the data,” the ECJ said.

The concept of “data controller” – the organisation responsible for deciding how the information collected online will be used – is a central tenet of both DPR and GDPR. The controller has more responsibilities than the data processor, who cannot change the purpose or use of the particular dataset. It is the controller, not the processor, who would be held accountable for any GDPR sins.

In its response to the ruling, Facebook decided to pretend that the “Like” button was just an average website plugin: “We welcome the clarity that today’s decision brings to both websites and providers of plugins and similar tools,” Jack Gilbert, Associate General Counsel at Facebook, said in a statement.

“We are carefully reviewing the court’s decision and will work closely with our partners to ensure they can continue to benefit from our social plugins and other business tools in full compliance with the law.”

Nothing Facebook does seems to hurt its sales: the company has just reported second quarter results, growing its revenue 28 per cent year-on-year to reach $16.6bn. ®

Sponsored: Balancing consumerization and corporate control

German data regulator ruminates on big 5G question, shrugs: We’ll find Huawei around it

Risks posed by Chinese bogeyman ‘manageable’

Germany’s data protection and security regulator is not too stressed about the supposed threat of using Huawei equipment in 5G networks.

Arne Schönbohm, head of Germany’s Federal Office for Information Security, told Der Spiegel the risks posed by the Chinese outfit are manageable and that a next-generation mobile network made up of equipment from a variety of vendors would be safer.

“There are essentially two fears: First, espionage – i.e. that data will be siphoned off involuntarily. But we can counter that with improved encryption. The second is sabotage – i.e. manipulating networks remotely or even shutting them down. We can also minimise this risk by not relying exclusively on one supplier in critical areas. By possibly excluding them from the market, we also increase pressure on these suppliers.”

Schönbohm said that if a 5G network were to be used for medical services and self-driving cars, it would need to be more secure than today’s mobile networks. That means reviewing and certifying hardware and software for security and banning kit that fails the test.

There’s Huawei too many vulns in Chinese giant’s firmware: Bug hunters slam pisspoor code

READ MORE

He said it would be helpful to analyse source code for some products to check for hidden functions, as GCHQ offshoot the NCSC does in the UK via the Huawei Cyber Security Evaluation Centre (HCSEC). Some of Huawei’s software coding was described by HCSEC as “piss poor” but no backdoors have been found.

Germany tells America to verpissen off over Huawei 5G cyber-Sicherheitsbedenken

READ MORE

Pushed on differences between US and UK approaches, he said: “We’re in close contact with our American and English colleagues, and there are different risk assessments. I think that’s legitimate.”

The British government last week again deferred its decision on whether to ban Huawei hardware from 5G networks in the UK, a decision that was initially expected with the Telecom Supply Chain review in March.

Schönbohm was asked if he had seen hard evidence of Huawei spying, and in response said: “Let me put it this way: if we saw uncontrollable risks, we would not have adopted our approach.”

On wider tech security threats, Schönbohm said ransomware attacks were becoming more widespread and professional, often using several types of Trojan.

With this in mind, the German regulator is taking on 350 additional staff this year. ®

Sponsored: Balancing consumerization and corporate control

Brit infosec firms urge PM Boris to reform the Computer Misuse Act

Let us compete globally, say threat intel outfits

A group of British infosec companies has written to UK prime minister Boris Johnson asking him to reform the Computer Misuse Act 1990, saying the act “has failed to keep pace with technological and market developments, inadvertently prohibiting a large component of contemporary threat intelligence research.”

The companies, comprising NCC Group, Orpheus Cyber, Context Information Security and Nettitude, urged the winner of the Conservative Party’s recent internal leadership contest to bring about “legislative reform to bring cyber crime legislation in step with other regimes”.

Key among the companies’ demands for reform is the introduction of “statutory defences that apply to accredited professionals who act ethically, in the public interest, to detect and prevent criminal activity.”

The letter came after The Register revealed in May this year that while 90 per cent of hacking prosecutions last year were successful, the odds of a prison sentence were very low.

The letter said, in part:

Legislation currently forces cyber security specialists to act with one hand tied behind their backs. Reforming the Computer Misuse Act would enable us to learn more about an attacker’s tactics and identify additional victims, addressing current barriers that often halt our defence investigations so as not to break the law. More modern legislation exists in other jurisdictions – countries which we actively compete with in the global cyber security market. Failure to modernise our laws risks the increasing demand for cyber security services being met outside the UK.

Its signatories, all C-suite execs from the named firms, added: “We believe removing current legislative restraints, and offering certainty to the industry, would significantly unlock the growth of the UK cyber threat intelligence sector, while allowing industry to better support law enforcement and intelligence agencies.”

How a hack on Prince Philip’s Prestel account led to UK computer law

READ MORE

Researchers have long complained that the Computer Misuse Act (CMA) inhibits research because of broad wording that does not make it completely clear what is and what is not illegal in the fast-moving world of infosec. While no statute could be exhaustively prescriptive about what can and cannot be done, the companies say that the time is ripe to give protection to bona fide researchers.

Ollie Whitehouse, global chief technical officer at NCC Group, said in a statement to The Register: “We’re proud to be the driving force behind the necessary reform to the Computer Misuse Act (CMA) – an essential but outdated legislation, which currently restricts many industry specialists like ourselves from carrying out crucial threat intelligence work. Cyber security is a global issue, so it’s vital that the UK is able to compete on a level playing field with our international colleagues.”

The act was last amended five years ago, causing severe worries among human-rights watchers about harsher sentences being passed.

In its original form, the CMA was passed into law following the escapades of a couple of journalists in the late 1980s who managed to severely embarrass BT and access Prince Philip’s Prestel email account. ®

Sponsored: Balancing consumerization and corporate control

Apple techies analyzing Siri recordings may have heard you unzipping and bonking – plus more

Including: Facebook code to cram your computer vision model onto tiny chips

Roundup Here’s a quick summary of what’s been happening in machine learning lately, beyond what we’ve already reported.

Newsflash! Facial recognition systems are still racist: The latest benchmarking tests performed by the National Institute of Standards and Technology reveal that facial-recognition algorithms made by a French startup, and used for immigration screening by the US government, struggle to identify women and people with darker skin.

The latest NIST test results [PDF], published this month, show that AI software from Idemia, used to scan cruise ship passengers coming to the US, suffers from racial biases. The models are least accurate when tasked with identifying black women.

Idemia’s software misidentified black women ten times more frequently than white women. Thankfully, the algorithms aren’t available for commercial use yet, according to Wired.

The issue of racial bias in these machine-learning systems is a well-known flaw, and is at the heart of all the controversy surrounding the technology. Demographic problems were raised in two recent congressional hearings on facial recognition. The NIST results are just another reminder that this type of technology isn’t yet good enough to use, if it ever will be.

New self-driving dataset from Lyft: If you need more data to train your algorithms to drive cars autonomously then look no further.

Lyft, the ride-hailing service, has published a dataset complete with visual inputs processed by the cameras and LiDAR on its self-driving cars, as well as maps of the road.

Important objects, like other cars and pedestrians, have been highlighted with bounding boxes that have been carefully hand annotated by people. You can download it here.

Anonymous data isn’t ever really anonymous: A new research paper published in Nature this month reveals methods that can overturn data anonymisation processes by predicting the identity behind the data.

Researchers from Imperial College London and the Université catholique de Louvain, Belgium, have found that a whopping 99.98 per cent of Americans would be correctly re-identified in any dataset using as few as 15 demographic attributes.

To understand how, you’ll have to wade through the mathematical proofs in the paper. The results are pretty startling nonetheless. “They suggest that even heavily sampled anonymized datasets are unlikely to satisfy the modern standards for anonymization set forth by GDPR and seriously challenge the technical and legal adequacy of the de-identification release-and-forget model,” the researchers wrote.
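
To get a feel for why so few attributes are needed, here’s a back-of-the-envelope Python sketch – our own toy illustration, not the researchers’ method – that counts how many records in a synthetic dataset are already unique on just four demographic attributes:

```python
# Toy illustration (ours, not the paper's method): with even a few
# demographic attributes, almost every record in a dataset is unique,
# and a unique match is all an attacker needs to re-identify someone.
import random
from collections import Counter

random.seed(42)
MONTHS = ["jan", "feb", "mar", "apr", "may", "jun",
          "jul", "aug", "sep", "oct", "nov", "dec"]

# 100,000 synthetic people described by just four attributes.
people = [
    (
        random.randint(18, 90),        # age
        random.choice("FM"),           # sex
        random.randint(10000, 99999),  # ZIP code
        random.choice(MONTHS),         # birth month
    )
    for _ in range(100_000)
]

counts = Counter(people)
unique = sum(1 for c in counts.values() if c == 1)
print(f"{unique / len(people):.1%} of records are unique on 4 attributes")
```

Run it and the overwhelming majority of records come out unique; add the extra attributes the paper considers and effectively nobody hides in the crowd.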

Here’s how to compress your convolutional neural network: The best computer vision models, trained on large datasets, are so big that it’s difficult to cram them onto low-powered devices.

A team of researchers from Facebook AI Research and the University of Rennes in France have devised a new method that compresses these models so they take up less memory. The structured quantization algorithm works on the “reconstruction of activations, not on the weights themselves,” Facebook explained this month.

The algorithm managed to compress a ResNet-50 model trained on ImageNet, with 76.1 per cent accuracy, down to 5MB of memory, and a Mask R-CNN model down to 6MB – making them 20 and 26 times smaller, respectively, than the original models.

You can read the paper here [PDF] and see the code here.
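
For the curious, here’s a stripped-down Python sketch of the underlying idea – clustering a layer’s weight sub-vectors into a small shared codebook. Note this is our own simplification: the real algorithm quantises to minimise activation-reconstruction error rather than raw weight error, and the sizes below are illustrative, not Facebook’s figures.

```python
# Toy product-quantisation sketch: cluster a layer's weight sub-vectors
# into a small codebook, then store one byte per sub-vector plus the
# codebook. (The real method minimises activation-reconstruction error;
# this simplification clusters raw weights.)
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(256, 256)).astype(np.float32)  # a dense layer's weights

d, k = 8, 256            # sub-vector length, number of codebook entries
subs = W.reshape(-1, d)  # chop the weight matrix into sub-vectors

# Plain k-means over the sub-vectors; a few iterations suffice for a demo.
codebook = subs[rng.choice(len(subs), k, replace=False)].copy()
for _ in range(10):
    dists = ((subs[:, None, :] - codebook[None]) ** 2).sum(-1)
    assign = dists.argmin(axis=1)
    for j in range(k):
        members = subs[assign == j]
        if len(members):
            codebook[j] = members.mean(axis=0)

# Storage: one uint8 index per sub-vector, plus the float32 codebook.
orig = W.nbytes
compressed = len(subs) + codebook.nbytes
print(f"{orig / 1024:.0f} KiB -> {compressed / 1024:.0f} KiB "
      f"({orig / compressed:.0f}x smaller)")
```

The saving comes from storing one small index per sub-vector instead of the sub-vector itself, with the shared codebook amortised across the whole layer.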

Contractors working on Apple’s Siri have heard you having sex: Oh dear. Human contractors reviewing the digital assistant’s audio recordings have apparently listened in on illicit drug deals, private medical information, and people having sex, all captured by the voice-activated software. Yes, Apple keeps Siri’s audio recordings of you, in case you forgot.

These people are employed by the Silicon Valley giant to investigate any technical errors, for example if the AI bot incorrectly hears “Hey, Siri!” and responds when it wasn’t explicitly activated, or if its replies to requests are unsatisfactory. But in between all that, contractors regularly hear the more intimate details of people’s private lives picked up by a Siri device’s microphone. Just the sound of someone undoing a zip can activate the personal assistant, it is claimed.

A whistleblower working as a contractor to Apple told The Guardian: “There have been countless instances of recordings featuring private discussions between doctors and patients, business deals, seemingly criminal dealings, sexual encounters and so on. These recordings are accompanied by user data showing location, contact details, and app data.”

The anonymous contractor believed Apple wasn’t being transparent enough about who could be listening in and what they might be hearing.

Microsoft wheels out trendy ol’ AI for Defender: Microsoft has described some of the machine-learning techniques it has apparently injected into its cloud-based Defender ATP enterprise antivirus to stay one step ahead of malware makers.

Trojan and worm writers typically run their creations through scanning software like Defender, and modify their code until the security tools fail to catch the new nasties. So Microsoft has started using something called monotonic models, based on computer science research [PDF] by the University of California, Berkeley, to inspect files and identify malware samples in a new way.

For a start the monotonic models run in Microsoft’s cloud, so if a malware developer wants to try their latest strain against the scanner, they’ll have to upload their samples to Redmond, rather than testing them on an offline machine. This means the Windows giant is automatically tipped off with a load of useful information about the fledgling malware.

Microsoft has been using three different monotonic classifiers running alongside its traditional antivirus software as part of its Microsoft Defender ATP package since 2018, we’re told. The machine-learning technology can block 95 per cent of malicious files, apparently. One of them blocks nasty code on an average of 200,000 devices every month, Redmond claimed this month.

Phuck off, phishers! JPMorgan Chase crafts AI to sniff out malware menacing staff networks

READ MORE

Another way that attackers trick antivirus software is by signing their nasty code with a trusted certificate so that it looks legit. Since monotonic models only analyse features, and don’t consider a file’s certificate, this method of faking certificates is useless against them.

Another increasingly common trick is to surround malware with large chunks of legitimate code to trick the scanner system into thinking the trojan or worm is a harmless normal program. However, Microsoft’s monotonic model can apparently see through such obfuscation techniques.
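
To illustrate – using scikit-learn’s generic monotonic constraints, not Microsoft’s actual classifiers or feature set – the sketch below constrains the model so that piling up suspicious indicators can only push the malware score up, which is why padding a file with benign code (leaving those counts untouched) can’t drag the verdict back down:

```python
# Generic monotonic-classifier sketch (not Microsoft's model): the score
# is constrained to be non-decreasing in each suspicious-feature count,
# so adding benign padding - which doesn't reduce those counts - can
# never lower the verdict.
import numpy as np
from sklearn.ensemble import HistGradientBoostingClassifier

rng = np.random.default_rng(0)
n = 5000
X = np.column_stack([
    rng.poisson(1.0, n),  # hypothetical feature: obfuscated-string count
    rng.poisson(0.5, n),  # hypothetical feature: risky API import count
])
# Synthetic labels: more suspicious indicators means likelier malware.
y = (X.sum(axis=1) + rng.normal(0, 1, n) > 3).astype(int)

# monotonic_cst=1 per feature: the predicted probability may only rise
# as that feature's value rises.
clf = HistGradientBoostingClassifier(monotonic_cst=[1, 1]).fit(X, y)

p_low = clf.predict_proba([[2, 1]])[0, 1]
p_high = clf.predict_proba([[5, 1]])[0, 1]
print(p_low, p_high)  # the constraint guarantees p_high >= p_low
```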

“Monotonic models are just the latest enhancements to Microsoft Defender ATP’s Antivirus,” said the Defender research team.

“We continue to evolve machine learning-based protections to be more resilient to adversarial attacks. More effective protections against malware and other threats on endpoints increases defense across the entire Microsoft Threat Protection.” ®

Sponsored: Balancing consumerization and corporate control

Don’t fall into the trap of thinking you’re safe and secure in the cloud. It could become a right royal pain in the SaaS

Here’s a gentle introduction to off-prem security for SMBs

Backgrounder Without in-house staffing to set up or manage their IT estates, many small-to-medium businesses (SMBs) have migrated to cloud-based business applications, email, messaging, file sharing, and file-storage services.

But don’t think, just because you’ve handed the keys of your IT estate to somebody else, your days spent having to think about the security of your software, applications, and data are gone. They aren’t. SMBs are very much still in the cross hairs of hackers, from phishing, social engineering, and advanced malware to attacks via product vulnerabilities. And running your tech in the cloud doesn’t necessarily mean you’re immune to all that now.

And don’t forget there are steep costs in cleaning up an intrusion: Europe’s General Data Protection Regulation (GDPR), which came into force in May 2018, can put any “data steward” on the hook for up to €20m or four per cent of turnover – whichever is the greater – in the event of lost or stolen data. Either figure is enough to put a dent, or worse, into a typical SMB.

It therefore pays to remain vigilant, and know where the vulnerabilities in cloud-based software-as-a-service (SaaS) exist – and what you can do.

Welcome to the front door

The Cloud Security Alliance (CSA), a non-profit organisation that promotes best practice and security assurance within cloud services, has identified the primary security risks associated with hosting infrastructures. Top of the list are the account login credentials used to access SaaS, which are susceptible to hackers using methods such as phishing. In its 2019 Data Breach Investigations Report, Verizon noted a shift in focus among criminals adapting their tactics to hack cloud-based email services using stolen credentials.

Once in, an intruder is free to co-opt the account and steal, edit, or delete account data at will (including such vital statistics as billing information). However, there is a secondary risk: once inside, an intruder can gain increasing levels of access to more critical systems – both cloud-hosted and on-premises. A compromised SaaS account can be hijacked to host malicious software that disguises itself to launch cyber-attacks on other systems. This type of “cloudjacking” has claimed some major victims in the last couple of years, including car maker Tesla. Elon Musk’s company was hit by hackers via an exposed Kubernetes administrative console used for cloud application management. They obtained AWS login details and established a cryptojacking operation based on the Stratum mining protocol.

Additional weak points, according to the CSA, are external user interfaces and APIs, such as RESTful APIs deliberately exposed by the service provider for the benefit of developers and third parties, which should be protected but sometimes aren’t. Salesforce in 2018 revealed that a vulnerability affecting customers of its Marketing Cloud Email Studio and Predictive Intelligence services had been caused by an API error.

One of the mantras coming from the cloud community is security – that providers are the experts in data centres and service provisioning, able to secure their systems better than mere mortals. But events have proved they are as human as the rest of us. Malware has historically targeted servers, but recent years have seen attackers go after system-level components, with attacks like Meltdown, Spectre, and Foreshadow exploiting vulnerabilities in the same server CPUs and virtual machines (VMs) that cloud providers use just like everybody else.

Analyst Canalys noted in 2018 how cloud-service providers were quick to reassure customers over the reliability of their services in the wake of Meltdown, but were also likely to try to reduce their reliance on Intel’s Xeon processors that were susceptible to attack so as to avoid becoming exposed in the future. Containers are also proving a risk. A vulnerability discovered in the runC runtime – the basis of Docker and other container engines – lets hackers’ code break out of the container’s sandbox and gain root access to the host server.

When did you last check your AWS S3 security? Here’s four scary words: 17k Magecart infections

READ MORE

Human error is another problem, and it manifests itself in various ways – for instance, in the failure to patch known software and system vulnerabilities. A Ponemon Institute study [PDF] of 3,000 IT pros on behalf of ServiceNow found half of organisations were hit by one or more data breaches in the last two years. The rub is that 34 per cent said they knew their systems were vulnerable prior to attack, and 57 per cent were breached via a known but unpatched vulnerability.

We know data centres are complex environments and vulnerabilities caused by human error are common, but SaaS doesn’t eliminate the risk – it just puts it on a different level. According to Symantec’s latest Internet Threat Security Report, misconfigured servers and cloud infrastructure are a big target for hackers, with 70 million records stolen or lost from poorly configured Amazon Web Services (AWS) S3 buckets in 2018 alone. That’s a problem when you consider that some smaller SaaS providers have chosen to build their services on top of the AWS plumbing, and may therefore have configured things incorrectly.

Practical SaaS security protection

These architectural and plumbing problems can be damaging, though the task of getting around them is not insurmountable. SaaS providers do not go out into the world naked, but come, instead, wrapped in layers of control and defense. Having recognized that their basic security protection is not enough for some, service providers have begun to integrate third-party security tools into their cloud services and certify them for download, giving SMBs a wide range of protections to choose from.

Basic infrastructure-level protection is also common. This includes single sign-on (SSO) for user authentication, which in the context of SaaS lets you access different on- and off-premises systems from the same device. Secure Sockets Layer (SSL) certificates, encryption keys, Kerberos or Security Assertion Markup Language (SAML) protocols, and two-factor (2FA) or multi-factor authentication (MFA) together provide much tighter protection against account takeovers and user credential compromises.

Cloud access security brokers (CASB) are now becoming more widespread and more effective, giving SMBs a defined tool that can automatically identify and control the SaaS applications being accessed by your employees. CASB can monitor and sanction transfer of data between on- and off-premises locations, with the ability to set permissions that govern what data is uploaded and with whom it’s shared.

Virtual firewalls can be deployed directly on SaaS infrastructure to protect data, using software-based micro-segmentation to monitor the workload and application traffic that passes between VMs in the data centre. This can help to prevent the lateral movement of malware across the virtual environment and stop it taking over virtual systems. Such firewalls also help in compliance and security governance, as you can set security policies and establish defined reporting processes. Cloud-based analytics engines can be used to monitor network traffic and content in real time for signs of unusual activity that could indicate a cyberattack is imminent or underway.

Hey China, while you’re in all our servers, can you fix these support tickets? IBM, HPE, Tata CS, Fujitsu, NTT and their customers pwned

READ MORE

If you are running a hybrid cloud – a combination of some service provider and your own, on-prem-based software or data – you have additional options. These include physical firewalls that protect data traffic transmitted between your offices and the SaaS provider, with direct-access virtual private network tunnels, and end-to-end encryption providing additional protection for any device accessing a hosted application.

This, however, does put more responsibility back on you. That could be a problem if you are short of the kinds of IT staff and resources common among most SMBs – and that may have been a factor in your decision to implement SaaS in the first place. Almost half (47 percent) of those surveyed by Ponemon said they had no understanding of how to protect their companies against cyberattacks.

Love it or hate it, SaaS is a force in IT. Want the power of the big dogs minus the hassle or cost of owning the software? SaaS it. Just don’t be lulled into thinking because you’ve outsourced the software all those years spent worrying about keeping applications, systems, data and users safe are also gone. Quite the contrary. Beneath the covers, SaaS is a security spaghetti: a meal of problems from the old world with a fresh selection of new worries running on a bigger scale. There are measures you can take to secure your SaaS – it’s just a matter of knowing what to look for.

Supported by SonicWall.

Sponsored: Balancing consumerization and corporate control

Backdoors won’t weaken your encryption, wails FBI boss. And he’s right. They won’t – they’ll fscking torpedo it

Give it a Wray, give it a Wray, give it a Wray now: Big Chris steps in to defend blowing a hole in personal crypto

FBI head honcho Christopher Wray is rather peeved that you all think the US government is trying to weaken cryptography, privacy, and online security, by demanding backdoors in encryption software.

During a session at the International Conference on Cyber Security at Fordham University, New York, Wray backed a proposal mooted earlier this week by US Attorney General William Barr: that the cops and Feds should be able to spy on end-to-end encrypted chats and the like.

Barr basically wants mobile apps and other software used by people to hold private conversations and protect their files and information to be backdoored so police and g-men, armed with warrants, can gain access to and decrypt said data on demand.

Wray reiterated the same tired talking points as the Attorney General about more and more criminals going dark and so forth, though he then came up with a rather odd declaration.

“I’m well aware that these are provocative subjects in some quarters,” the FBI Director opined. “I get a little frustrated when people suggest that we’re trying to weaken encryption — or weaken cybersecurity more broadly. We’re doing no such thing.”

Except, you know, that’s exactly what he’s calling for. Top crypto boffins agree that putting a backdoor in an encryption system is easy to do, but mathematically difficult, if not impossible, to implement in such a way that unauthorized persons – think miscreants, spies, rogue or bumbling insiders at tech companies – can’t find and exploit said backdoor. Nevertheless, Wray thinks otherwise.

He continued:

It cannot be a sustainable end state for us to be creating an unfettered space that’s beyond lawful access for terrorists, hackers, and child predators to hide. But that’s the path we’re on now, if we don’t come together to solve this problem.

So to those resisting the need for lawful access, I would ask: What’s your solution? How do you propose to ensure that the hardworking men and women of law enforcement sworn to protect you and your families maintain lawful access to the information they need to do their jobs?

Low Barr: Don’t give me that crap about security, just put the backdoors in the encryption, roars US Attorney General

READ MORE

This is where it all goes off the rails. On the one hand, Wray wants to crack encryption so he can snoop on, unmask, and break down the door of, among other scumbags, hackers. And yet, he wants to crack encryption in such a way that, er, hackers can snoop on and unmask citizens by exploiting deliberately introduced weaknesses. In his pursuit of hackers across the nation to protect citizens, he’s potentially tearing down the walls that keep hackers out of citizens’ private spaces.
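
To see why, consider this toy key-escrow sketch in Python – our own illustration of one commonly floated “lawful access” design, not any specific government proposal – in which every session key is also wrapped for an escrow authority:

```python
# Toy key-escrow sketch (illustrative only): the session key is wrapped
# both for the recipient and for an escrow authority. Whoever obtains
# the escrow private key - agency, insider, or attacker - can read
# every message, which is why a backdoor is a single point of failure.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.fernet import Fernet

oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

recipient = rsa.generate_private_key(public_exponent=65537, key_size=2048)
escrow = rsa.generate_private_key(public_exponent=65537, key_size=2048)

session_key = Fernet.generate_key()
ciphertext = Fernet(session_key).encrypt(b"meet at noon")

# The "backdoor": a second copy of the session key, wrapped for escrow.
wrapped_for_recipient = recipient.public_key().encrypt(session_key, oaep)
wrapped_for_escrow = escrow.public_key().encrypt(session_key, oaep)

# Anyone holding the escrow private key decrypts without the recipient.
stolen_key = escrow.decrypt(wrapped_for_escrow, oaep)
print(Fernet(stolen_key).decrypt(ciphertext))  # b'meet at noon'
```

The escrow private key becomes a skeleton key for every message ever sent: whoever steals, leaks, or quietly copies it defeats the encryption wholesale.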

“I know we’ve started hearing increasingly from experts like cryptographers and cryptologists that there are solutions to be had that account for both strong cybersecurity and the need for lawful access,” he rumbled on. “And I believe those solutions will be even better if we seek them together.”

Yes, there will always be “experts” trying to sell the US government lucrative pie-in-the-sky solutions to this backdoor quandary. Any decent proposed solution will face intense testing and scrutiny. Wray also praised some tech corps for working with the FBI. He cited instances where images of children being sexually abused were posted online using an anonymizing app. FBI investigators worked with the app’s developers to identify the perpetrators, and they were then brought to justice, it is claimed. ®

Sponsored: Balancing consumerization and corporate control

Phuck off, phishers! JPMorgan Chase crafts AI to sniff out malware menacing staff networks

Machine-learning code predicts whether connections are legit or likely to result in a bad day for someone

JPMorgan Chase is integrating AI into its internal security systems to thwart malware infections within its own networks.

A formal paper [PDF] emitted this month by techies at the mega-bank describes how deep learning can be used to identify malicious activity, such as spyware on staff PCs attempting to connect to hackers’ servers on the public internet. It can also finger URLs in received emails as suspicious. And it’s not just an academic exercise: some of these AI-based programs are already in production use within the financial giant.

The aim is, basically, to detect and neutralize malware that an employee may have accidentally installed on their workstation after, say, opening a booby-trapped attachment in a spear-phishing email. It can also block web-browser links that would lead the employee to a page that would attempt to install malware on their computer.

Neural networks can be trained to act as classifiers, predicting whether connections to the outside world are legit or fake: bogus connections may well be, for example, attempts by snoopware on an infected PC to phone home, or a link to a drive-by-download site. These decisions are based on the URL or domain name used to open the connection. Specifically, the long short-term memory (LSTM) networks used in the bank’s AI software can predict if a particular URL or domain name is real or fake. The engineers trained theirs using a mixture of private and public datasets.

The public datasets included a list of real domains scraped from the top million websites as listed by Alexa; the team also used 30 different domain generation algorithms (DGAs), of the sort typically employed by malware, to spin up a million fake malicious domains. For the URL data, they took 300,000 benign URLs from the DMOZ Open Directory Project dataset and 267,418 phishing URLs from the Phishtank dataset. The researchers didn’t specify the proportion of data used for training, validation, and testing.
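
Spinning up fake domains in bulk is easy precisely because DGAs themselves are simple; here’s a toy example in Python – illustrative only, not one of the 30 real DGAs the team used – showing how a seeded generator lets malware and its operator derive the same throwaway domains each day:

```python
# Toy domain-generation algorithm (illustrative, not one of the real
# DGAs the researchers used): seeding a PRNG with a shared secret and
# the date means malware and its operator derive identical throwaway
# domains without ever communicating them.
import random
from datetime import date

def dga(seed: str, day: date, count: int = 5, length: int = 12) -> list:
    rng = random.Random(f"{seed}-{day.isoformat()}")
    alphabet = "abcdefghijklmnopqrstuvwxyz"
    return ["".join(rng.choice(alphabet) for _ in range(length)) + ".com"
            for _ in range(count)]

print(dga("botnet-1", date(2019, 7, 29)))
# Gibberish names like 'qhkzvownepyt.com' - the sort of pattern a
# classifier can learn to distinguish from real domains.
```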

You may think just firewalling off and logging all network traffic from bank workers’ PCs to the outside world would do the trick in catching naughty connections. Clearly, though, JPMorgan doesn’t mind its staff reading the likes of El Reg at lunch, and so has turned to machine learning to improve its network monitoring while still allowing outbound connections.

How it works

First, the characters of the URL or domain name to be checked are converted into vectors and fed into the LSTM as input. The model then spits out a probability that the URL or domain name is bogus.
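
As a rough illustration – a minimal Keras sketch of the approach with dummy data, not JPMorgan’s actual architecture or training setup – a character-level LSTM classifier looks something like this:

```python
# Minimal character-level LSTM sketch in Keras - an illustration of the
# approach with dummy data, not JPMorgan's actual architecture.
import numpy as np
import tensorflow as tf

MAXLEN, VOCAB = 64, 128  # truncate/pad URLs to 64 chars; ASCII vocabulary

def encode(url: str) -> np.ndarray:
    ids = [min(ord(c), VOCAB - 1) for c in url[:MAXLEN]]
    return np.array(ids + [0] * (MAXLEN - len(ids)))  # 0 doubles as padding

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(VOCAB, 32),            # chars -> vectors
    tf.keras.layers.LSTM(64),                        # read the sequence
    tf.keras.layers.Dense(1, activation="sigmoid"),  # P(malicious)
])
model.compile(optimizer="adam", loss="binary_crossentropy")

# Dummy stand-ins for the benign and phishing URL datasets described above.
urls = ["https://example.com/login", "http://paypa1-secure.evil/verify"]
X = np.stack([encode(u) for u in urls])
y = np.array([0, 1])
model.fit(X, y, epochs=1, verbose=0)
print(model.predict(X, verbose=0).ravel())  # one probability per URL
```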

AI-powered IT security seems cool – until you clock miscreants wielding it too

READ MORE

The LSTM achieved a performance of 0.9956 (with one being the optimal result) when classifying phishing URLs, and 91 per cent accuracy on DGA domains, with a 0.7 per cent false positive rate. AI is well adapted to discovering the common patterns and techniques used in malicious software, and can even be more effective than traditional URL and domain-name filters.

We asked the eggheads to describe what features the model learned when identifying whether something is benign or malicious, but they declined to comment. It’s probably things like typos in words or random snippets of characters and numbers jumbled together.

“Advanced Artificial Intelligence (AI) techniques, such as Deep learning, Graph analysis, play a more significant role in reducing the time and cost of manual feature engineering and discovering unknown patterns for Cyber security analysts,” the researchers said.

Next, they hope to experiment with other types of neural networks like convolutional neural networks and recurrent neural networks to clamp down on the spread of malware even further. Watch this space. ®

Sponsored: Balancing consumerization and corporate control