50 roles shifted off to India
DXC Technology is sending hundreds of security personnel from its Americas division down the redundancy chute and offshoring some of those roles to low-cost centres, insiders tell us.
As revealed by The Register at the back end of March, the outsourcing badass cum cloud-wannabe confirmed the security practice within the Offering division needs to purge $60m in expenses in the current fiscal 2020, which began on 1 April.
A chunk of that is to be generated by redundancies, with some 300 people – 45 per cent of the US security team – being laid off. We are also told that 50 roles are being moved to India, but it is not clear if other roles will move to centres in the Philippines, Vietnam and Eastern Europe.
Teams across DXC Security in Data Protection and Privacy, Security Incident Event Management, Technical Vulnerability Management, and Security Risk Management are all impacted too. The process started in May and is to be wrapped up by next month.
DXC Security exec: Yes, I’d have thought we’d spend more on certs and laptop kit for staff, too
The entire US Managed Proxy team – save for one engineer who was let go last month to hit financial targets – is to be made redundant on 28 June. But rather than a straight workforce redundancy, this is classified as a workforce migration, we are told.
An impacted DXCer told us the Managed Proxy team were last month given five-and-a-half weeks’ advance notice to help the accounts they manage migrate the design, implementation and support work to a DXC team in India under the control of Biswajeet Rout, who already runs the legacy CSC network, proxy and security team in the country.
One staffer claimed teams are being shunted to India and some are “having to train their replacements who do not have the experience of the staff [being made redundant]”.
We were also told that contractors will be used to cover gaps where full time employees have left the organisation.
El Reg has been told that Mark Hughes, who previously ran BT’s internal tech security and its go-to-market security sales before rocking up at DXC in December, is trying to address changes in the security market involving cloud, AI and automation while also juggling DXC’s desire to reduce the division’s costs by $60m.
Sources told us DXC will try to update skills, concentrate certain roles in global delivery centres that will be created in the US and Europe, and house some lower-margin, or commoditised, security work in lower-cost areas.
Platform DXC will play a major role in automating the division’s service delivery; patching, for example, is one of the areas to be addressed in this way.
Other cost savings are expected to come from things like vendor consolidation: this means there will be fewer certifications to maintain across the various teams, which is costly and time consuming. A team has been assembled to decide which vendors the firm will stick with.
DXC: Slashing costs affects ability to attract, develop and retain staff? Who’d have thunk it!
In related news, sources have also told us that Dean Clemons, global SC&C services leader at DXC, has quit. Quint Ketting has replaced him on an interim basis until a permanent successor is found.
Clemons has warned his troops of “structural changes” – some middle managers have already gone. As he’d said in a March conference call – which El Reg heard a recording of – DXC is moving to a set-up based on industry verticals rather than being practice-specific.
A DXC spokesman told us:
“The security landscape is changing, and our global clients need different types of services as they progress through their digital transformation. At the same time, security skills are becoming both more specialized and more scarce. We therefore need to look worldwide to fulfill these changing requirements.” ®
He then doubled down on spies’ ‘ghost user’ backdoor plan
Solving the Huawei 5G security problem is a question of convincing the Chinese to embrace British “fair play”, security minister Ben Wallace said yesterday without the slightest hint of irony.
During a Q&A at Chatham House’s Cyber 2019 conference, Wallace said the issue of allowing companies from non-democratic countries access to critical national infrastructure was about getting them to abide by, er, Western norms.
The former Scots Guards officer explained: “I take the view: we’re British, we believe in fair play. If you want access to our networks, infrastructure, economy, you should work within the norms of international law, you should play fair and not take advantage of that market.”
Someone speaking later in the conference, who cannot be named thanks to the famous Chatham House Rule*, commented: “If we don’t trust them in the core, why should we trust them in the edge?”
Nonetheless, Wallace later expressed regret at Chinese dominance of the 5G technology world, saying: “The big question for us in the West is actually, how did we get so dependent on one or another? Who is going to be driving 6G? How are we, in our society, going to shape the next technology to ensure our principles are embedded in that tech? That’s a question we should ask ourselves: were we asleep at the wheel for the development of 5G in the first place?”
The security minister also doubled down on GCHQ’s controversial and deeply resented proposal to backdoor all encrypted communications by adding themselves as a silent third participant to chats and calls – thus rendering encryption all but useless.
“Under the British government,” he said, “there is an ambition that there is no no-go area for properly warranted access when required. We would like, obviously, where necessary, to have access to the content of communications if that is properly warranted, oversighted, approved by Parliament through the legislation, of course we would. We’re not going to give up on that ambition… there are methods we can use but it just changes our focus. As long as we do it within the law, well warranted and oversighted.”
This contrasts sharply with previous statements by GCHQ offshoot, the National Cyber Security Centre (NCSC), that the government needs a measure of public support before it starts harming vital online protections. At present, Britain’s notoriously lax surveillance laws allow police to hoover up the contents of your online chats and your web browsing history, including precise URLs. This is subject to an ongoing legal challenge led by the Liberty human rights pressure group.
As the minister of state for security and economic crime, Wallace’s wide-ranging brief covers all national security matters, from terrorism to surveillance powers to seeing hackers locked up.
In his keynote address to the conference, Wallace also declared he wants the British public “protected online as well as they are offline” as he gave the audience of high-level government and private sector executives a whistle-stop tour of current UK.gov policy and spending on cybersecurity. One part of that is a push to get better security baked into Internet of Things devices, part of which is the NCSC-sponsored Secure by Design quasi-standard.
The government has also begun prodding police forces to start setting up cyber crime units, with Wallace confirming that “each of the 43 forces [in England and Wales] now have a dedicated cyber crime unit in place”. ®
* The Chatham House Rule states that what is said at a particular meeting or event may be repeated but not attributed.
NASA’s JPL may be able to reprogram a probe at the arse end of the solar system, but its security practices are a bit crap
Office of the Inspector General brings lab back down to Earth
NASA’s Jet Propulsion Lab still has “multiple IT security control weaknesses” that expose “systems and data to exploitation by cyber criminals”, despite cautions earlier this year.
Following up on a strongly worded letter sent in March warning that NASA as a whole was suffering cybersecurity problems, the NASA Office of the Inspector General (OIG) has now released a detailed report (PDF).
Its findings aren’t great. The JPL’s internal inventory database is “incomplete and inaccurate”, reducing its ability to “monitor, report and respond to security incidents” thanks to “reduced visibility into devices connected to its networks”.
Houston, we’ve had a problem: NASA fears internal server hacked, staff personal info swiped by miscreants
One sysadmin told inspectors he maintained his own parallel spreadsheet alongside the agency’s official IT Tech Security Database system “because the database’s updating function sometimes does not work”.
An April 2018 cyberattack exploited precisely this weakness when an unauthorised Raspberry Pi was targeted by an external attacker.
A key network gateway between the JPL and a shared IT environment used by partner agencies “had not been properly segmented to limit users only to those systems and applications for which they had approved access”. On top of that, even when JPL staff opened tickets with the security helpdesk, some were taking up to six months to be resolved – potentially leaving in place “outdated compensating security controls that expose the JPL network to exploitation by cyberattacks”.
No fewer than 666 tickets with the maximum severity score of 10 were open at the time of the visit, the report revealed. More than 5,000 in total were open.
Indeed, such a cyberattack struck the whole of NASA back in December. Sensitive personal details of staff who worked for the American space agency between 2006 and 2018 were exfiltrated from the programme’s servers – and it took NASA two months to tell the affected people.
Even worse, the JPL doesn’t have an active threat-hunting process, despite its obvious attractiveness to state-level adversaries, and its incident response drills “deviate from NASA and recommended industry practices”. The JPL itself appears to operate as a silo within NASA, with the OIG stating: “NASA officials [did not] have access to JPL’s incident management system.”
Perhaps this report will be the wakeup call that NASA in general, and the JPL in particular, needs to tighten up its act. ®
Privacy browser reckons personalised advertising = personal data processing
Lawyers for the privacy-focused Brave browser have written to the UK’s Information Commissioner’s Office (ICO) with what they claim is evidence that Google’s online ad-selling policies break the EU’s General Data Protection Regulation (GDPR) – namely Article 5(1)(f).
Brave kicked off this fight back in September last year. At the heart of its battle is a claim that “personalised advertising” by Google counts as personal data processing. Broadly, it says Mountain View’s adtech empire is too vast, sprawling and automated to be fully compliant with the law.
Article 5(1)(f) of the GDPR states that personal data must be “processed in a manner that ensures appropriate security… including protection against unauthorised or unlawful processing and against accidental loss, destruction or damage, using appropriate technical or organisational measures”.
In yesterday’s letter, Brave’s lawyers urged the ICO to join its fellow data cops in Ireland with their investigation into Google. They also want the ICO to widen its own enquiries to include 2,000 Google Authorised Buyers, whom it named in a spreadsheet forwarded to the data protection bods, along with strongly worded pleas to start an investigation.
Interestingly, Brave’s Johnny Ryan highlighted a report produced by US adtech critics DCN, which he said proved that online news outlets (i.e. the people who are most voluble about the damage done to their industries by Google and Facebook’s online ad duopoly) would benefit from an EU ban on personal data being used for ad targeting.
In response to all this, an ICO spokesperson told us: “The data protection implications of adtech are of interest to the ICO. We are currently concentrating on the ecosystem of programmatic advertising and real-time bidding (RTB). This aligns with our Technology Strategy, where both online tracking and artificial intelligence are highlighted as priority areas.
“We have been engaging with representatives of the adtech industry and recently hosted an event to discuss the data protection implications of current and future industry practices.” ®
Tick, tick, boom?
Column Last year I bought one of those nifty new fitness tracker wristwatches. It counts my steps and gives me a bit of a thrilling buzz when I’ve reached my daily goal. A small thing, but it means a lot.
This means I’m always under surveillance – in the best possible sense, my fitness tracker has its eye on me, continuously monitoring my motion, inertia, acceleration and velocity. It computes the necessary maths to turn those into steps and (kilo)calories. It keeps an extensive database of my activities, moment to moment.
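The maths involved is simpler than it sounds. As a rough illustration – not how any particular tracker actually works – a pedometer can be sketched as peak detection on the acceleration magnitude, with the 11 m/s² threshold an entirely made-up figure:

```python
import math

# Illustrative step counter: compute the magnitude of each (x, y, z)
# accelerometer sample and count upward crossings of a threshold.
# Real trackers filter noise and adapt thresholds; 11 m/s^2 is an
# arbitrary assumption for this sketch.
def count_steps(samples, threshold=11.0):
    steps = 0
    above = False
    for x, y, z in samples:
        magnitude = math.sqrt(x * x + y * y + z * z)
        if magnitude > threshold and not above:
            steps += 1       # new peak crossed the threshold: one step
            above = True
        elif magnitude <= threshold:
            above = False    # dropped below: ready for the next peak
    return steps

# Two spikes above the threshold, so two steps
walk = [(0, 0, 9.8), (0, 0, 13.0), (0, 0, 9.8), (0, 0, 13.0), (0, 0, 9.8)]
print(count_steps(walk))  # 2
```

Multiply the step count by a stride-length estimate and a calories-per-step figure and you have the rest of the wristwatch's party tricks.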
Put like that, it sounds a bit suspicious. After all, why would anyone or anything need to keep such a close eye on anyone? But if I want to keep myself moving – and motivated – it makes sense to open up my private world, strap a sensor on, and let it listen.
This is a delicate point because our sensors don’t always let us know when they’re listening – something that has come back to bite Amazon, among others. But the bigger question, inevitably, comes down to this: what happens with that data once it’s gathered? Where does it go? How does it get used, and for the benefit of whom?
My fitness tracker is just smart enough to create a data trail, but not quite smart enough to go rogue with the data it gathers. It downloads to an app, and from there I can control its distribution to the world – or so I choose to believe.
But there are far too many other points in this world where data is gathered, invisibly and unacknowledged. That data – even though we generate it – does not belong to us.
I wonder how I’d feel if my fitness tracker fed all my stats to someone else – someone I wouldn’t ever know – and never told me anything. I’d probably wonder why I bothered to wear it, but I’d also worry about how that data might be used. Against me.
Suppose my fitness tracker issued a soft buzz every time I passed a cafe, and told me I’d earned a nice cake? Within a month I’d gain twenty kilos, led down the garden path by a device that had gathered enough intimate details about me to know just the right way to nudge me away from my better interests.
As organisations gather huge stockpiles of data, they seem to grow increasingly tightfisted with their data and insights. They’ve found a gold mine – why share? The problem with this line of reasoning is that it quickly dead-ends in a world where the only conceivable use of data is as zero-sum competitive advantage: “I know something you don’t.”
If a quarter-century of the web has taught us anything, it’s that “a resource shared is a resource squared”. Your data may be nice, my data may be better – but it’s only when we work together that we can make something truly worthwhile.
The standout organisations of the mid-21st century will build value chains for data – paralleling the material value chains that drove the last century. This new age of “data welfare” will see data resources married, multiplied, shared and amplified.
I’m looking forward to a day when my fitness tracker talks to both my GP and my grocer [how about your health insurer? – Ed] so I can keep my health and my diet aligned with my activities. In a world where we’re all in this together, we’ll build bridges with data – not walls. Let the dog-eat-dogs of data warfare sleep. ®
Get thee down to the pub – a fix may be out over the weekend
Docker botherer Quay.io’s webhook integration with Bitbucket is looking a bit green around the gills.
Atlassian had warned that by the end of April 2019 it would be making some wholesale changes to Bitbucket user objects, among others, to hand over a bit more control of what data is available to whom.
To quote an anonymous Register reader: “It appears Quay.io didn’t get Atlassian’s memo.”
He went on to tell us: “I’ve been getting attacked left, right and center by developers since yesterday afternoon.” And an enraged developer can be a fearsome thing.
The problem means that one of Quay.io’s party tricks, automated builds of containers, is a no-no for Bitbucket users using webhooks to link the platforms.
The idea of Quay.io’s service is “to automate your container builds, with integration to GitHub, Bitbucket, and more”. Sure, but only if you keep track of API changes.
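The fragile part is easy to sketch. Here is a hypothetical Python handler – not Quay.io’s actual code – that pulls repository and branch out of a Bitbucket-style push payload; the nested field names are assumptions rather than the documented API, and that is the point: when the payload shape changes, a trigger built like this simply stops firing.

```python
# Hypothetical build-trigger handler for a Bitbucket-style webhook
# push payload. The field names mirror the general shape of such
# payloads but are illustrative assumptions; a change to any of
# these keys silently breaks the trigger.
def extract_build_targets(payload):
    repo = payload.get("repository", {}).get("full_name", "")
    targets = []
    for change in payload.get("push", {}).get("changes", []):
        new = change.get("new") or {}
        if new.get("name"):
            targets.append((repo, new["name"]))  # (repo, branch) to build
    return targets

sample = {
    "repository": {"full_name": "acme/widget"},
    "push": {"changes": [{"new": {"name": "master"}}]},
}
print(extract_build_targets(sample))  # [('acme/widget', 'master')]
```

Defensive `.get()` calls keep the handler from crashing, but they can't conjure builds out of a payload whose keys have moved.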
At the time of publication, the status page for Quay.io notes the borkage (referred to as a “Partial Outage”) as: “Due to a recent change in Bitbucket’s API, Bitbucket triggers are currently non-operative. We are working on a fix to address this change from Bitbucket.”
Which seems a little harsh since Atlassian has hardly concealed its privacy plans. A hardworking support operative at Quay.io confirmed the problem was indeed that pesky API tweak, but said the company’s developers were working to get a fix out over the weekend.
Quay.io is the hosted incarnation of Red Hat’s on-premises container registry service and came as part of the firm’s acquisition of CoreOS at the beginning of 2018. CoreOS purchased Quay back in 2014. A popular pricing option for the hosted service is $60/month for 20 private repos, although solitary devs can score the service for $15/month for five private repositories.
So long as they don’t want any of that automation nonsense with Bitbucket, of course. Until Quay.io deals with the problem, builds will need to be kicked off via a manual upload or some custom git integration. ®
Immerse yourself in forensic training this autumn
Promo If you work in digital forensics or incident response and would like to advance to a higher level, the annual Digital Forensics and Incident Response (DFIR) event staged by security training company SANS is a must.
This year’s SANS DFIR Europe Summit and Training 2019 event takes place in Prague from 30 September to 6 October. The one-day summit on 30 September brings together leading DFIR experts to share their experiences, case studies, and stories from the field. Summit attendees will explore real-world applications of innovative solutions, new tools, techniques, and artifacts from all aspects of the fields of digital forensics and incident response.
Complement your summit attendance and elevate your skills to the next level with the following training courses from 1-6 October. SANS are hosting a range of eight DFIR-focused courses, six of which offer the chance to gain a valuable GIAC certification:
Advanced incident response, threat hunting, and digital forensics
Chances are your systems are already under threat. The key is to be on constant alert for attacks that have found their way past security systems and to catch intrusions in progress, before the hackers have done their worst. Threat-hunting examines the network to spot and stop security breaches, noting malware patterns and behaviours to generate useful threat intelligence.
Advanced network forensics: threat hunting, analysis, and incident response
Whether you’re handling a case of intrusion, data theft, or employee misuse, the network often provides the best evidence. Examine various use cases to learn the skills needed for today’s growing focus on network communications in investigations.
Security essentials bootcamp style
Do you know why some organisations get compromised? Could you find threatened systems on your network? Are you sure all your security devices are effective? Are proper security metrics set up and communicated to your executives? Expert hints-and-tips will help you fight off the cybercriminals.
Windows forensic analysis
The mountains of data commonly held on Windows systems contain evidence of fraud, threats, industrial espionage, employee misuse, and intrusions. Learn how to recover data, track user activity, and organise findings for investigations and litigation. Hands-on lab exercises focus on Windows 7, Windows 8/8.1, Windows 10, Office and Office 365, cloud storage, SharePoint, Exchange, and Outlook.
Mac and iOS forensic analysis and incident response
Apple devices are everywhere, from coffee shops to corporate boardrooms. Acquire the forensic analysis and response skills you need to investigate any Mac or iOS device.
Advanced memory forensics and threat detection
Examine RAM to discover what happened on a Windows system. The course involves freeware and open-source tools, and shows how they work. An introduction to macOS and Linux memory forensics is also included.
Smartphone forensic analysis in-depth
Learn the ins and outs of mobile devices: where to find evidence, how the data got there, how to recover deleted data, how to decode evidence, and how to handle applications that use encryption.
Reverse-engineering malware: malware analysis tools and techniques
A popular course using monitoring utilities, a disassembler, a debugger, and other free tools to examine malicious programs that target Windows systems. End the course with a series of Capture-the-Flag challenges.
Plus: Level Up
Data security breaches and intrusions are growing more complex. Adversaries are no longer compromising one or two systems in your enterprise; they are compromising hundreds. Are your forensic skills up to scratch? SANS Institute has launched a new campaign in EMEA called Level Up to encourage people to test their cyber security knowledge and to help highlight the cyber security skills gap.
Own goal: $280,000 GDPR fine for soccer app that snooped on fans’ phone mics to snare pub telly pirates
La Liga says privacy watchdog is Barca-ing up the wrong tree
A top Spanish soccer body is facing a six-figure GDPR fine for inappropriately and covertly accessing the microphones of fans using its cellphone app.
La Liga – the highest men’s professional division of the Euro nation’s football league system – must cough up the €250,000 ($280,000, £222,000) penalty after it was slapped by Spanish watchdog AEPD for breaking Europe’s tough regulations safeguarding privacy.
Here’s how the soccer organization got the red card, according to news outlet El Diario: the league’s official Android and iOS mobile app, which offered live match scores and has been downloaded roughly 10 million times, would, once given permission, regularly access the microphone to check if the user was in a pub watching the footie on a telly or a similar setting.
If it sounded as though they were in a boozer while glued to a TV, say, the user’s location would be used by the software’s overlords to verify the punter was in an establishment that had all the right paperwork and subscriptions for showing the game in a commercial setting. It was a way to ensure sports bars weren’t showing matches using cheaper home cable or TV packages rather than more expensive commercial subscriptions.
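In pseudo-form, the two-step check described above might look something like this Python sketch – La Liga hasn’t published its implementation, so the function names, the simple symbol-sequence “fingerprints” and the matching logic are all purely illustrative:

```python
# Hypothetical sketch of the check: does the mic audio match the live
# broadcast, and is the user somewhere with a commercial licence?
# Fingerprints are modelled as plain symbol sequences; real acoustic
# fingerprinting is far more involved.
def likely_watching_match(mic_fingerprint, broadcast_fingerprint, threshold=0.8):
    matches = sum(a == b for a, b in zip(mic_fingerprint, broadcast_fingerprint))
    return matches / len(broadcast_fingerprint) >= threshold

def venue_is_licensed(location, licensed_venues):
    return location in licensed_venues

# A close audio match at an unlicensed location would flag the venue
flagged = (likely_watching_match("abcdefghij", "abcdefghij")
           and not venue_is_licensed("Dodgy Arms", {"Fair Play Tavern"}))
print(flagged)  # True
```

The AEPD's objection was not to the mechanism itself but to users being asked for the microphone once while it was sampled as often as once a minute.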
It’s a devious method to keep an eye on pirates skirting their bills, but, as the AEPD ruled, it was also a violation of GDPR.
Year 1 of GDPR: Over 200,000 cases reported, firms fined €56 meeelli… Oh, that’s mostly Google
The data-protection watchdog ruled this week that La Liga did not adequately inform users about its monitoring practices when the software was installed and run on mobile devices: it simply asked once if it could use the mic, rather than make clear it was repeatedly accessing the audio sensor – as much as once per minute during matches. Because of that, it was determined that La Liga was improperly collecting the personal data of users.
Even though La Liga’s app initially asked for permission to access the microphone, users could not be expected to understand and remember exactly how the recordings would be used and what exactly it was they were consenting to, in other words.
La Liga is planning to challenge the ruling in court.
“La Liga disagrees profoundly with this decision, rejects the penalty imposed as unjust, unfounded and disproportionate and considers that the AEPD has not made the necessary efforts to understand how the technology works,” the league told Reuters this week.
“As a result, it will challenge the ruling in court to demonstrate that its actions have always been responsible and in accordance with the law.” ®
Taxpayers taken for ‘mugs’ as UK.gov contract awards surface
Updated AWS has been accused of treating the British public like “mugs” after it emerged HMRC splashed £11m with the cloud giant last year, more than six times the amount it received in corporation tax from the US firm.
In total, the UK government has awarded the cloud arm of Amazon 36 public sector contracts worth £660m in the past four years, according to a study commissioned by the GMB Union.
Some £45.5m was forked out by central government on AWS services last year, with the Home Office the biggest spender at £16m, HMRC coming next and the Department for Work and Pensions at £4m. The DWP payout is in part for hosting bits of the Universal Credit system, the GMB claimed.
The Brit tax collector – which El Reg exclusively revealed had ended its service agreement with UK minnow DataCentred in October 2017 in favour of AWS – received £1.7m in corporation tax from AWS on profits of £72.4m for calendar 2017. DataCentred subsequently went bust.
Amazon exec tells UK peers: No, we don’t want to be dominant. Also, we don’t fancy being taxed on revenues
“Amazon are taking us for mugs,” said Tim Roache, GMB general secretary. “They must quite literally be laughing all the way to the bank – they’re making a profit from government that they refuse to pay their fair share of taxes to.”
The Government Digital Service initiated a public cloud-first procurement policy in 2013, a strategy that suited Microsoft and AWS down to the ground. As of 2017, AWS was the fastest-growing service provider to the UK government, according to TechMarketView.
The policy was drafted in part by Liam Maxwell, who at the time was CTO to HM Government and later became national tech advisor to UK.gov, before he joined AWS in a senior role in October 2018, as we exclusively revealed last summer.
Just last month, Alex Holmes, deputy director of cyber security at the Department for Culture, Media and Sport, was hired by AWS to work in its global public sector division.
Labour’s Rebecca Long-Bailey, Shadow Business Secretary, said: “It is shocking that the government has spent millions with a company that makes massive profits while mistreating its workers and paying barely any tax.”
The GMB and Labour Party also highlighted concerns over Amazon’s health and safety standards, with Freedom of Information requests showing 115 ambulances were called to Amazon’s warehouse in Staffordshire over three years, compared with eight at a nearby Tesco warehouse of the same size.
Chin up, SMEs. You might get crumbs from Big Tech tax clampdown – UK MPs
“We will clamp down on tax avoidance and evasion, and implement our Tax Transparency and Enforcement Programme to build an economy that works for the many, not the few,” Long-Bailey added.
Those with longer memories will recall the messy tech projects that Labour found itself having to correct at a cost to the taxpayer – looking at you, National Programme for IT. Botched projects have happened under both of the major political parties in the UK.
A spokeswoman for AWS sent us a statement: “The report from the GMB is misleading. Here are the facts. In line with the Treasury’s own guidance, public bodies have a responsibility to ensure that the services they procure from the private sector represent good value for money to the taxpayer, and that’s what they’ve found with AWS.
“Government departments using AWS are seeing a 40 per cent to 60 per cent cost saving. They could choose more expensive or less reliable options, but that would be a disservice to their constituents.”
Updated at 14.31 BST to add
A spokesman for UK government sent us a statement:
“Our procurement decisions, including contracts with Amazon Web Services, are based on value for the taxpayer, capability, security and reliability of service.
“We also make sure that large businesses, like all other taxpayers, pay all the taxes due under UK law – there are no special deals and we don’t settle for less.” ®
No backdoor, no backdoor… you’re a backdoor! Huawei won’t spy for China or anyone else, exec tells MPs
‘If we were put under any pressure by any country that we felt was wrong, we would prefer to close the business’
The UK Parliament’s Science and Technology Select Committee yesterday asked experts whether Huawei poses a threat to national security. It was a question whose answers exposed the many problems with trying to ban a manufacturer that has been part of the country’s telecommunications landscape for nearly two decades.
The main event involved the grilling of John Suffolk, Global Cyber Security and Privacy Officer at Huawei – and a former UK government CIO. Norman Lamb MP, chairman of the Commons select committee, kicked off the proceedings by asking the executive about Huawei’s involvement with governments that have records of corruption and human rights abuses – zeroing in on the government of the Xinjiang region of China, which is a customer of Huawei and has been widely reported to carry out illegal detention of Muslim citizens.
Suffolk replied that Huawei was operating in 170 countries, and was always following local laws, without “creating moral judgements.”
Lamb went as far as to claim Huawei was “complicit” in human rights violations, and, of course, the Chinese state was compared to Nazi Germany – you can’t escape Godwin’s law, even offline. Some of the other wonderful things mentioned, as the session went on, included gas chambers and the poisonous gas Zyklon B.
UK cautiously gives Huawei the nod for 5G network gear sales
Next, politicians went straight to the core of the Huawei question: whether it could resist potential attempts by the Chinese state to modify or backdoor its equipment so that it can be used to covertly spy on foreigners abroad.
“We’re quite clear, and it’s quite proven, we’re an independent company,” Suffolk answered. “No one can put us under pressure – we’ve made it very clear, regardless of who the country would be, if we were put under any pressure by any country that we felt was wrong, we would prefer to close the business.”
“That we felt was wrong” is an interesting caveat, we note: if Huawei felt the pressure was justified, would it be happy installing a backdoor? In any case, according to the Huawei man, the much-cited requirement to cooperate with Chinese secret services, and install backdoors in networking gear on demand, simply didn’t exist.
“There are no laws in China that obligate us to work with the Chinese government on anything whatsoever,” Suffolk continued. “We have looked at all of the Chinese laws: we have taken on board professors in Chinese law, and we had their views validated via Clifford Chance in London, and there is no requirement on us or any other company to undertake what you’re suggesting.
“We’ve had to go through a period of clarification with the Chinese government that have come out and made it quite clear that it’s not a requirement on any company.”
Suffolk said Huawei has never built any security holes into its software, but vulnerabilities in the equipment maker’s firmware have emerged, and required regular doses of patches – just like any other kind of software. He then explained the role of the Huawei Cyber Security Evaluation Centre (HCSEC) that attempts to squash the bugs in its software.
“Our model is this: we allow any country and any company to come and review and inspect our products,” said Suffolk. “Not because we expect them to find 100 per cent of the issues, because if we did that, we wouldn’t be in the telecommunications business, we would be in the software engineering business.
“Because we believe passionately that the more people are looking, the more people are inspecting and poking and prodding, the more chance you have to find something.
“We want people to find things – whether they find one thing or 100. We are not embarrassed by what people find. We stand naked in front of the world and it may not be a pretty sight most of the time, but we would prefer to do that because it enables us to improve our products.”
Suffolk also remarked on the complexities of the modern supply chain: “Only about 30 per cent of the components in a Huawei product are Huawei’s – the rest come from the global supply chain. We inspect that global supply chain, by coming in at manufacturing, taking them apart and we check. We are building in segregation of duties, so one person doesn’t have access to all of the products. We limit what engineers can do – so whenever we have a part of the process, we looked to build controls into everything we do, and HCSEC is one of those controls.”
In conclusion, he reiterated that Huawei “has never been asked by the Chinese government, or any other government, to do anything that might weaken security.”
A rinky tinky tinky
Something for the Weekend, Sir?
Access denied. Enter Access Code.
That’s a good start. Just a few moments ago I was handed a card on which is written, in blue ballpoint, a newly compiled string of alphanumerics that is supposed to identify me as a unique user. Oh well, maybe I fumbled the buttons. Let’s try again.
Access denied. Enter Access Code.
I am standing in the driving rain – this is London in the summer – in front of a large electronically operated vehicle barrier that keeps the riff-raff from getting anywhere near the car park and loading bay behind the building where I am to be working this week.
The vertical stainless steel keypad into which I am pushing my access code is weather-resistant. I am not. You’d think they could have installed the keypad at car-window level but no, it’s at lorry level. And it’s not on the driver’s side anyway, so anyone not rolling up in an unmodified US or continental import vehicle is forced to exit and walk over to the access terminal.
Access denied. Enter Access Code.
As far as it is concerned, I am riff-raff. I look behind me to see a steel-grey car has pulled up behind mine. Steel-grey = bland, unimaginative, company car, must be management. As I trudge back towards the street entrance around the corner to ask the security desk for an alternative access code, remembering this time to express an explicit preference for one that actually provides access, I notice the driver in the grey car has started to harrumph.
Security systems like this exist to protect me and my possessions, whether physical or electronic. They keep out the nasties and foil the mischievous. They allow access to the honest and prevent it to the unauthorised.
They are a pain in the arse.
Security is essential, of course, but only for other people. Not me. I’m the nice guy here and this sodding keypad is stopping me from getting in.
But then security authentication is one of those functions whose philosophical concept is hampered by self-contradictory details of its own design. To pick a topical example, it is the right of European Union citizens to enjoy free movement between EU countries without being stopped by border controls. However, how can the border controls know whether you are an EU citizen or not unless they stop you to ask for your EU identification? So it’s only by presenting your passport or ID card that you can exercise your right not to have to present your passport or ID card.
The forces of law and order, from police to night club bouncers, face the same recursive logic. Why do they insist on frisking me? Why can’t they concentrate their stop and search efforts only on those who are carrying concealed weapons?
As they say, there is a fine balancing act between adequate security and easy user experience. My cat has it easy: he was chipped at the rescue centre when we acquired him, and now he just wanders in and out of the house via a cat-flap that unlocks only when it detects his unique code.
The system also allows my cat to entertain himself by sitting indoors, looking through the clear plastic flap and waiting for other cats to come near. When they do, he leans forward so that the electronic detector unclicks the flap, daring the other cat to enter, then chuckles to himself as the potential intruder bashes its head on the door just as it locks itself again automatically.
Mind you, any electronic system has its failings. In the case of the cat-flap, it’s the need to change the batteries. They always seem to run out at 3am on the morning that we’re setting off on holiday and I end up having to race around the neighbourhood hunting for all-night petrol stations that can sell me eight AAs.
Batteries aside, what makes it so consistently reliable for my cat, and only my cat, to come and go without interference is partly the system’s ease of use: his ID is surgically inserted in the scruff of his neck. This kind of tech isn’t exclusive to feline operatives. Employees working in security-critical environments have been known to get chipped in the fleshy bit between thumb and forefinger, allowing them to open electronically locked doors by gesturing an Air Wank.
I did say “partly”. The challenge with digital security systems is that they are fluid and programmable, therefore re-programmable or liable to interference by unwanted external forces. The only reason it works brilliantly for my cat is that the other cats in my neighbourhood don’t have any programming skills. This isn’t the case for humans. For us, whatever security system you roll out has to be protected by additional levels of alternative security, and so the ease-of-use aspect quickly evaporates.
One method that is slowly gaining momentum is ground-level invisibility. If you don’t want social media giants to slurp and misuse your personal data, don’t give them any to start with. For many of us, it’s a bit late to wipe clean our muddy online footprints without expert help but, to mix a clothing metaphor, the sooner you zip up the better.
To my mind, like the first rule of Fight Club, anyone who blogs about IT security is stumbling at the first hurdle. It’s another of those contradictions in data security culture that talking about security in public is likely to make you a target and therefore less secure, and you can’t blame the rest of us for questioning your expertise and motives. It’s a bit like horoscope writers who consistently fail to win the Lottery, or get-rich-quick life coaches who still aren’t rich enough to stop being get-rich-quick life coaches.
Returning to my car with the time-honoured advice “Try it again now” still ringing in my ears as rainwater dribbles down my neck, I see several more cars are queueing behind the grey one, waiting for mine to make way at the front. It is a harrumphing convention but nobody risks stepping out into the rain to assist. Righty, let’s give it a go.
Access denied. Alarm On.
Ooh, that’s a new one. Perhaps I’m getting somewhere. One more try?
Access denied. Commencing Lockdown.
A pair of amber lights illuminate and begin swirling dramatically through the driving rain. A rolling steel shutter shuts off the entrance with a metallic scream. It’s like I’m inside a Ridley Scott movie.
Enter 2FA Code. Press ? For Help.
I oblige and spend the next 10 minutes reading instructions in a 13-character LCD strip above the keypad on how to register myself online as a new user at a website that requires me to override a security warning just to see it, only to discover that I must update Google Authenticator before being asked to point my phone’s camera at the QR code that is now showing on my phone’s display.
The rainstorm intensifies but, hey, look on the bright side: I can no longer hear the harrumphing. It is being drowned out by the honking of car horns.
Oh to be a cat.
Alistair Dabbs is a freelance technology tart, juggling tech journalism, training and digital publishing. He would like to apologise to readers who may recently have lost a loved one in a freak car park barrier accident. He also apologises for failing to warn readers that this week’s column features some strong language and flashing images. @alidabbs
Crime doesn’t pay? Crime doesn’t do secure coding, either: Akamai bug-hunters find hijack hole in bank phishing kit
Absolutely criminal behavior – unrestricted file upload, really?
Exclusive Phishing kits – used by miscreants to build webpages that steal victims’ personal information and money by masquerading as legit websites – harbor vulnerabilities that can be exploited by other miscreants to pilfer freshly stolen data.
It’s not far off burglars breaking into a mafia den to steal loot swiped just hours earlier from a jewelry store.
And while it’s not unknown for software developed by criminals for criminals to be buggy and exploitable, proof of such bungling comes this week from researchers at Akamai who have been studying crimeware for vulnerabilities. They’ve found holes in installations of phishing kits that allow other hackers to sneak in and commandeer operations.
Phishing kits are typically bought or otherwise obtained by criminals to build webpages that are designed to look and function exactly like a legit website, such as a bank’s, in order to fool marks into typing in their usernames and passwords, or handing over personal information, such as driving license or passport scans.
These bogus webpages collect this cyber-booty and pass it along to their masters, and are usually installed on hacked websites for a while, with links spammed out to victims in phishing emails. The key thing, for the crooks, is that the emails and webpages look as genuine as possible.
Akamai senior security researcher Larry Cashdollar, with the help of colleague and researcher Steve Ragan, has found a bunch of phishing kits – particularly those that invite victims to upload files – with classic security vulnerabilities that can be exploited by hackers to take over the installation. That means sites belonging to small businesses, government departments, and so on, that have been compromised to host these phishing pages can wind up being hacked a second time by opportunist thieves seeking to swipe victims’ information for themselves once all the luring emails have been sent out.
“The real risk and concern in this situation goes to the victims: the server administrators, bloggers, and small business owners whose websites are where phishing kits like these are uploaded,” said Cashdollar in a research memo shared with The Register ahead of publication.
“They’re getting hit twice and completely unaware of the serious risk these phishing kits represent.
“While Akamai hasn’t determined if there have been successful secondary attacks due to these vulnerabilities, it’s a real possibility. Many phishing kit developers have a background in application security, and chase bugs like these for money and notoriety. The idea that they would search for, discover, and exploit such flaws for their own gain isn’t a stretch.”
Hacker dishes advanced phishing kit to hook clever staff in 10 mins
Ragan told El Reg the vulnerable kits studied were observed being used by miscreants to impersonate “two known commercial banks, a file storage and sharing service, and one online company that deals with payments,” with at least one of them promoted via phishing emails.
These kits used insecure 2017-era source code lifted from a GitHub repository to implement file uploads: people would be enticed into handing over to fraudsters scans of sensitive documents and similar data via these web forms. However, the code behind the forms performed no security checks or input sanitization, meaning it is possible to upload code to the web server hosting the phishing kit via these forms, such as a PHP webshell, and then open it in your browser to start running it. To open it, you’ll need to figure out the resulting URL for the uploaded file, which shouldn’t be too hard.
At that point, you now, hopefully, have code execution within the phishing site’s environment, with no authentication or passwords needed, and you can launch whatever commands and cron-scheduled scripts you like as the web server process. From there, you can try to elevate your privileges, or simply snoop on victims hitting the phishing site. Most sites hacked to host phishing pages have lax security, making all of this possible.
“These vulnerabilities are exploitable during the upload process, which is where the kit will ask the victim to upload pictures of their IDs, bank card, etc,” Ragan explained. “So if you’re on a domain, find one of these kits, and get to the upload stage, you can instead send a shell as there are no checks with regard to file type.”
Specifically, the kits featured insecure PHP scripts named class.uploader.php, ajax_upload_file.php, and ajax_remove_file.php.
“A user could upload executable code to the web root. If the upload path doesn’t already exist, the uploader class file will create it,” as Cashdollar put it in his research note.
“The code in the file remove script doesn’t sanitize user input from ‘..’ allowing directory traversal, enabling a user to delete arbitrary files from the system if they’re owned by HTTPd. Code cloning and copying is as common in the criminal world as it is in traditional, legitimate application development.
“Server security configuration is rarely hardened, and often file permissions are left wide open allowing full read and write access to directories. Attackers compromising these kits using this vulnerability could gain additional footholds on the web server. One PHP shell and an improperly secured script ran by cron is all an attacker needs to take over the whole server.”
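The two flaws described above – accepting any file type on upload, and a remove script that lets “..” climb out of the upload directory – are textbook checks that the kits skipped. Here’s a minimal illustrative sketch in Python (the kits themselves are PHP; the function names, paths and allowed extensions below are assumptions for the example, not taken from the actual kit code):

```python
import posixpath

# Hypothetical upload directory and allow-list for the example.
UPLOAD_ROOT = "/var/www/uploads"
ALLOWED_EXTENSIONS = {".jpg", ".jpeg", ".png", ".pdf"}

def is_allowed_upload(filename):
    """The file-type allow-list the vulnerable kits omitted: accept only
    plain image/document extensions, so 'shell.php' is rejected instead
    of landing in the web root as executable code."""
    _, ext = posixpath.splitext(filename.lower())
    return ext in ALLOWED_EXTENSIONS

def safe_remove_path(filename):
    """Resolve a user-supplied name inside UPLOAD_ROOT, refusing anything
    that escapes it -- the '..' directory traversal the remove script
    failed to sanitize. Returns the resolved path, or None if unsafe."""
    candidate = posixpath.normpath(posixpath.join(UPLOAD_ROOT, filename))
    if candidate.startswith(UPLOAD_ROOT + "/"):
        return candidate
    return None
```

A kit that performs neither check does exactly what Akamai observed: a webshell sails through the upload form, and a delete request for a `../`-laden name can reach arbitrary files owned by the web server user.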
By the time you read this, Akamai should have more details up online over here. ®
Upgraded its systems after attack in early ’18, just enough to detect attack in late ’18
The Australian National University (ANU) today copped to a fresh breach in which intruders gained access to “significant amounts” of data stretching back 19 years.
The top-ranked Oz uni said it noticed about a fortnight ago that hackers had got their claws on staff, visitor and student data, including names, addresses, dates of birth, phone numbers, personal email addresses, emergency contact details, tax file numbers, payroll information, bank account details and passport details. It said the breach took place in “late 2018” – the same year it ‘fessed up to another lengthy attack.
Students will be miffed to find out that someone knows they had to retake second-year Statistics since academic records were also accessed.
The uni insisted: “The systems that store credit card details, travel information, medical records, police checks, workers’ compensation, vehicle registration numbers, and some performance records have not been affected.”
The news comes less than a year after the Canberra-based uni admitted its networks had been hit by a months-long attack, which many in the country’s media theorised had originated in China – a claim the People’s Republic strenuously denied. At the time, ANU said it had “been working in partnership with Australian government agencies for several months” to fend off the attack.
In a statement released today, the institution’s vice-chancellor, Brian Schmidt, admitted that if the uni had not made those upgrades last year in the wake of the early 2018 attacks, this breach would have gone undetected.
He said: “As you know, this is not the first time we have been targeted. Following the incident reported last year, we undertook a range of upgrades to our systems to better protect our data. Had it not been for those upgrades, we would not have detected this incident.”
Schmidt described the attacker as a “sophisticated operator” and said the uni had “no evidence that research work has been affected”.
The uni is home to the ANU Research School of Astronomy and Astrophysics and operates the country’s largest optical observatory. Among other things, it houses the SkyMapper project, which is robotically creating the “first comprehensive digital survey of the entire southern sky” and has been releasing the data set on the internet.
Interview: AARNet’s Peter Elford on Australia’s national research infrastructure
Boffins at the uni are still looking for human eyeballs to grok Planet 9, the theorised but undiscovered planet beyond Pluto, in images released by the project. Those interested can seek it or other objects at our solar system’s edges here.
ANU is also home to iTelescope.Net, which looks after a network of internet-connected public telescopes popular among amateur and semi-professional astronomers across the globe.
The place is ranked 24th in the QS World University Rankings and has a strong academic reputation. According to the rankings, it has more citations per faculty member than Cambridge.
The vice-chancellor, who chummily signed off as “Brian”, said:
For the past two weeks, our staff have been working tirelessly to further strengthen our systems against secondary or opportunistic attacks. I’m now able to provide you with the details of what occurred.
We believe there was unauthorised access to significant amounts of personal staff, student and visitor data extending back 19 years.
Depending on the information you have provided to the University, this may include names, addresses, dates of birth, phone numbers, personal email addresses and emergency contact details, tax file numbers, payroll information, bank account details, and passport details. Student academic records were also accessed.
The University has taken immediate precautions to further strengthen our IT security and is working continuously to build on these precautions to reduce the risk of future intrusion.
The uni set up dedicated phone and email help lines and increased its “counselling resources” for those affected.
Not to let us down, the outfit said it took the breach “extremely seriously” and had “profound regret”.
As the uni’s motto, Naturam Primum Cognoscere Rerum*, attests, above all, find out the “nature of things”. Perhaps the next upgrade will help it to actually fend off an attack. ®
* Derived from the Lucretius poem “De Rerum Natura” (book III, 1072)… the point of the poem was to explain Epicurean philosophy – moderation in everything – to a Roman audience.
Another regulator lines up to have a kick
Google shares took more than a 6 per cent tumble this morning as twitchy investors heard reports the US Department of Justice was about to launch a major investigation into the ad giant’s business practices.
Reports from “people familiar with the matter” suggest America’s Federal Trade Commission has played a role in preparations but is handing over the reins to the DoJ.
It is not clear which of Alphabet/Google’s many tentacles the DoJ is particularly interested in.
The FTC last probed Google back in 2013, but the commish cleared Google of biasing its search results, though the giant conceded that it would license mobile patents under FRAND rules and not fling them around in litigation against rivals.
Mountain View has battled regulators on many fronts in Europe, including investigations into its mobile operating system, online ad sales system and comparison shopping service as well as broader privacy probes.
I don’t hate US tech, snarls Euro monopoly watchdog chief – as Google slapped with €1.49bn megafine
The search and ad giant is still under investigation by the Irish data protection regulator and has already settled an EU case with a payment of €1.49bn to stop action being taken against its online ad broker system.
The Irish Data Protection Commissioner is following up a complaint that Google’s DoubleClick is in breach of Europe’s new data protection laws, the General Data Protection Regulation (GDPR).
The accusation centres on allegations that Google routinely leaks private data about users to its ad-matching service.
Google has also been forced to take restorative action to stop a probe into claims it unfairly favoured its own shopping comparison service over those provided by other companies.
And of course that followed a €4.3bn European Commission fine last year for abusing its Android platform to unfairly favour its own search services.
In the US, Google has had an easier ride from regulators – although some potential Democratic presidential candidates, like Elizabeth Warren, have suggested taking tough action against tech giants including Google.
Some Republicans have also complained that Google, Facebook, Twitter and other Silicon Valley darlings are guilty of anti-conservative bias. ®
More facial-recognition bans, new creeper tool links girlfriends to past porno, Microsoft’s AI school, and more
Plus machine systems can trounce humans at Quake III flag captures
Roundup Let’s get right to it: here’s your latest roundup of recent machine-learning related news beyond what we’ve already reported.
Cheap human labor is remote controlling Kiwibots: Food delivery machines known as Kiwibots may look dinky and sweet, trundling slowly across the campus of the University of California, Berkeley, to bring students food.
Their screens light up with a pair of eyes that can blink and wink, but the bots aren’t as smart as they make out: underneath the cuteness is a team of humans who work to keep them on track, because the machines have no idea where they’re going.
Operators are outsourced from Colombia and have to figure out “waypoints” to trace a path for the Kiwibots to follow. They update the delivery robots with directions every five to ten seconds, and are paid less than $2 per hour, according to the San Francisco Chronicle. During that cheap hour, operators can aid the robots in up to 15 trips, which – let’s face it – is cheaper than developing your own AI system.
The robots are decked out with GPS, and the operators can see where they are on a street map while a camera feed shows their local surroundings. It looks like Kiwibots use some sort of AI software to avoid collisions, but nothing that would help them navigate autonomously.
Kiwi is a 2017 startup based in Berkeley. It currently only delivers food around UC Berkeley and parts of the local surrounding neighbourhood, but has aspirations to spread its Kiwibots to other college campuses across America.
Check if your girlfriend has been in a porno with AI. Just eeewww: A Chinese techie living in Germany has set tongues wagging after he claimed to have developed a facial recognition system that can link XXX-movie actresses’ faces to social media profile selfies.
He anonymously posted the announcement on the Chinese social media platform Weibo, where it was spotted by a Stanford PhD student.
A Germany-based Chinese programmer said he and some friends have identified 100k porn actresses from around the world, cross-referencing faces in porn videos with social media profile pictures. The goal is to help others check whether their girlfriends ever acted in those films. pic.twitter.com/TOuUBTqXOP
— Yiqin Fu (@yiqinfu) May 28, 2019
The tool is to check whether your girlfriend has featured in an adult flick, apparently. Unsurprisingly, the reactions have been mixed: some are excited, and some are disgusted. Besides all the serious ethical and legal questions, there are technical ones too.
How would such a system work? The developer claims to have scraped over 100TB of data from various porn sites, which a machine learning model can use to cross-reference profile pictures from social media platforms like Facebook, Instagram, or Weibo. It sounds as though, in order for something like this to be effective, you have to build a massive database of images from porn videos.
There’s reason to be skeptical even if the developer claims to have identified over 100,000 porn actresses. What happens if the image quality is poor? Or if there isn’t much data to go on in the videos or from social media profiles? Can something like this really be scaled up across all adult clips? There are already so many technical issues with facial recognition.
Initially, he didn’t seem to feel bad about creating such an abhorrent tool. He said he hadn’t shared any data or the database for people to use, and that sex work is legal in Germany, where he lives. Also, he’d be up for building a tool to scrutinize male porn stars too, although after the backlash he appears to have given up on the scheme.
But the anonymous coder has since apologized for the tool, and has, apparently, deleted all the data and discontinued the project, according to MIT Tech Review.
AI bots can play Capture the Flag mode in Quake III: Researchers at DeepMind have trained a team of machine learning agents to play cooperatively with each other in a game of Capture The Flag in Quake III Arena.
Capture The Flag is a popular game in first-person shooters. The goal is to capture the opponent team’s flag and get it to your base, while protecting your own. It requires teamwork, something that is learnt by the bots over time. They pick up on certain behaviors, such as defending their home base, camping around their opponent’s base and following their teammates around the map.
The researchers don’t explicitly code in any hard rules, and the only reward signal is whether the team has won or lost. The bots were trained using reinforcement learning over 450,000 games. After this, they were pitted against 25 humans in 100 games. The humans only won about 30 per cent of the time, according to the results published in a Science paper this week.
Multi-agent training is pretty cool and all, but when is it going to extend to something actually useful in the real world, eh? Still, DeepMind is adamant that it will have potential one day.
“In general, this work highlights the potential of multi-agent training to advance the development of artificial intelligence: exploiting the natural curriculum provided by multi-agent training, and forcing the development of robust agents that can even team up with humans,” it said.
Microsoft’s AI Business School is open to, erm, government agencies now!: Remember Microsoft’s free course AI Business School? No? Well, okay, it’s an online series teaching leadership skills that apply to AI and the big bad world of business. Microsoft has just added some lessons for people working in government.
These include a lecture on how officials can identify opportunities to use AI, and two case studies in how technology is being used to develop smart cities in Finland and chatbots for government websites.
“Leaders in the public sector are often faced with unique challenges when considering how to apply AI to improve the speed and quality of the government services they offer their citizens,” said Mitra Azizirad, corporate vice president for Microsoft AI marketing.
“The opportunities and scenarios for AI in the public sector are ever increasing, which can make deciding where and how to apply it quite daunting. This is precisely why we expanded Microsoft’s AI Business School to now include a specifically tailored and targeted public sector curriculum to help these leaders address their citizens’ unique needs.”
If any of that sounds remotely interesting to you at all, then here’s a link to all the classes.
Michigan may be next to ban facial recognition for law enforcement: San Francisco was the first city to do it, and now Michigan may be the first state to ban the technology.
State Senator Peter Lucido, a Republican in Lansing, Michigan, drafted a bill that would prevent law enforcement from using facial recognition technology.
“A law enforcement official shall not obtain, access, or use any face recognition technology or any information obtained from the use of face recognition technology to enforce the laws of this state or a political subdivision of this state,” according to the bill introduced this week.
It also says that any evidence or search and arrest warrants obtained through the use of the technology would be unconstitutional, violating the Fourth Amendment.
It’s very early days yet, and the bill will have to pass through various committees before it can be considered by the Senate and House of Representatives. Keep your eyes peeled. ®
Brit spies’ idea would backdoor WhatsApp et al without breaking the crypto
Bruce Schneier, Richard Stallman and a host of western tech companies including Microsoft and WhatsApp are pushing back hard against GCHQ proposals to add a “ghost user” to encrypted messaging services.
The point of that “ghost user”, as we reported back in 2018 when this was first floated in its current form, is to apply “virtual crocodile clips” and enable surveillance by spies, police, NHS workers and any others from the long list of state organisations allowed to snoop on your day-to-day life.
“Although the GCHQ officials claim that ‘you don’t even have to touch the encryption’ to implement their plan, the ‘ghost’ proposal would pose serious threats to cybersecurity and thereby also threaten fundamental human rights, including privacy and free expression,” said a letter (PDF, 9 pages, 300kB) signed by around 50 prominent individuals and organisations.
Those signatories include the aforementioned luminaries and tech firms as well as Apple, the Tor Project, pro-freedom pressure and lobby groups such as the Electronic Frontier Foundation, Big Brother Watch, Liberty, Privacy International and more.
“In particular,” the letter said, “the ghost proposal would create digital security risks by undermining authentication systems, by introducing potential unintentional vulnerabilities, and by creating new risks of abuse or misuse of systems.”
The thrust of the letter is not that the method is technically unviable; rather, it argues that “loss of trust” in communications services would have a range of negative effects, both predictable and unpredictable. Not only that, it also warns that introducing this backdoor through software updates (how else?) would cause users to simply stop installing privacy-killing updates from manufacturers, with the attendant security risks:
“Individual users aware of the risk of remote access to their devices could also choose to turn off software updates, rendering their devices significantly less secure as time passed and vulnerabilities were discovered [but] not patched.”
The missive also warned that Britain’s lax surveillance laws could see the proposal implemented anyway without the public knowing, thanks to what it described as “the power to impose broad non-disclosure agreements that would prevent service providers from even acknowledging they had received a demand to change their systems, let alone the extent to which they complied”.
For his part, Ian Levy, the National Cyber Security Centre co-author of the original GCHQ proposal, said in a statement:
“We welcome this response to our request for thoughts on exceptional access to data – for example to stop terrorists. The hypothetical proposal was always intended as a starting point for discussion. We will continue to engage with interested parties and look forward to having an open discussion to reach the best solutions possible.”
In his original proposal, Levy had rather optimistically hoped that the discussions could happen “without people being vilified for having a point of view or daring to work on this as a problem”. In the post-Snowden environment, and in light of various revelations and disclosures about what British spies get up to, it’s not easy for the agencies to build the public trust they’re hoping for.
Jake Moore, a security specialist from infosec biz ESET, opined: “This makes a mockery of the fundamental basics of encryption. Not only is it going against what privacy is all about: if you create a backdoor for the good guys, the bad guys won’t be far behind.”
The letter was also copied to audit agency the Investigatory Powers Commissioner’s Office (IPCO). Billed publicly as the regulator of surveillance in the UK, IPCO mostly trawls through spies’ logs of who they spied on, after the event. ®
OEMs toe the line for that sweet, sweet marketing moolah
This week at Computex in Taiwan, Chipzilla finally shared the specific details about Project Athena – its valiant attempt to tell PC makers how to do their job.
This “innovation program” [PDF] focuses on laptop design and the choice of components other than Intel’s own silicon. It aims to set specifications for the parts of a system that affect battery life, startup time, suitability for modern machine-learning workloads, and cyber security.
By doing this, the chip maker is no doubt hoping to breathe new life into the expensive “ultrabook” device category, which would translate into shifting larger numbers of expensive chips.
The existence of Project Athena was officially confirmed in January at CES, and Intel said the qualifying laptops would mostly be based on Ice Lake, its perpetually-delayed, low power 10nm processor family.
Before sharing any details, Chipzilla announced that laptops certified under the program would appear in the second half of the year across both Windows and Chrome devices – even though the two platforms couldn’t be more different.
This, and the deluge of buzzwords like 5G and artificial intelligence, firmly established Athena as yet another marketing exercise – although one with participation from Acer, Asus, Dell, Lenovo, HP, Samsung, Quanta and other businesses whose bottom line depends on PC sales.
Even the video produced for the occasion featured somebody playing Anthem (the video game) with a wireless PlayStation 3 controller on PC – something that’s not technically possible.
As part of Athena, Intel promised an annual review outlining platform requirements, benchmarking targets defined by real-world usage models, co-engineering support, and certification.
Intel also outlined some of the hoops vendors would need to jump through to get certified with Athena.
The 1.0 target specification is based around KEIs, or “key experience indicators”. These (obviously) include having a Core i5 or i7 processor, at least 8GB of RAM and more than 256GB of SSD storage.
They also include “consistent responsiveness on battery” – i.e. the laptop shouldn’t care whether it’s plugged in. In practical terms, that means at least 16 hours of battery life in local video playback mode, at least nine hours under real-world performance conditions, and system wake from sleep in under a second.
It will come as no surprise that most of the platform-level requirements mandated for Athena are simply 10th-gen Core CPU features – like integrated Thunderbolt 3 and Wi-Fi 6 support.
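To make the checklist concrete, the targets described above can be sketched as a simple compliance check. This is a toy illustration only, with invented field names – the real Athena 1.0 spec is a vendor checklist document, not an API:

```python
# Hypothetical sketch of the Athena 1.0 KEI targets described above.
# All field names are invented for illustration purposes.
ATHENA_1_0 = {
    "cpu_families": {"Core i5", "Core i7"},
    "min_ram_gb": 8,
    "min_ssd_gb": 256,
    "min_battery_video_hours": 16,       # local video playback target
    "min_battery_real_world_hours": 9,   # real-world performance target
    "max_wake_seconds": 1.0,             # wake from sleep in under a second
}

def meets_athena(laptop: dict) -> bool:
    """Return True if a candidate laptop hits every KEI target."""
    return (
        laptop["cpu"] in ATHENA_1_0["cpu_families"]
        and laptop["ram_gb"] >= ATHENA_1_0["min_ram_gb"]
        and laptop["ssd_gb"] >= ATHENA_1_0["min_ssd_gb"]
        and laptop["battery_video_hours"] >= ATHENA_1_0["min_battery_video_hours"]
        and laptop["battery_real_world_hours"] >= ATHENA_1_0["min_battery_real_world_hours"]
        and laptop["wake_seconds"] <= ATHENA_1_0["max_wake_seconds"]
    )

candidate = {
    "cpu": "Core i7", "ram_gb": 16, "ssd_gb": 512,
    "battery_video_hours": 17, "battery_real_world_hours": 10,
    "wake_seconds": 0.8,
}
print(meets_athena(candidate))  # True
```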
The first laptops to support Athena are the Acer Swift 5, Dell XPS 13, HP Envy 13 and Lenovo Yoga S940.
The entire project reminds us of the cringeworthy “PC Does Whaaat?” advertising campaign that Intel cooked up with Microsoft, Dell, Lenovo and HPE back in 2015, as it was trying to convince the market that old laptops were no longer fit for purpose. It wasn’t received well. ®
Truth, Justice, and the American Huawei: Chinese tech giant tries to convince US court ban is unconstitutional
They think it’s a level playing field. How sweet
Huawei is trying to have a key part of American lawfare against the Chinese company thrown out by a US court – on the grounds it breaks the United States constitution.
The telecoms kit manufacturer is arguing that section 889 of the National Defense Authorization Act (NDAA) 2019 is unconstitutional under US law because it targets a single entity – Huawei itself – for “trial by legislature”.
Song Liuping, Huawei’s chief legal officer, said at a press conference earlier today: “The fact is, the US government has provided no evidence to show that Huawei is a security threat. There is no gun, no smoke. Only speculation.”
The NDAA bans US government agencies and their contractors from doing business with Huawei. Given that most large companies end up chasing government business sooner or later, this amounts to a de facto ban on Huawei equipment.
Song also said that Huawei believes “that US politicians are using cyber security as an excuse to gain public support for actions that are designed to achieve other goals” – a clear reference to the ongoing US-China trade war.
Huawei claims that the various US laws and sanctions against it amount to punishment without trial, imposed in a manner that does not allow it to present a proper defence. It has filed a motion for summary judgment in its favour in the US District Court for the Eastern District of Texas, which is scheduled to be heard on 19 September this year.
The Chinese gear maker also warned the US sanctions would “harm more than 1,200 US suppliers” – an unsubtle reminder that trade restrictions hit domestic industries as well as foreign businesses. ®
Just a 16% chance of being banged up for computer misuse
Analysis Nearly 90 per cent of hacking prosecutions in the UK last year resulted in convictions, though the odds of dodging prison remain high, an analysis by The Register has revealed.
Government data from the last 11 years revealed the full extent of police activity against cybercrime, with the number of prosecutions and cautions for hacking and similar offences being relatively low.
Figures from HM Courts and Tribunals Service revealed a total of 422 prosecutions brought under the Computer Misuse Act 1990 (CMA) over the last decade, a figure that rises to 441 if the year 2007 is included.
Criminals convicted of CMA offences were quite likely to avoid prison in 2018, with just nine (including young offenders sent to youth prisons) receiving custodial sentences out of 45 convictions. Among those were Mustafa Ahmet Kasim, the first person ever to be prosecuted under the CMA by the Information Commissioner’s Office. A further dozen CMA convicts received suspended sentences in 2018.
Between 2008 and 2018, 79 people – 24 per cent of the total prosecuted in that period – were found not guilty at court or otherwise had their cases halted. Of those found guilty, 16 per cent were given immediate custodial sentences, a figure that rises to 45 per cent if suspended sentences are included.
The CMA is the main statute used to prosecute hackers, as well as some data-related crimes such as securing unlawful access to computers and their contents.
The odds of getting off with a police caution instead of a full-blown prosecution for a CMA offence were exactly 50:50 in 2018, with 51 cautions issued alongside 51 criminal court cases. In those 51 prosecutions, 45 defendants were found guilty, a rate of around 90 per cent – slightly above the average across all criminal offences, where roughly 75-80 per cent of prosecutions end in a guilty verdict.
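A quick sketch checking the arithmetic behind those figures (variable names are ours, not the courts service’s):

```python
# CMA prosecution figures for 2018, as quoted above.
prosecutions = 51   # criminal court cases brought under the CMA
convictions = 45    # defendants found guilty
cautions = 51       # police cautions issued instead of prosecution

conviction_rate = convictions / prosecutions
caution_odds = cautions / (cautions + prosecutions)

print(f"Conviction rate: {conviction_rate:.0%}")   # 88% – "around 90 per cent"
print(f"Chance of a caution: {caution_odds:.0%}")  # 50% – the 50:50 split
```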
The 2013 jump in prosecutions could be explained by that being the statistical year after Theresa May, as Home Secretary, withdrew her extradition order against accused hacker Gary McKinnon, signalling a greater willingness to prosecute at home rather than extradite.
Among the lucky six to be found not guilty in 2018 or who otherwise had their cases stopped was Crown court judge Karen Jane Holt, aka Karen Smith, whose prosecution under the Computer Misuse Act was halted by order of another Crown court judge.
The most common range of fines fell between £300 and £500, with one criminal having been fined more than £10,000 last year – the only one to be so punished since 2012. In general, around five fines were issued per year for the last 11 years.
In the 11 years’ worth of data analysed by The Register, just one person walked away with an absolute discharge from court (in 2017) after being found guilty. A maximum of six people per year received conditional discharges, with last year featuring just three. Community sentences accounted for a total of 95 disposals from court since 2007, with 15 of those having been handed out last year.
Don’t worry about rotting behind bars
Even when a prison sentence was handed down by judges, the duration was relatively short. Over the past decade, the most frequent sentence lengths fell between 6-9 months and 18-24 months. Current UK sentencing law automatically halves prison sentences in favour of release on licence, with actual release usually coming slightly earlier than the halfway point, as a criminal barrister explains on his website.
The figures could be interpreted to show that Britain is a relatively forgiving jurisdiction for computer hacking crimes, something this story advertising an IT security startup staffed with young grey hats may or may not bear out.
It is important to note that not every CMA prosecution stems from hacking, though the law is often used for hacking cases. In 2017, a former Harrods IT worker, Pardeep Parmar of Hitchin, pleaded guilty to a CMA offence after being let go from the posh department store and taking his work laptop to a local computer shop, asking to have it taken off their domain. Similarly, a former Santander bank clerk, Abiola Ajibade of Consort Road, Southwark, pleaded guilty after being caught accessing customers’ details and sending them to her then boyfriend.
“Cybercrime has become accepted as a low-risk, potentially high-reward activity for organised criminals. If they act professionally, they can make substantial sums of money with very little chance of being caught,” opined Richard Breavington of law firm RPC, which also obtained some of the data. ®
Immersive training covering ethical hacking to intrusion detection, and more, comes to UK capital this June
Promo IT security training specialist SANS Institute is bringing a major event to London this summer, offering a bumper programme of intensive courses designed to arm security professionals with the skills they need to defend against database breaches and malicious attacks.
Attendees have the chance to prepare for valuable GIAC certification and will be able to put their newfound knowledge to good use immediately. The event takes place from 3 to 8 June at the Grand Connaught Rooms, offering a range of ten courses for all levels. Attendees will also be able to test their competitive skills in the SANS CORE NetWars tournament.
Course topics include:
- Advanced penetration testing, exploit writing, and ethical hacking
- Designed for those with some penetration-testing knowledge, the course takes students through dozens of real-world attacks. Discussion of each attack is followed by exercises in a hands-on lab.
- Intrusion Detection In-Depth
- Mostly but not solely for security analysts. Learn to determine whether an intrusion detection system alert is noteworthy or a false indication. Daily hands-on exercises reinforce the material.
- Windows Forensic Analysis
- How to recover and analyse forensic data on Windows systems, track user activity on your network and organise findings for incident response, investigations or litigation.
Plus much more besides.
Check out the full agenda here.
In addition, SANS Institute has launched a new campaign in EMEA called Level Up to encourage people to test their cyber security knowledge and to help highlight the cyber security skills gap.
Starting with a short, fun test covering topics such as encryption, two-factor authentication, hashing, penetration testing, and incident response, Level Up aims to attract potential new cyber security professionals into the industry. It also aims to give existing industry professionals an idea of what skills they should look to develop next, and why it’s so important to keep them up to date.
The Level Up website also features videos and case studies of some of SANS’s instructors and top industry experts, talking about how they got into cyber security as a career, why it’s so important, and how they have developed their careers.
The next Level Up event takes place on 4 June at SANS London. That Tuesday night event isn’t just open to students attending the SANS event that week, it’s also open separately to interested parties who can sign up for free here.
British Army cyber ‘n’ psyops unit 77 Brigade can’t even brainwash civvies into helping it meet recruitment targets
The British Army’s psyops unit 77 Brigade is still falling short of recruiting targets, despite cyber skills being bigged up repeatedly by the military and government.
The unit – whose remit covers information operations, psyops and similar shady things – has continued its struggle to attract part-time recruits, according to figures released under the Freedom of Information Act.
Despite its target headcount having been increased from 448 to 474 people between January 2017 and mid-2018, a rise of 5.8 per cent, in June 2018 the unit had 340 on strength – a shortfall of 134 personnel, or 28 per cent.
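A quick check of the arithmetic, using the FoI figures quoted above (variable names are our own):

```python
# 77 Brigade headcount figures from the Freedom of Information release.
target_jan_2017 = 448
target_mid_2018 = 474
strength_jun_2018 = 340

target_growth = (target_mid_2018 - target_jan_2017) / target_jan_2017
shortfall = target_mid_2018 - strength_jun_2018
shortfall_pct = shortfall / target_mid_2018

print(f"Target increase: {target_growth:.1%}")                 # 5.8%
print(f"Shortfall: {shortfall} people ({shortfall_pct:.0%})")  # 134 people (28%)
```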
Another way of looking at the stats is that the crafty tricks brigade grew its combined full-time and part-time headcount by 64 over 18 months, albeit more slowly than it should have done.
Figures analysed by The Register show that the unit seems to have greater difficulty recruiting part-timers from the civilian world than it does in recruiting and keeping full-time soldiers.
While the numbers are an improvement over the 40 per cent shortfall that The Register reported in 2017, the continually missed targets reflect the British Army’s ongoing recruitment problems in general as well as the broader shortage of cyber security skills in the armed forces.
Breaking down the figures, 77 Brigade’s 2018 targets were to employ 203 full-timers and 271 part-timers to achieve its mission of being an “elite unit of hackers, propagandists and ne’er-do-wells who crawl social media to plant stories, influence opinion and generally manipulate things on behalf of government” as some crafty joker who hijacked their Twitter account summarised 77 Brigade’s purpose.
The unit missed both targets: it actually employed 190 full-timers and 150 reservist part-timers. The relevant numbers:

            Target   Actual   Shortfall
Full-time      203      190          13
Part-time      271      150         121
Total          474      340         134
The Ministry of Defence has been asked to comment.
77 Brigade forms one of the key parts of the armed forces that meets the government’s oft-trumpeted “offensive cyber” capability, as referenced over the past couple of days by both Defence Secretary Penny Mordaunt and Foreign Secretary Jeremy Hunt.
Part of the cause may be infamous outsourcing giant Capita, which handles all Army recruiting matters thanks to the disastrous outsourcing contract which continues to hobble the military. ®