Really interesting article by and interview with Paul M. Nakasone (Commander of U.S. Cyber Command, Director of the National Security Agency, and Chief of the Central Security Service) in the current issue of Joint Forces Quarterly. He talks about the evolving role of US Cyber Command, and its new posture of “persistent engagement” using a “cyber-persistent force”:
From the article:
We must “defend forward” in cyberspace, as we do in the physical domains. Our naval forces do not defend by staying in port, and our airpower does not remain at airfields. They patrol the seas and skies to ensure they are positioned to defend our country before our borders are crossed. The same logic applies in cyberspace. Persistent engagement of our adversaries in cyberspace cannot be successful if our actions are limited to DOD networks. To defend critical military and national interests, our forces must operate against our enemies on their virtual territory as well. Shifting from a response outlook to a persistence force that defends forward moves our cyber capabilities out of their virtual garrisons, adopting a posture that matches the cyberspace operational environment.
From the interview:
As we think about cyberspace, we should agree on a few foundational concepts. First, our nation is in constant contact with its adversaries; we’re not waiting for adversaries to come to us. Our adversaries understand this, and they are always working to improve that contact. Second, our security is challenged in cyberspace. We have to actively defend; we have to conduct reconnaissance; we have to understand where our adversary is and his capabilities; and we have to understand their intent. Third, superiority in cyberspace is temporary; we may achieve it for a period of time, but it’s ephemeral. That’s why we must operate continuously to seize and maintain the initiative in the face of persistent threats. Why do the threats persist in cyberspace? They persist because the barriers to entry are low and the capabilities are rapidly available and can be easily repurposed. Fourth, in this domain, the advantage favors those who have initiative. If we want to have an advantage in cyberspace, we have to actively work to either improve our defenses, create new accesses, or upgrade our capabilities. This is a domain that requires constant action because we’re going to get reactions from our adversary.
Persistent engagement is the concept that states we are in constant contact with our adversaries in cyberspace, and success is determined by how we enable and act. In persistent engagement, we enable other interagency partners. Whether it’s the FBI or DHS, we enable them with information or intelligence to share with elements of the CIKR [critical infrastructure and key resources] or with select private-sector companies. The recent midterm elections is an example of how we enabled our partners. As part of the Russia Small Group, USCYBERCOM and the National Security Agency [NSA] enabled the FBI and DHS to prevent interference and influence operations aimed at our political processes. Enabling our partners is two-thirds of persistent engagement. The other third rests with our ability to act — that is, how we act against our adversaries in cyberspace. Acting includes defending forward. How do we warn, how do we influence our adversaries, how do we position ourselves in case we have to achieve outcomes in the future? Acting is the concept of operating outside our borders, being outside our networks, to ensure that we understand what our adversaries are doing. If we find ourselves defending inside our own networks, we have lost the initiative and the advantage.
The concept of persistent engagement has to be teamed with “persistent presence” and “persistent innovation.” Persistent presence is what the Intelligence Community is able to provide us to better understand and track our adversaries in cyberspace. The other piece is persistent innovation. In the last couple of years, we have learned that capabilities rapidly change; accesses are tenuous; and tools, techniques, and tradecraft must evolve to keep pace with our adversaries. We rely on operational structures that are enabled with the rapid development of capabilities. Let me offer an example regarding the need for rapid change in technologies. Compare the air and cyberspace domains. Weapons like JDAMs [Joint Direct Attack Munitions] are an important armament for air operations. How long are those JDAMs good for? Perhaps 5, 10, or 15 years, sometimes longer given the adversary. When we buy a capability or tool for cyberspace…we rarely get a prolonged use we can measure in years. Our capabilities rarely last 6 months, let alone 6 years. This is a big difference in two important domains of future conflict. Thus, we will need formations that have ready access to developers.
Solely from a military perspective, these are obviously the right things to be doing. From a societal perspective — from the perspective of a potential arms race — I’m much less sure. I’m also worried about the singular focus on nation-state actors in an environment where capabilities diffuse so quickly. But Cyber Command’s job is not cybersecurity and resilience.
The whole thing is worth reading, regardless of whether you agree or disagree.
The police are increasingly getting search warrants for information about all cellphones in a certain location at a certain time:
Police departments across the country have been knocking at Google’s door for at least the last two years with warrants to tap into the company’s extensive stores of cellphone location data. Known as “reverse location search warrants,” these legal mandates allow law enforcement to sweep up the coordinates and movements of every cellphone in a broad area. The police can then check to see if any of the phones came close to the crime scene. In doing so, however, the police can end up not only fishing for a suspect, but also gathering the location data of potentially hundreds (or thousands) of innocent people. There have only been anecdotal reports of reverse location searches, so it’s unclear how widespread the practice is, but privacy advocates worry that Google’s data will eventually allow more and more departments to conduct indiscriminate searches.
Of course, it’s not just Google who can provide this information.
I spend a lot of time talking about this sort of thing in Data and Goliath. Once you have everyone under surveillance all the time, many things are possible.
Ransomware has been making splashy headlines in recent years, with high-profile attacks such as WannaCry and NotPetya dominating the news. While these massive breaches are certainly terrifying, the more common attacks are actually inflicted on much smaller businesses, though on a large scale.
Large enterprises have substantial IT resources and dedicated security teams working to protect them; therefore, they are more likely to survive an attack or prevent one from happening before any damage is done. Overall business detection of malware rose by 79% in 2018, with major ransomware exploits SamSam and GandCrab targeting smaller organizations like hospitals, city services departments and consumer networks.
SMBs – Ideal Ransomware Targets
Smaller businesses may think that these attacks aren’t relevant to them. However, that would be far from the truth. SMBs tend to make ideal targets for cyber criminals because hackers are well aware that SMBs frequently lack the security that enterprises take seriously. Today, we are seeing more and more non-enterprise organizations being targeted with ransomware, since they house a lot of valuable, private data.
These SMBs are being approached in increasingly sophisticated ways, with phishing attacks being the most common attack vector for ransomware. While the traditional phishing email will try to trick users into providing personal and banking information, hackers are using less obvious phishing emails and more targeted spear phishing emails, as well as turning to social engineering and browser extensions to hide malicious code that will infect a user’s computer, which in turn infects the network it is connected to. For SMBs that do not have the IT expertise or a proper spam/phishing blocking solution in place, this can be a costly lesson to learn, and, in extreme cases, can ruin a business. Employees’ behavior can exacerbate the issue since SMBs often lack the resources to properly train them to understand what a phishing or malicious email looks like; this ignorance can inadvertently cause significant destruction for the business.
How Can Small Businesses Protect Themselves?
Have a clear, defined and regularly updated cybersecurity strategy. This means covering all points of entry and having an end-to-end solution on your network.
- Protect the network at the gateway with a next-generation firewall solution, which blocks spam, viruses, phishing and malware before they ever reach employees and their devices.
- Protect endpoints and ensure each endpoint has a security solution installed and regularly updated.
- Assign owners to check and update your security, especially if you are unable to hire dedicated IT or security staff.
- Back up data regularly to a safe source, preferably both onsite and offsite or in the cloud. If you have multiple copies of your data, you can recover via backup without having to worry about paying the ransom in the event of an attack.
- Arm yourself with information, and learn to spot suspicious websites, links, browser extensions and emails. Educating employees to not click on suspicious emails, or open attachments from unknown users, is a critical part of cybersecurity hygiene.
- Consider ransomware insurance, which has been growing in popularity in recent years.
- Lock down administrative rights, and keep systems and apps up to date with the latest patches to ensure vulnerabilities are not exploited.
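The backup advice above hinges on one detail that is easy to get wrong: a backup you have never verified may not restore. Below is a minimal sketch in Python of "copy and verify" using only the standard library; the paths and function names are illustrative, not part of any particular backup product.

```python
# Minimal "back up and verify" sketch using only the Python standard
# library. A hash check at backup time catches silent corruption
# before you need the backup, not after.
import hashlib
import shutil
from pathlib import Path


def sha256_of(path: Path) -> str:
    """Return the SHA-256 hex digest of a file's contents."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()


def backup_file(src: Path, backup_dir: Path) -> Path:
    """Copy src into backup_dir and verify the copy by hash.

    Raises IOError if the copy does not match the original.
    """
    backup_dir.mkdir(parents=True, exist_ok=True)
    dest = backup_dir / src.name
    shutil.copy2(src, dest)  # preserves timestamps as well as contents
    if sha256_of(src) != sha256_of(dest):
        raise IOError(f"backup verification failed for {src}")
    return dest
```

In practice you would run something like this on a schedule against both an onsite and an offsite/cloud destination, per the advice above; the point of the sketch is only that verification should be part of the backup step itself.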
Steps to Take if Ransomware Does Make It onto Your Network
This may seem counterintuitive, but don’t pay the ransom. Paying a cyber criminal doesn’t guarantee the recovery of your files, and many of the SMBs who have paid ransoms have reported being unable to recover data. If you are a victim of a malicious attack, ransomware or otherwise, it is important to lock down the network and devices to ensure it cannot spread further. Using powerful anti-malware solutions can help to identify and remove the ransomware. If you have backups, you can restore the data and systems that have been affected without paying the ransom.
Ransomware works; that’s why hackers keep honing their techniques. SMBs need to be especially careful when it comes to cybersecurity and should work with vendors that understand their unique security needs. The most important thing SMBs can do is protect the network at the gateway to keep ransomware from ever reaching users. Having safe, secure backups of information is like an insurance policy to provide access to critical data in the event of an attack. Last but not least, education is critical for users to understand threats, and IT personnel to deploy the proper defenses against them.
About the author: Timur Kovalev serves as the CTO at Untangle and is responsible for driving technology innovation and integration of gateway, endpoint, and cloud technologies. Timur brings over 20 years of experience across various technology stacks and applications. Prior to joining Untangle, Timur headed up Client and Threat Intelligence Technology at Webroot, where he led development of desktop and mobile solutions, cloud intelligence services, and research automation systems.
Copyright 2010 Respective Author at Infosec Island
Interesting — although short and not very detailed — article about Estonia’s volunteer cyber-defense militia.
Padar’s militia of amateur IT workers, economists, lawyers, and other white-hat types are grouped in the city of Tartu, about 65 miles from the Russian border, and in the capital, Tallinn, about twice as far from it. The volunteers, who’ve inspired a handful of similar operations around the world, are readying themselves to defend against the kind of sustained digital attack that could cause mass service outages at hospitals, banks, and military bases, and with other critical operations, including voting systems. Officially, the team is part of Estonia’s 26,000-strong national guard, the Defense League.
Formally established in 2011, Padar’s unit mostly runs on about €150,000 ($172,000) in annual state funding, plus salaries for him and four colleagues. (If that sounds paltry, remember that the country’s median annual income is about €12,000.) Some volunteers oversee a website that calls out Russian propaganda posing as news directed at Estonians in Estonian, Russian, English, and German. Other members recently conducted forensic analysis on an attack against a military system, while yet others searched for signs of a broader campaign after discovering vulnerabilities in the country’s electronic ID cards, which citizens use to check bank and medical records and to vote. (The team says it didn’t find anything, and the security flaws were quickly patched.)
Mostly, the volunteers run weekend drills with troops, doctors, customs and tax agents, air traffic controllers, and water and power officials. “Somehow, this model is based on enthusiasm,” says Andrus Ansip, who was prime minister during the 2007 attack and now oversees digital affairs for the European Commission. To gauge officials’ responses to realistic attacks, the unit might send out emails with sketchy links or drop infected USB sticks to see if someone takes the bait.
The Trump Administration has released a comprehensive National Cyber Strategy (NCS) that, if fully implemented, could address claims that the critical issue of current cyberspace threats is not being taken seriously enough. The report outlines a plan that spans all federal agencies, directing how they should work separately and in tandem with private industry and the public to detect and prevent cyber attacks before they happen, as well as mitigate damage in the aftermath.
The NCS is the first formal attempt in 15 years to plan and implement a national policy for the cyber arena and takes the form of a high-level policy statement rather than the more targeted method of a Presidential directive. The plan offers plenty in the way of big-picture goals, but critics will watch to see whether details emerge in the coming months and years to fill in the gaps with specific action.
With the release, the Administration formally recognizes that cyberspace has become such an entwined part of American society as to be functionally inseparable. The bottom line is that cybersecurity now falls under the larger umbrella of national security and is not considered a standalone entity.
Army Lt. Gen. Paul Nakasone, speaking at his recent confirmation hearing for the position of leader of U.S. Cyber Command and the secretive National Security Agency, emphasized the importance of this moment in our national history: “We are at a defining time for our Nation and our military…threats to the United States’ global advantage are growing — nowhere is this challenge more manifest than in cyberspace.”
Sifting through the digital pages of the NCS document reveals the Administration’s focus on the four conceptual pillars of National Security that now have been expanded to accommodate cyber concerns.
Pillar 1: Protecting and Securing the American Way of Life
Considering the present mashup state of the federal procurement process, the new aim is to secure government computer networks and information, primarily through tougher standards, cross-agency cooperation, and the strengthening of US government contractor systems and supply chain management. Electronic surveillance laws will also likely be bolstered, a reality that may result in the netting of more criminals but raises privacy concerns for those who think that the line has been smudged too many times in this area already.
Securing all levels of election infrastructure against hacks and misinformation falls into this category. If recent history is any indication, the coming 2020 presidential election will likely inspire a flurry of attempted cyber intrusions.
Pillar 2: Focus on American Prosperity
Operating on the assumption that economic security is intrinsically linked to national security, the NCS lays out a strategy to achieve financial strength through fortification of the technological ecosystem. Plans are to be developed to support and reward those in the marketplace who create, adopt, and push forward the innovation of online security processes.
Though debates over funds for national infrastructure are eternal, the discussion will now expand to include the security and promotion of technology infrastructure as well, especially as it relates to the 5G network protocol, quantum computing, blockchain technology, and artificial intelligence.
Pillar 3: Peace Through Strength
As the world becomes ever more digitized, criminals have moved offline operations into cyberspace. Perhaps unsurprisingly, the Trump Administration intends to push back hard against efforts by both nation-states and non-state actors to disrupt, deter, degrade, or destabilize the world.
National security advisor John Bolton, though refusing to specify operations or adversaries, emphasized the point to USA Today that aggressive action should be expected, saying, “We are going to do a lot of things offensively. Our adversaries need to know that.”
At least part of this offensive strategy will include the creation of an international law framework (called the CDI or Cyber Deterrence Initiative) that will be charged with policing cyberspace behavior and organizing a cooperative response for those who flout the standards. The CDI’s stated goals will be to counter sources of online disinformation and propaganda with its own brand of the same.
Pillar 4: Advance American Influence
By staking out an America-first role as thought and action leader in cyberspace, the NCS promises to take the lead in collaborating with like-minded partners to create and preserve a secure, free internet. Considering the well-known surveillance efforts of organizations like the Five Eyes, one can’t help but wonder if the term “internet freedom” is an oxymoron in the making with the government leading the way.
With the NCS, the Trump Administration has laid out a broad platform for addressing cybersecurity concerns. If it’s the down and dirty details of how exactly this will happen you seek, sorry to disappoint, but it’s not in there.
With the next big election close enough to smell, and Congress divided, little to nothing of legislative importance will likely unfold in the near future, including Democrats and Republicans finding the motivation to drag out their Crayons and fill in the president’s cybersecurity outline.
Until then, let’s hope the internet doesn’t implode under an onslaught of fake news, cat videos, and hackers gone wild. One thing you can bet your last dollar on — the topic of cybersecurity won’t go away. Like national security in general, it will remain eternal fodder for future politicians to bat around. As to whether the NCS will actually make a difference, only time will tell.
Meanwhile, Nero fiddles and Rome burns.
About the author: A former defense contractor for the US Navy, Sam Bocetta turned to freelance journalism in retirement, focusing his writing on US diplomacy and national security, as well as technology trends in cyberwarfare, cyberdefense, and cryptography.
When building a threat intelligence team you will face a range of challenges and problems. One of the most significant is how best to take on the ever-growing amount of threat intel. It might sound like a luxurious problem to have: the more intel the better! But if you take a closer look at what the available threat intelligence supply looks like, or rather, the way it is packaged, the problem becomes apparent. Ideally, you would want to take this ever-growing field of threat intelligence supply and work to converge on a central data model – specifically, STIX (Structured Threat Information eXpression). STIX is an open standard language supported by the OASIS open standards body, designed to represent structured information about cyber threats.
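To make the data model concrete: a STIX 2.1 object is, on the wire, just JSON with standardized field names. The sketch below builds a minimal Indicator as a plain Python dict; the field names follow the STIX 2.1 specification, but the pattern and name are invented examples (in practice you would likely use OASIS's stix2 library rather than hand-rolling dicts).

```python
# Hedged sketch of a STIX 2.1 Indicator as plain JSON-ready data.
# Field names follow the STIX 2.1 spec; values are illustrative.
import json
import uuid
from datetime import datetime, timezone


def make_indicator(pattern: str, name: str) -> dict:
    """Build a minimal STIX 2.1 Indicator object as a dict."""
    now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.%fZ")
    return {
        "type": "indicator",
        "spec_version": "2.1",
        "id": f"indicator--{uuid.uuid4()}",  # random UUID per spec
        "created": now,
        "modified": now,
        "name": name,
        "pattern": pattern,
        "pattern_type": "stix",
        "valid_from": now,
    }


indicator = make_indicator(
    "[ipv4-addr:value = '203.0.113.5']",  # documentation-range IP, not real intel
    "Suspected C2 address",
)
print(json.dumps(indicator, indent=2))
```

Everything a supplier feed gives you ultimately has to map into objects like this one, which is why converging on the model early matters.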
This isn’t a solo effort, so first the intelligence team needs to align properly with the open standards bodies. I was thrilled to deliver our theories around STIX data modeling to the OASIS and FIRST communities at the Borderless Cyber Conference in Prague in 2017. (The slides from this are available for download here.) Our team took this to the next level as we started to include not just standard data structures in our work, but standardized libraries, including MITRE’s ATT&CK (Adversarial Tactics, Techniques & Common Knowledge) framework that now forms a core part of our TTP (and, to some extent, Threat Actor) mapping across our knowledge base. We couldn’t have done it without the awesome folk at OASIS and MITRE. Those communities are still our cultural home.
So far, so good… but largely academic. The one thing I always say to teams who start planning their CTI journeys is: “Deploy your theory to practice ASAP – because it will change.” CTI suppliers know this all too well. In the ensuing months, our threat intel team faced the challenge of merging these supplier sources into a centralized knowledge base. We’re currently up to 38 unique source organizations (with 50+ unique feeds across those suppliers), around a third of those being top-flight commercial suppliers. And, of course, even in this age of STIX, and MISP, we still see the full spectrum of implementations from those suppliers. Don’t get me wrong – universal STIX adoption is a utopia (this is my version of ‘memento mori’ that I should get my team to say to me every time I go on my evangelism sprees). And we should not expect all suppliers to ‘conform’ in some totalitarian way. But here is my question to you: Who designs your data model? I would love to meet them.
Now here’s the thing: If you’re anything like my boss, you probably don’t care how the data model is implemented – so long as the customer can get the data fields they need from your feed, what does it matter? REST + JSON everywhere, right? But the future doesn’t look like that. The one thing that the STIX standard is teaching people better than most other structured languages is the importance of decentralization. I should be able to use the STIX model to build intelligence in one location and have it be semantically equivalent (though not necessarily identical) to intelligence built by a different analyst in another location. The two outputs should be logically similar – recognizably so, by some form of automated interpretation that doesn’t require polymorphism or a cryptomining rig to calculate – but different enough to capture the unique artistry of the analysts who created them. Those automatically discernible differences are the pinnacle of a shared, structured-intelligence knowledge base that will keep our data relevant, allow for automated cross-referencing and take the industry to the next level.
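A toy illustration of that point (this is not the STIX semantic-equivalence algorithm itself, just a hand-rolled sketch with invented data): two analysts independently record the same intelligence, and their objects differ in identifier, naming, and formatting, yet an automated comparison that ignores analyst-specific fields still recognizes them as the same.

```python
# Toy sketch of semantic comparison between two analysts' STIX-style
# indicators. The ids, names, and pattern are invented examples.
def semantic_key(obj: dict) -> tuple:
    """Reduce an indicator to the fields that carry its meaning,
    ignoring analyst-specific identifiers, names, and whitespace."""
    return (obj["type"], obj["pattern_type"], obj["pattern"].replace(" ", ""))


analyst_a = {
    "type": "indicator",
    "id": "indicator--00000000-0000-4000-8000-000000000001",
    "name": "Known-bad IP",
    "pattern": "[ipv4-addr:value = '198.51.100.7']",
    "pattern_type": "stix",
}

analyst_b = {
    "type": "indicator",
    "id": "indicator--00000000-0000-4000-8000-000000000002",
    "name": "C2 infrastructure node",
    "pattern": "[ipv4-addr:value='198.51.100.7']",  # same meaning, different spacing
    "pattern_type": "stix",
}

# Different objects, recognizably the same underlying intelligence:
assert analyst_a["id"] != analyst_b["id"]
assert semantic_key(analyst_a) == semantic_key(analyst_b)
```

A real implementation would need far more nuance (pattern normalization, confidence, relationships), but the principle is the one described above: automated cross-referencing over a shared model, without forcing every analyst to produce byte-identical output.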
There is a downside, of course. The cost of implementation is the first hurdle – it may mean reengineering a data model and maybe even complete rebuilds of knowledge repositories. With any luck, it can just be a semantic modelling (similar to what I presented at Borderless Cyber, but instead of STIX 1.2 → STIX 2.1, just → STIX 2.1) that you can describe with some simple mapping and retain your retcon. But perhaps the biggest elephant in the room is that aligning all suppliers to a common data model means leaving people open to de-duplication and cross-referencing. As we start to unify our data models, that “super-secret source” that was actually just a re-package of some low-profile, open source feed is going to get doxed. We think this is a good thing – data quality, uniqueness and provenance will speak for themselves, and those suppliers who vend noise will lose business. This should be an opportunity rather than a threat, and hopefully it will reinforce supplier business models to provide truly valuable intelligence to customers.
About the author: Chris O’Brien is the Director of Intelligence Operations at EclecticIQ. Prior to his current role, Chris held the post of Deputy Technical Director at NCSC UK specialising in technical knowledge management to support rapid response to cyber incidents.
Until recently, Chief Executive Officers (CEOs) received information and reports encouraging them to consider information and cyber security risk. However, not all of them understood how to respond to those risks and the implications for their organizations. A thorough understanding of what happened, and why it is necessary to properly understand and respond to underlying risks, is needed by the CEO, as well as all members of an organization’s board of directors (BoD), in today’s global business climate. Without this understanding, risk analyses and resulting decisions may be flawed, leading organizations to take on greater risk than intended.
After reviewing the current threat landscape, I want to call specific attention to four prevalent areas of information security that all CEOs need to be familiar with in the day to day running of their organization.
Cyberspace is an increasingly attractive hunting ground for criminals, activists and terrorists motivated to make money, get noticed, cause disruption or even bring down corporations and governments through online attacks. Over the past few years, we’ve seen cybercriminals demonstrating a higher degree of collaboration amongst themselves and a degree of technical competency that caught many large organizations unawares.
CEOs must be prepared for the unpredictable so they have the resilience to withstand unforeseen, high impact events. Cybercrime, along with the increase in online causes (hacktivism), the increase in cost of compliance to deal with the uptick in regulatory requirements coupled with the relentless advances in technology against a backdrop of under investment in security departments, can all combine to cause the perfect threat storm. Organizations that identify what the business relies on most will be well placed to quantify the business case to invest in resilience, therefore minimizing the impact of the unforeseen.
Avoiding Reputational Damage
Attackers have become more organized, attacks have become more sophisticated, and all threats are more dangerous, and pose more risks, to an organization’s reputation. In addition, brand reputation and the trust dynamic that exists amongst suppliers, customers and partners have appeared as very real targets for the cybercriminal and hacktivist. With the speed and complexity of the threat landscape changing on a daily basis, all too often we’re seeing businesses being left behind, sometimes in the wake of reputational and financial damage.
CEOs need to ensure they are fully prepared to deal with these ever-emerging challenges by equipping their organizations better to deal with attacks on their reputations. This may seem obvious, but the faster you can respond to these attacks on reputation, the better your outcomes will be.
Securing the Supply Chain
When I look for key areas where information security may be lacking, one place I always come back to is the supply chain. Supply chains are the backbone of today’s global economy and businesses are increasingly concerned about managing major supply chain disruptions. Rightfully so, CEOs should be concerned about how open their supply chains are to various risk factors. Businesses must focus on the most vulnerable spots in their supply chains now. The unfortunate reality of today’s complex global marketplace is that not every security compromise can be prevented beforehand.
Being proactive now also means that you – and your suppliers – will be better able to react quickly and intelligently when something does happen. In extreme but entirely possible scenarios, this readiness and resiliency may dictate competitiveness, financial health, share price, or even business survival.
Employee Awareness and Embedded Behavior
Organizations continue to heavily invest in ‘developing human capital’. No CEO’s speech or annual report would be complete without stating its value. The implicit idea behind this is that awareness and training always deliver some kind of value with no need to prove it – employee satisfaction was considered enough. This is no longer the case. Today’s CEOs often demand return on investment forecasts for the projects that they have to choose between, and awareness and training are no exception. Evaluating and demonstrating their value is becoming a business imperative. Unfortunately, there is no single process or method for introducing information security behavior change, as organizations vary so widely in their demographics, previous experiences and achievements and goals.
While many organizations have compliance activities which fall under the general heading of ‘security awareness’, the real commercial driver should be risk, and how new behaviors can reduce that risk. The time is right and the opportunity to shift away from awareness to tangible behaviors has never been greater. CEOs have become more cyber-savvy, and regulators and stakeholders continually push for stronger governance, particularly in the area of risk management. Moving to behavior change will provide the CISO with the ammunition needed to provide positive answers to questions that are likely to be posed by the CEO and other members of the senior management team.
Stay Ahead of Possible Security Stumbling Blocks
Businesses of all shapes and sizes are operating in a progressively cyber-enabled world and traditional risk management isn’t agile enough to deal with the risks from activity in cyberspace. Enterprise risk management must be extended to create risk resilience, built on a foundation of preparedness, that evaluates the threat vectors from a position of business acceptability and risk profiling.
Organizations have varying degrees of control over evolving security threats and with the speed and complexity of the threat landscape changing on a daily basis, far too often I’m seeing businesses getting left behind, sometimes in the wake of reputational and financial damage. CEOs need to take the lead and take stock now in order to ensure that their organizations are better prepared and engaged to deal with these ever-emerging challenges.
About the author: Steve Durbin is Managing Director of the Information Security Forum (ISF). His main areas of focus include strategy, information technology, cyber security and the emerging security threat landscape across both the corporate and personal environments. Previously, he was senior vice president at Gartner.
Posted by Andrew Ahn, Product Manager, Google Play
[Cross-posted from the Android Developers Blog]
Google Play is committed to providing a secure and safe platform for billions of Android users on their journey discovering and experiencing the apps they love and enjoy. To deliver against this commitment, we worked last year to improve our abuse detection technologies and systems, and significantly increased our team of product managers, engineers, policy experts, and operations leaders to fight against bad actors.
In 2018, we introduced a series of new policies to protect users from new abuse trends, detected and removed malicious developers faster, and stopped more malicious apps from entering the Google Play Store than ever before. The number of rejected app submissions increased by more than 55 percent, and we increased app suspensions by more than 66 percent. These increases can be attributed to our continued efforts to tighten policies to reduce the number of harmful apps on the Play Store, as well as our investments in automated protections and human review processes that play critical roles in identifying and enforcing on bad apps.
In addition to identifying and stopping bad apps from entering the Play Store, our Google Play Protect system now scans over 50 billion apps on users’ devices each day to make sure apps installed on the device aren’t behaving in harmful ways. With such protection, apps from Google Play are eight times less likely to harm a user’s device than Android apps from other sources.
Here are some areas we’ve been focusing on in the last year and that will continue to be a priority for us in 2019:
Protecting User Privacy
Protecting users’ data and privacy is a critical factor in building user trust. We’ve long required developers to limit their device permission requests to what’s necessary to provide the features of an app. Also, to help users understand how their data is being used, we’ve required developers to provide prominent disclosures about the collection and use of sensitive user data. Last year, we rejected or removed tens of thousands of apps that weren’t in compliance with Play’s policies related to user data and privacy.
In October 2018, we announced a new policy restricting the use of the SMS and Call Log permissions to a limited number of cases, such as where an app has been selected as the user’s default app for making calls or sending text messages. We’ve recently started to remove apps from Google Play that violate this policy. We plan to introduce additional policies for device permissions and user data throughout 2019.
We find that over 80% of severe policy violations are conducted by repeat offenders and abusive developer networks. When malicious developers are banned, they often create new accounts or buy developer accounts on the black market in order to come back to Google Play. We’ve further enhanced our clustering and account matching technologies, and by combining these technologies with the expertise of our human reviewers, we’ve made it more difficult for spammy developer networks to gain installs by blocking their apps from being published in the first place.
Harmful app contents and behaviors
As mentioned in last year’s blog post, we fought against hundreds of thousands of impersonators, apps with inappropriate content, and Potentially Harmful Applications (PHAs). In a continued fight against these types of apps, not only do we apply advanced machine learning models to spot suspicious apps, we also conduct static and dynamic analyses, intelligently use user engagement and feedback data, and leverage skilled human reviews, which have helped in finding more bad apps with higher accuracy and efficiency.
Despite our enhanced and added layers of defense against bad apps, we know bad actors will continue to try to evade our systems by changing their tactics and cloaking bad behaviors. We will continue to enhance our capabilities to counter such adversarial behavior, and work relentlessly to provide our users with a secure and safe app store.
by nsadmin on February 13, 2019
I did a podcast with Mark Miller over at DevSecOps days. It was a fun conversation, and you can have a listen at “Anticipating Failure through Threat Modeling w/ Adam Shostack.”
I had not heard about this case before. Zurich Insurance has refused to pay Mondelez International’s claim of $100 million in damages from NotPetya. It claims the attack was an act of war and therefore not covered. Mondelez is suing.
Those turning to cyber insurance to manage their exposure presently face significant uncertainties about its promise. First, the scope of cyber risks vastly exceeds available coverage, as cyber perils cut across most areas of commercial insurance in an unprecedented manner: direct losses to policyholders and third-party claims (clients, customers, etc.); financial, physical and IP damages; business interruption, and so on. Yet no cyber insurance policies cover this entire spectrum. Second, the scope of cyber-risk coverage under existing policies, whether traditional general liability or property policies or cyber-specific policies, is rarely comprehensive (to cover all possible cyber perils) and often unclear (i.e., it does not explicitly pertain to all manifestations of cyber perils, or it explicitly excludes some).
But it is in the public interest for Zurich and its peers to expand their role in managing cyber risk. In its ideal state, a mature cyber insurance market could go beyond simply absorbing some of the damage of cyberattacks and play a more fundamental role in engineering and managing cyber risk. It would allow analysis of data across industries to understand risk factors and develop common metrics and scalable solutions. It would allow researchers to pinpoint sources of aggregation risk, such as weak spots in widely relied-upon software and hardware platforms and services. Through its financial levers, the insurance industry can turn these insights into action, shaping private-sector behavior and promoting best practices internationally. Such systematic efforts to improve and incentivize cyber-risk management would redress the conditions that made NotPetya possible in the first place. This, in turn, would diminish the onus on governments to retaliate against attacks.
A man-in-the-middle (MitM) attack is when an attacker intercepts communications between two parties either to secretly eavesdrop or modify traffic traveling between the two. Attackers might use MitM attacks to steal login credentials or personal information, spy on the victim, or sabotage communications or corrupt data.
“MITM attacks are a tactical means to an end,” says Zeki Turedi, technology strategist, EMEA at CrowdStrike. “The aim could be spying on individuals or groups, or redirecting efforts, funds, resources, or attention.”
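To see concretely why interception works when neither party authenticates the other, here is a toy Python sketch of an attacker relaying an unauthenticated Diffie-Hellman key exchange. The parameters and secrets are illustrative assumptions only; nothing here is cryptographically secure.

```python
# Toy unauthenticated Diffie-Hellman exchange, far too small for real use,
# showing how a man-in-the-middle ("Mallory") can sit between two parties
# who never verify who they are actually talking to.
P, G = 997, 5                       # illustrative public prime and generator

def dh_public(secret):
    return pow(G, secret, P)        # the value each party sends over the wire

def dh_shared(their_public, secret):
    return pow(their_public, secret, P)

# Honest exchange: Alice and Bob derive the same shared key.
a, b = 123, 456
assert dh_shared(dh_public(b), a) == dh_shared(dh_public(a), b)

# Attack: Mallory intercepts each public value and substitutes her own,
# so each victim unknowingly establishes a session with Mallory instead.
m = 789
key_alice_side = dh_shared(dh_public(m), a)   # Alice believes this is Bob's key
key_bob_side = dh_shared(dh_public(m), b)     # Bob believes this is Alice's key
assert key_alice_side == dh_shared(dh_public(a), m)  # Mallory shares Alice's key
assert key_bob_side == dh_shared(dh_public(b), m)    # ...and Bob's, separately
```

Authenticating the exchanged values — for example with certificates, as TLS does — is what closes this gap, which is why MitM defenses center on verifying identity rather than just encrypting.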
In his 2008 white paper that first proposed bitcoin, the anonymous Satoshi Nakamoto concluded with: “We have proposed a system for electronic transactions without relying on trust.” He was referring to blockchain, the system behind bitcoin cryptocurrency. The circumvention of trust is a great promise, but it’s just not true. Yes, bitcoin eliminates certain trusted intermediaries that are inherent in other payment systems like credit cards. But you still have to trust bitcoin — and everything about it.
Much has been written about blockchains and how they displace, reshape, or eliminate trust. But when you analyze both blockchain and trust, you quickly realize that there is much more hype than value. Blockchain solutions are often much worse than what they replace.
First, a caveat. By blockchain, I mean something very specific: the data structures and protocols that make up a public blockchain. These have three essential elements. The first is a distributed (as in multiple copies) but centralized (as in there’s only one) ledger, which is a way of recording what happened and in what order. This ledger is public, meaning that anyone can read it, and immutable, meaning that no one can change what happened in the past.
The second element is the consensus algorithm, which is a way to ensure all the copies of the ledger are the same. This is generally called mining; a critical part of the system is that anyone can participate. It is also distributed, meaning that you don’t have to trust any particular node in the consensus network. It can also be extremely expensive, both in data storage and in the energy required to maintain it. Bitcoin has the most expensive consensus algorithm the world has ever seen, by far.
Finally, the third element is the currency. This is some sort of digital token that has value and is publicly traded. Currency is a necessary element of a blockchain to align the incentives of everyone involved. Transactions involving these tokens are stored on the ledger.
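As a rough illustration of the first two elements, a hash-linked ledger and a vastly simplified proof-of-work rule can be sketched in a few lines of Python. This is a toy under my own assumptions — it omits the currency, networking, and real difficulty — not how Bitcoin is actually implemented.

```python
import hashlib
import json

def block_hash(block):
    # Hash the block's canonical JSON encoding.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def mine(prev_hash, txs, difficulty=2):
    # Toy proof-of-work: search for a nonce whose block hash starts with zeros.
    nonce = 0
    while True:
        block = {"prev": prev_hash, "txs": txs, "nonce": nonce}
        if block_hash(block).startswith("0" * difficulty):
            return block
        nonce += 1

def valid_chain(chain, difficulty=2):
    for prev, block in zip(chain, chain[1:]):
        if block["prev"] != block_hash(prev):
            return False    # link broken: a past block was altered
        if not block_hash(block).startswith("0" * difficulty):
            return False    # proof-of-work no longer checks out
    return True

genesis = mine("0" * 64, ["coinbase -> alice"])
b1 = mine(block_hash(genesis), ["alice -> bob: 5"])
chain = [genesis, b1]
assert valid_chain(chain)

# Tampering with history invalidates every later link, which is what
# makes the public ledger "immutable" in practice.
genesis["txs"][0] = "coinbase -> mallory"
assert not valid_chain(chain)
```

The point of the sketch is the linkage: each block commits to the hash of its predecessor, so rewriting the past means redoing all the proof-of-work that follows.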
Private blockchains are completely uninteresting. (By this, I mean systems that use the blockchain data structure but don’t have the above three elements.) In general, they have some external limitation on who can interact with the blockchain and its features. These are not anything new; they’re distributed append-only data structures with a list of individuals authorized to add to it. Consensus protocols have been studied in distributed systems for more than 60 years. Append-only data structures have been similarly well covered. They’re blockchains in name only, and — as far as I can tell — the only reason to operate one is to ride on the blockchain hype.
All three elements of a public blockchain fit together as a single network that offers new security properties. The question is: Is it actually good for anything? It’s all a matter of trust.
Trust is essential to society. As a species, humans are wired to trust one another. Society can’t function without trust, and the fact that we mostly don’t even think about it is a measure of how well trust works.
The word “trust” is loaded with many meanings. There’s personal and intimate trust. When we say we trust a friend, we mean that we trust their intentions and know that those intentions will inform their actions. There’s also the less intimate, less personal trust — we might not know someone personally, or know their motivations, but we can trust their future actions. Blockchain enables this sort of trust: We don’t know any bitcoin miners, for example, but we trust that they will follow the mining protocol and make the whole system work.
Most blockchain enthusiasts have an unnaturally narrow definition of trust. They’re fond of catchphrases like “in code we trust,” “in math we trust,” and “in crypto we trust.” This is trust as verification. But verification isn’t the same as trust.
In 2012, I wrote a book about trust and security, Liars and Outliers. In it, I listed four very general systems our species uses to incentivize trustworthy behavior. The first two are morals and reputation. The problem is that they scale only to a certain population size. Primitive systems were good enough for small communities, but larger communities required delegation, and more formalism.
The third is institutions. Institutions have rules and laws that induce people to behave according to the group norm, imposing sanctions on those who do not. In a sense, laws formalize reputation. Finally, the fourth is security systems. These are the wide varieties of security technologies we employ: door locks and tall fences, alarm systems and guards, forensics and audit systems, and so on.
These four elements work together to enable trust. Take banking, for example. Financial institutions, merchants, and individuals are all concerned with their reputations, which prevents theft and fraud. The laws and regulations surrounding every aspect of banking keep everyone in line, including backstops that limit risks in the case of fraud. And there are lots of security systems in place, from anti-counterfeiting technologies to internet-security technologies.
In his 2018 book, Blockchain and the New Architecture of Trust, Kevin Werbach outlines four different “trust architectures.” The first is peer-to-peer trust. This basically corresponds to my morals and reputational systems: pairs of people who come to trust each other. His second is leviathan trust, which corresponds to institutional trust. You can see this working in our system of contracts, which allows parties that don’t trust each other to enter into an agreement because they both trust that a government system will help resolve disputes. His third is intermediary trust. A good example is the credit card system, which allows untrusting buyers and sellers to engage in commerce. His fourth trust architecture is distributed trust. This is emergent trust in the particular security system that is blockchain.
What blockchain does is shift some of the trust in people and institutions to trust in technology. You need to trust the cryptography, the protocols, the software, the computers and the network. And you need to trust them absolutely, because they’re often single points of failure.
When that trust turns out to be misplaced, there is no recourse. If your bitcoin exchange gets hacked, you lose all of your money. If your bitcoin wallet gets hacked, you lose all of your money. If you forget your login credentials, you lose all of your money. If there’s a bug in the code of your smart contract, you lose all of your money. If someone successfully hacks the blockchain security, you lose all of your money. In many ways, trusting technology is harder than trusting people. Would you rather trust a human legal system or the details of some computer code you don’t have the expertise to audit?
Blockchain enthusiasts point to more traditional forms of trust — bank processing fees, for example — as expensive. But blockchain trust is also costly; the cost is just hidden. For bitcoin, that’s the cost of the additional bitcoin mined, the transaction fees, and the enormous environmental waste.
Blockchain doesn’t eliminate the need to trust human institutions. There will always be a big gap that can’t be addressed by technology alone. People still need to be in charge, and there is always a need for governance outside the system. This is obvious in the ongoing debate about changing the bitcoin block size, or in fixing the DAO attack against Ethereum. There’s always a need to override the rules, and there’s always a need for the ability to make permanent rules changes. As long as hard forks are a possibility — that’s when the people in charge of a blockchain step outside the system to change it — people will need to be in charge.
Any blockchain system will have to coexist with other, more conventional systems. Modern banking, for example, is designed to be reversible. Bitcoin is not. That makes it hard to make the two compatible, and the result is often an insecurity. Steve Wozniak was scammed out of $70K in bitcoin because he forgot this.
Blockchain technology is often centralized. Bitcoin might theoretically be based on distributed trust, but in practice, that’s just not true. Just about everyone using bitcoin has to trust one of the few available wallets and use one of the few available exchanges. People have to trust the software and the operating systems and the computers everything is running on. And we’ve seen attacks against wallets and exchanges. We’ve seen Trojans and phishing and password guessing. Criminals have even used flaws in the system that people use to repair their cell phones to steal bitcoin.
Moreover, in any distributed trust system, there are backdoor methods for centralization to creep back in. With bitcoin, there are only a few miners of consequence. There’s one company that provides most of the mining hardware. There are only a few dominant exchanges. To the extent that most people interact with bitcoin, it is through these centralized systems. This also allows for attacks against blockchain-based systems.
These issues are not bugs in current blockchain applications, they’re inherent in how blockchain works. Any evaluation of the security of the system has to take the whole socio-technical system into account. Too many blockchain enthusiasts focus on the technology and ignore the rest.
To the extent that people don’t use bitcoin, it’s because they don’t trust bitcoin. That has nothing to do with the cryptography or the protocols. In fact, a system where you can lose your life savings if you forget your key or download a piece of malware is not particularly trustworthy. No amount of explaining how SHA-256 works to prevent double-spending will fix that.
Similarly, to the extent that people do use blockchains, it is because they trust them. People either own bitcoin or not based on reputation; that’s true even for speculators who own bitcoin simply because they think it will make them rich quickly. People choose a wallet for their cryptocurrency, and an exchange for their transactions, based on reputation. We even evaluate and trust the cryptography that underpins blockchains based on the algorithms’ reputation.
To see how this can fail, look at the various supply-chain security systems that are using blockchain. A blockchain isn’t a necessary feature of any of them. The reason they’re successful is that everyone has a single software platform to enter their data into. Even though the blockchain systems are built on distributed trust, people don’t necessarily accept that. For example, some companies don’t trust the IBM/Maersk system because it’s not their blockchain.
Irrational? Maybe, but that’s how trust works. It can’t be replaced by algorithms and protocols. It’s much more social than that.
Still, the idea that blockchains can somehow eliminate the need for trust persists. Recently, I received an email from a company that implemented secure messaging using blockchain. It said, in part: “Using the blockchain, as we have done, has eliminated the need for Trust.” This sentiment suggests the writer misunderstands both what blockchain does and how trust works.
Do you need a public blockchain? The answer is almost certainly no. A blockchain probably doesn’t solve the security problems you think it solves. The security problems it solves are probably not the ones you have. (Manipulating audit data is probably not your major security risk.) A false trust in blockchain can itself be a security risk. The inefficiencies, especially in scaling, are probably not worth it. I have looked at many blockchain applications, and all of them could achieve the same security properties without using a blockchain — of course, then they wouldn’t have the cool name.
Honestly, cryptocurrencies are useless. They’re only used by speculators looking for quick riches, people who don’t like government-backed currencies, and criminals who want a black-market way to exchange money.
To answer the question of whether the blockchain is needed, ask yourself: Does the blockchain change the system of trust in any meaningful way, or just shift it around? Does it just try to replace trust with verification? Does it strengthen existing trust relationships, or try to go against them? How can trust be abused in the new system, and is this better or worse than the potential abuses in the old system? And lastly: What would your system look like if you didn’t use blockchain at all?
If you ask yourself those questions, it’s likely you’ll choose solutions that don’t use public blockchain. And that’ll be a good thing — especially when the hype dissipates.
This essay previously appeared on Wired.com.
I have wanted to write this essay for over a year. The impetus to finally do it came from an invite to speak at the Hyperledger Global Forum in December. This essay is a version of the talk I wrote for that event, made more accessible to a general audience.
It seems to be the season for blockchain takedowns. James Waldo has an excellent essay in Queue. And Nicholas Weaver gave a talk at the Enigma Conference, summarized here. It’s a shortened version of this talk.
Threats to online security are constantly evolving, and organisations are more aware than ever of the risks these threats pose. But no matter how seriously cyber security is viewed by most businesses, many still fall short of properly addressing some of the biggest issues. In fact, recent figures from the government show that over four in ten UK businesses have suffered a cyber breach or attack within the last 12 months.
Two of the most common attacks are due to issues with basic computer hygiene, including fraudulent emails and cyber criminals impersonating organisations. The bigger question isn’t how to secure your business, but who takes ownership of the cyber security process.
Not just the IT department’s responsibility
The responsibility for an organisation’s cyber security often falls on the IT department, which historically dealt with the security of IT systems. At face value this makes sense – as the resident tech experts, the IT department is often best positioned to choose the tools and solutions that make a business secure.
In general, these tools serve the purpose of assessing and encrypting your sensitive information, or blocking malicious activity at the source. But cyber threats can often begin outside the IT department. It only takes a single staff member opening a malicious attachment or clicking on a link in a phishing email for hackers to find a way in, and sometimes even the most sophisticated cyber security solutions can’t prevent this.
This makes it next to impossible for the IT department to keep the entire organisation secure, since they can’t be constantly monitoring every person’s click of the mouse. The onus, therefore, falls on every single staff member within the organisation to be cyber aware.
Do the board need to be involved?
High-profile, malicious attacks, such as WannaCry and NotPetya, have grown increasingly prolific in recent years. The potentially devastating effects of these attacks have meant that cyber security has become an integral facet of an organisation’s risk assessment and management.
But despite the prevalence of these successful attacks, there is often still a lack of understanding amongst some board members when it comes to tackling these threats – in fact, our analysis found that only 30% of senior leadership teams have an in-depth understanding of the risks associated with evolving cyber threats.
Flagging the importance of cyber awareness with the board is therefore essential, particularly to increase their awareness of the most common cyber threats and any potential security gaps. More pressingly, the board often have direct access to the most sensitive data within your organisation, which makes them the perfect target for potential cyber criminals. Arming the board with the tools and knowledge to spot potentially malicious emails, links or attachments – in the same way that you would the rest of the organisation – could help to prevent potentially disastrous consequences.
It’s everybody’s responsibility
Although cyber security certainly does need to be a board-level concern, it’s still important to remember that the safety of your organisation is everybody’s responsibility. As a security and technology expert within the business, you have an integral role in ensuring that everybody’s knowledge is up to scratch.
Thoroughly educating staff on the warning signs to look out for in order to spot a malicious email, or activities that they should avoid when using business devices can greatly improve the overall cyber security of your business. When combined with encryption, and other online security tools, the likelihood of experiencing a cyber attack can be greatly diminished. Cyber security is everybody’s responsibility – make sure that staff have the tools, and the knowledge, to do it properly.
Matt Johnson is Chief Technology Officer at Intercity Technology. With over 25 years’ business and technical experience in providing IT solutions, Matt’s expertise covers the design, implementation, support and management of complex communications networks.
Biometric authentication uses physical or behavioral human characteristics to digitally identify a person to grant access to systems, devices or data. Examples of these biometric identifiers are fingerprints, facial patterns, voice or typing cadence. Each of these identifiers is considered unique to the individual, and they may be used in combination to ensure greater accuracy of identification.
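One way to picture the matching step: unlike a password check, a biometric comparison asks whether a fresh capture is close enough to an enrolled template, never whether it is exactly equal. Below is a toy Python sketch with made-up feature vectors and an illustrative threshold; real systems use specialized feature extractors and tuned decision thresholds.

```python
import math

def cosine_similarity(a, b):
    # Similarity of two feature vectors, 1.0 meaning identical direction.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def matches(enrolled, capture, threshold=0.95):
    # Biometric matching is probabilistic: the comparison is "close enough",
    # which is why thresholds trade false accepts against false rejects.
    return cosine_similarity(enrolled, capture) >= threshold

enrolled = [0.9, 0.1, 0.4, 0.7]            # hypothetical stored template
same_person = [0.88, 0.12, 0.41, 0.69]     # noisy re-capture of the same trait
impostor = [0.1, 0.9, 0.7, 0.2]            # a different person's capture

assert matches(enrolled, same_person)
assert not matches(enrolled, impostor)
```

Combining several such identifiers, as the paragraph above notes, amounts to fusing multiple match scores, which is how systems push accuracy higher than any single trait allows.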
Zack Whittaker reports via TechCrunch: A reader contacted TechCrunch after his [OkCupid] account was hacked. The reader, who did not want to be named, said the hacker broke in and changed his password, locking him out of his account. Worse, they changed his email address on file, preventing him from resetting his password. OkCupid didn’t send an email to confirm the address change — it just blindly accepted the change. “Unfortunately, we’re not able to provide any details about accounts not connected to your email address,” said OkCupid’s customer service in response to his complaint, which he forwarded to TechCrunch. Then the hacker started harassing him with strange text messages sent to his phone number, which was lifted from one of his private messages. It wasn’t an isolated case. We found several cases of people saying their OkCupid account had been hacked.
But several users couldn’t explain how their passwords — unique to OkCupid and not used on any other app or site — were inexplicably obtained. “There has been no security breach at OkCupid,” said Natalie Sawyer, a spokesperson for OkCupid. “All websites constantly experience account takeover attempts. There has been no increase in account takeovers on OkCupid.” Even on OkCupid’s own support pages, the company says that account takeovers often happen because someone has an account owner’s login information. “If you use the same password on several different sites or services, then your accounts on all of them have the potential to be taken over if one site has a security breach,” says the support page. In fact, when we checked, OkCupid was just one of many major dating sites — like Match, PlentyOfFish, Zoosk, Badoo, JDate, and eHarmony — that didn’t use two-factor authentication at all.
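For context on what two-factor authentication adds here: time-based one-time passwords (TOTP, RFC 6238) derive a short-lived code from a shared secret, so a reused or stolen password alone no longer suffices for a takeover. A minimal Python sketch, checked against the RFC’s published test secret:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password (RFC 4226)."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                   # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, unix_time: int, step: int = 30, digits: int = 6) -> str:
    """Time-based one-time password (RFC 6238): HOTP over a time-step counter."""
    return hotp(secret, unix_time // step, digits)

# RFC 6238's SHA-1 test vector: secret "12345678901234567890" at
# Unix time 59 yields the 8-digit code 94287082.
assert totp(b"12345678901234567890", 59, digits=8) == "94287082"
```

Because the code changes every 30 seconds, an attacker replaying a password harvested from another site’s breach would still be locked out — which is why the absence of any second factor at these dating sites is notable.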
Read more of this story at Slashdot.
An anonymous reader quotes Forbes:
A massive majority of consumers believe that using their data to personalize ads is unethical. And a further 76% believe that personalization to create tailored newsfeeds — precisely what Facebook, Twitter, and other social applications do every day — is unethical.
At least, that’s what they say on surveys.
RSA surveyed 6,000 adults in Europe and America to evaluate how our attitudes are changing towards data, privacy, and personalization. The results don’t look good for surveillance capitalism, or for the free services we rely on every day for social networking, news, and information-finding. “Less than half (48 percent) of consumers believe there are ethical ways companies can use their data,” RSA, a fraud prevention and security company, said when releasing the survey results. Oh, and when a company gets hacked? Consumers blame the company, not the hacker, the report says.
Read more of this story at Slashdot.
“While it’s creepy to imagine companies are listening in to your conversations, it’s perhaps more creepy that they can predict what you’re talking about without actually listening,” writes an NBC News technology correspondent, arguing that data, not privacy, is the real danger.
Your data — the abstract portrait of who you are, and, more importantly, of who you are compared to other people — is your real vulnerability when it comes to the companies that make money offering ostensibly free services to millions of people. Not because your data will compromise your personal identity. But because it will compromise your personal autonomy. “Privacy as we normally think of it doesn’t matter,” said Aza Raskin, co-founder of the Center for Humane Technology [and a former Mozilla team leader]. “What these companies are doing is building little models, little avatars, little voodoo dolls of you. Your doll sits in the cloud, and they’ll throw 100,000 videos at it to see what’s effective to get you to stick around, or what ad with what messaging is uniquely good at getting you to do something….”
With 2.3 billion users, “Facebook has one of these models for one out of every four humans on earth. Every country, culture, behavior type, socio-economic background,” said Raskin. With those models, and endless simulations, the company can predict your interests and intentions before you even know them…. Without having to attach your name or address to your data profile, a company can nonetheless compare you to other people who have exhibited similar online behavior…
A professor at Columbia law school decries the concentrated power of social media as “a single point of failure for democracy.” But the article also warns about the dangers of health-related data collected from smartwatches. “How will people accidentally cursed with the wrong data profile get affordable insurance?”
Read more of this story at Slashdot.
Texas Secretary of State David Whitley answered questions over the decision to give prosecutors a list of 95,000 individuals on the state’s voter rolls who were considered potential noncitizens before vetting the data, according to U.S. News & World Report. The data wrongly included naturalized citizens. As such, the American Civil Liberties Union requested an injunction from a federal judge to stop the questioning of registered voters about their citizenship based on the list, and a lawsuit was filed by those who appeared erroneously on the list. Meanwhile, Connecticut legislators introduced a bipartisan House bill to prohibit the disclosure of voter registration data for commercial purposes but would allow some data to be made available to election and political committees.
This past Saturday, Michael Ferguson, the auditor general of Canada, passed away after a battle with cancer. He was only 60. There have been several tributes to him, including a good one in the Ottawa Citizen.
The auditor general is one of a small handful of officers of Parliament, including the federal privacy commissioner. What struck me in reading about Ferguson’s lifetime work was, among a great many accomplishments, the fact that he was a rather outspoken public servant who was frustrated with the lack of progress made by our politicians when they received his advice.
Officers of Parliament are in a unique position, and on paper their role is invaluable. But what happens when they provide their advice — sometimes many times over — and both Parliament and the government do nothing meaningful in response?
As a lawyer, I’m paid to provide advice. Almost always, my clients accept my advice and act on it. It happens, on occasion, that they choose a different path. That is their prerogative. For officers of Parliament, however, I think there needs to be more accountability when elected officials do not meaningfully act on the advice they are getting from the people they have appointed. I think we, as citizens, have to hold these politicians to account. In the fall, we are going to be voting for a new federal government. One of the questions I’ll be asking each political party is what they have done to implement the advice from their officers of Parliament, and what they plan on doing going forward. After all, this is expert advice. Why are they so often not following it?
Air Canada’s app is found to have used an analytics service designed to capture the ways users interact with their phones while they use the product, Global News reports. The “session replay” service records a user’s phone screen in order to capture booked flights, changed passwords and credit card information. TechCrunch reports Air Canada is not the only company to use “session replays,” as Hollister, Expedia and Hotels.com also use the service. “Air Canada uses customer provided information to ensure we can support their travel needs and to ensure we can resolve any issues that may affect their trips,” an Air Canada spokesperson said. “This includes user information entered in, and collected on, the Air Canada mobile app. However, Air Canada does not — and cannot — capture phone screens outside of the Air Canada app.”