2021 key risk areas beyond the pandemic

Healix International has identified six key areas of risk – besides the continued impact of COVID-19 – for global organizations in 2021. Natural disasters: Extreme weather events are becoming more pronounced in both frequency and severity. Building resilience to natural disasters is a significant exercise. Faceless threats: In a context of increased isolationism, and more time spent online, individuals will become increasingly disconnected from normative community activity … More


A new approach to scanning social media helps combat misinformation

Rice University researchers have discovered a more efficient way for social media companies to keep misinformation from spreading online using probabilistic filters trained with artificial intelligence. Combating misinformation on social media The new approach to scanning social media is outlined in a study presented by Rice computer scientist Anshumali Shrivastava and statistics graduate student Zhenwei Dai. Their method applies machine learning in a smarter way to improve the performance of Bloom filters, a widely used … More
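Since the approach centers on Bloom filters, a minimal sketch of the classic (untrained) data structure may help; this is illustrative Python only, not the researchers' learned variant:

```python
import hashlib

class BloomFilter:
    """Minimal classic Bloom filter: fast set-membership checks with a
    tunable false-positive rate and no false negatives."""

    def __init__(self, size_bits=1 << 20, num_hashes=7):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(size_bits // 8 + 1)

    def _positions(self, item: str):
        # Derive several bit positions from independent-ish hashes of the item.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, item: str):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item: str):
        return all(self.bits[pos // 8] & (1 << (pos % 8)) for pos in self._positions(item))

# Usage: pre-load URLs of debunked stories, then test incoming links cheaply.
flagged = BloomFilter()
flagged.add("http://example.com/debunked-story")
print("http://example.com/debunked-story" in flagged)   # True
print("http://example.com/unrelated-story" in flagged)  # False (with high probability)
```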


Hiding Malware in Social Media Buttons

Clever tactic:

This new malware was discovered by researchers at Dutch cyber-security company Sansec that focuses on defending e-commerce websites from digital skimming (also known as Magecart) attacks.

The payment skimmer malware pulls off its sleight-of-hand trick with the help of a double payload structure, where the source code of the skimmer script that steals customers’ credit cards is concealed in a social sharing icon loaded as an HTML ‘svg’ element with a ‘path’ element as a container.

The syntax for hiding the skimmer’s source code as a social media button perfectly mimics an ‘svg’ element named using social media platform names (e.g., facebook_full, twitter_full, instagram_full, youtube_full, pinterest_full, and google_full).

A decoder deployed separately elsewhere on the e-commerce site’s server is used to extract and execute the code of the hidden credit card stealer.

This tactic increases the chances of avoiding detection even if one of the two malware components is found since the malware loader is not necessarily stored within the same location as the skimmer payload and their true purpose might evade superficial analysis.
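For illustration only, here is a rough defender-side heuristic in Python for spotting the pattern described above: social-media-named ‘svg’ elements whose markup carries a long encoded blob instead of plain drawing data. The name list, regexes, and length threshold are assumptions for this sketch, not Sansec’s detection logic:

```python
import re

# Platform names reported for the disguised icons (e.g., facebook_full, google_full).
SOCIAL_NAMES = ("facebook", "twitter", "instagram", "youtube", "pinterest", "google")

# Heuristic: a legitimate icon is mostly short drawing commands; a hidden payload
# tends to appear as one long run of base64-like characters. The 200-character
# threshold is an assumption for this sketch.
PAYLOAD_RE = re.compile(r"[A-Za-z0-9+/=]{200,}")

def suspicious_svg_icons(html: str):
    """Yield (platform_hint, payload_preview) for social-media-named svg elements
    that carry an unusually long encoded blob."""
    for match in re.finditer(r"<svg\b[^>]*>.*?</svg>", html, re.S | re.I):
        element = match.group(0)
        platform = next((s for s in SOCIAL_NAMES if s in element.lower()), None)
        blob = PAYLOAD_RE.search(element)
        if platform and blob:
            yield platform, blob.group(0)[:40] + "..."

# Usage: feed it the rendered HTML of a checkout page and review any hits manually.
# for platform, preview in suspicious_svg_icons(page_html):
#     print(f"possible skimmer carrier disguised as {platform}: {preview}")
```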

Digital thought clones manipulate real-time online behavior

In The Social Dilemma, the Netflix documentary that has been in the news recently for its radical revelations, former executives at major technology companies like Facebook, Twitter, and Instagram, among others, share how their ex-employers have developed sophisticated algorithms that not only predict users’ actions but also know which content will keep them hooked on their platforms.


That technology companies prey on their users’ digital activities without their consent or awareness is well known. But Associate Professor Jon Truby and Clinical Assistant Professor Rafael Brown of the Centre for Law and Development at Qatar University have pulled back the curtain on another element that technology companies are pursuing to the detriment of people’s lives, and investigated what we can do about it.

“We had been working on the digital thought clone paper a year before the Netflix documentary aired. So, we were not surprised to see the story revealed by the documentary, which affirms what our research has found,” says Prof Brown, one of the co-authors.

Their paper identifies “digital thought clones,” which act as digital twins that constantly collect personal data in real-time, and then predict and analyze the data to manipulate people’s decisions.

Activity from apps, social media accounts, gadgets, GPS tracking, online and offline behavior and activities, and public records are all used to formulate what they call a “digital thought clone”.

Processing personalized data to test strategies in real-time

The paper defines a digital thought clone as “a personalized digital twin consisting of a replica of all known data and behavior on a specific living person, recording in real-time their choices, preferences, behavioral trends, and decision-making processes.”

“Currently existing or future artificial intelligence (AI) algorithms can then process this personalized data to test strategies in real-time to predict, influence, and manipulate a person’s consumer or online decisions using extremely precise behavioral patterns, and determine which factors are necessary for a different decision to emerge and run all kinds of simulations before testing it in the real world,” says Prof Truby, a co-author of the study.

An example is predicting whether a person will make the effort to compare online prices for a purchase, and if they do not, charging a premium for their chosen purchase. This digital manipulation reduces a person’s ability to make choices freely.

Outside of consumer marketing, imagine if financial institutions use digital thought clones to make financial decisions, such as whether a person would repay a loan.

What if insurance companies judged medical insurance applications by predicting the likelihood of future illnesses based on diet, gym membership, the distance applicants walk in a day (based on their phone’s location history), and their social circle, as derived from their phone contacts and social media groups, among other variables?

The authors suggest that the current views on privacy, where information is treated either as a public or private matter or viewed in contextual relationships of who the information concerns and impacts, are outmoded.

A human-centered framework is needed

A human-centered framework is needed, where a person can decide from the very beginning of their relationship with digital services if their data should be protected forever or until they freely waive it. This rests on two principles: the ownership principle that data belongs to the person, and that certain data is inherently protected; and the control principle, which requires that individuals be allowed to make changes to the type of data collected and if it should be stored. In this framework, people are asked beforehand if data can be shared with an unauthorized entity.

The European Union’s landmark General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) of 2018 can serve as a foundation for governments everywhere to legislate on digital thought clones and all that they entail.

But the authors also raise critical moral and legal questions over the status of these digital thought clones. “Does privacy for humans mean their digital clones are protected as well? Are users giving informed consent to companies if their terms and conditions are couched in misleading language?” asks Prof Truby.

A legal distinction must be made between the digital clone and the biological source. Whether the digital clone can be said to have attained consciousness will be relevant to the inquiry but far more important would be to determine whether the digital clone’s consciousness is the same as that of the biological source.

The world is at a crossroads: should it continue to do nothing and allow for total manipulation by the technology industry or take control through much-needed legislation to ensure that people are in charge of their digital data? It’s not quite a social dilemma.

Why microlearning is the key to cybersecurity education

Cyber attacks are on the rise during this year of uncertainty and chaos. Increased working from home, online shopping, and use of social platforms to stay connected and sane during this year have provided criminals with many attack avenues to exploit.


To mitigate the threat to their networks, systems and assets, many organizations perform some type of annual cybersecurity awareness education, as well as phishing simulations. Unfortunately, attackers are quick to adapt to changes while employees’ behavior changes slowly. Without a dramatic shift in how we educate employees about cybersecurity, all industries are going to see a rise in breaches and costs.

Changing the way people learn about cybersecurity

The average employee still doesn’t think about cybersecurity on a regular basis, because they haven’t been taught to “trust but verify,” but to “trust and be efficient.” Times are changing, though, and employees must be reminded daily that they (and the organization) are constantly under attack.

In the 1950s, there was a real push to increase industrial workplace safety. Worker safety and the number of days on a job site without an incident were made top of mind for all employees. How did they manage to force this shift? Through consistent messaging, with diverse ways of communicating, and by using daily reminders to ingrain the idea of security within the organization and change how it functioned.

Hermann Ebbinghaus, a German psychologist whose pioneering research on memory led to the discovery of forgetting and learning curves, explained that without regular reminders that keep learning in mind, we just forget even what’s important. One of the main goals of training must be to increase retention and overcome people’s natural tendency to forget information they don’t see as critical.

Paul Frankland, a neuroscientist and a senior fellow in CIFAR‘s Child & Brain Development program, and Blake Richards, a neurobiologist and an associate fellow in CIFAR’s Learning in Machines & Brains program, proposed that the real goal of memory is to optimize decision-making. “It’s important that the brain forgets irrelevant details and instead focuses on the stuff that’s going to help make decisions in the real world,” they said.

Right now, cybersecurity education is lost and forgotten in most employees’ brains. It has not become important enough to help them make better decisions in real-world situations.

A different kind of training is needed to become truly “cyber secure” – a training that keeps the idea of cybersecurity top of mind and part of the critical information retained in the brain.

Microlearning and gamification

Most organizations are used to relatively “static” training. For example, fire safety is fairly simple: everyone knows where the closest exit is and how to escape the building. Worker safety training is similarly static: wear a yellow safety vest and a hard hat, make sure to have steel-toed shoes on a job site, etc.

The core messages for most trainings don’t evolve and change. That’s not the case with cybersecurity education and training: attacks are ever-changing, they differ based on the targeted demographic, current affairs, and the environment we are living in.

Cybersecurity education must be closely tied to the value and mission of an organization. It must also be adaptable and evolve with the changing times. Microlearning and gamification are new ways to help encourage and promote consistent cybersecurity learning. This is especially important because of the changing demographics: there are currently more millennials in the workforce than baby boomers, but the training methods have not altered dramatically in the last 30 years. Today’s employee is younger, more tech-savvy and socially connected. Modern training needs to acknowledge and utilize that.

Microlearning is the concept of learning or reviewing small chunks of information more frequently and repeating information in different formats. These variations, repetitions, and continued reminders help the user grasp and retain ideas for the long-term, instead of just memorizing them for a test and then forgetting them.

According to Ebbinghaus, four weeks after a one-time training only 20 percent of the information originally learned is retained by the learner. Microlearning can change those numbers and increase retention to 80 or 90 percent.
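As a back-of-the-envelope illustration of those numbers (an assumed exponential forgetting curve with spaced refreshers, not a model taken from the article), consider the following Python sketch:

```python
import math

def retention(days_since_study: float, stability: float = 17.0) -> float:
    """Illustrative Ebbinghaus-style forgetting curve: retention decays
    exponentially with time since the material was last studied. The stability
    constant is chosen so a one-off training leaves roughly 20% after four weeks."""
    return math.exp(-days_since_study / stability)

def retention_with_microlearning(total_days: int, review_every: int,
                                 stability: float = 17.0, boost: float = 1.6) -> float:
    """Toy spaced-repetition model (an assumption for this sketch): each short
    refresher resets the clock and increases memory stability, flattening the curve."""
    last_review, s = 0, stability
    for day in range(1, total_days):            # refreshers before the final check
        if day % review_every == 0:
            last_review, s = day, s * boost
    return math.exp(-(total_days - last_review) / s)

# One-off training vs. weekly micro-refreshers, measured at the four-week mark:
print(f"one-time training:    {retention(28):.0%}")                        # ~19%
print(f"weekly microlearning: {retention_with_microlearning(28, 7):.0%}")  # ~90%
```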

Gamification amplifies specific game-playing elements within the training, including competition, points accumulation, leaderboards, badges, and battles. Gamification blends with microlearning by turning bite-sized chunks of learning into neurochemical triggers, releasing dopamine, endorphins, oxytocin, and serotonin. These chemicals help reduce the stress and anxiety sometimes associated with learning new material, and increase “feel good” sensations and feelings of connection.

Gamification increases the motivation to learn as well as knowledge recall by stimulating an area of the brain called the hippocampus. From a business perspective, 83% of employees who receive gamified training feel motivated, while 61% of those who receive non-gamified training feel bored and unproductive.

Other reports indicate that companies who use gamification in their training have 60% higher engagement and find it enhances motivation by 50%. Combining microlearning with gamification helps create better training outcomes with more engaged, involved employees who remember and use the skills learned within the training.

Conclusion

The bad guys never stop learning and trying new things, which means the good guys can’t stop either.

Cybersecurity is increasingly central to the existence of an organization, but it’s fairly new, rapidly evolving, and often a source of fear and uncertainty in people. No one wants to admit their ignorance and yet, even cyber experts have a hard time keeping up with the constant changes in the industry. A highly supported microlearning program can help keep employees current and empower them with key decision-making knowledge.

ML tool identifies domains created to promote fake news

Academics at UCL and other institutions have collaborated to develop a machine learning tool that identifies new domains created to promote false information so that they can be stopped before fake news can be spread through social media and online channels.


To counter the proliferation of false information it is important to move fast, before the creators of the information begin to post and broadcast false information across multiple channels.

How does it work?

Anil R. Doshi, Assistant Professor for the UCL School of Management, and his fellow academics set out to develop an early detection system to highlight domains that were most likely to be bad actors. Details contained in the registration information, for example, whether the registering party is kept private, are used to identify the sites.

Doshi commented: “Many models that predict false information use the content of articles or behaviours on social media channels to make their predictions. By the time that data is available, it may be too late. These producers are nimble and we need a way to identify them early.

“By using domain registration data, we can provide an early warning system using data that is arguably difficult for the actors to manipulate. Actors who produce false information tend to prefer remaining hidden and we use that in our model.”

By applying a machine-learning model to domain registration data, the tool was able to correctly identify 92 percent of the false information domains and 96.2 percent of the non-false information domains set up in relation to the 2016 US election before they started operations.
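As a hedged sketch of how such an early warning system could be wired up (not the authors’ actual model), registration metadata can be fed to an off-the-shelf classifier; the feature names and toy data below are illustrative assumptions:

```python
# Minimal sketch: classify newly registered domains from registration metadata.
# Features are assumptions for illustration: privacy-protected registrant,
# registration length, a registrar risk score, and a news-style keyword in the name.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

df = pd.DataFrame({
    "private_registration":  [1, 1, 0, 0, 1, 0],
    "registration_years":    [1, 1, 5, 10, 1, 3],
    "newsy_keyword_in_name": [1, 1, 0, 0, 1, 0],
    "registrar_risk_score":  [0.8, 0.7, 0.1, 0.2, 0.9, 0.3],
    "is_false_info_domain":  [1, 1, 0, 0, 1, 0],   # toy labels
})

X = df.drop(columns="is_false_info_domain")
y = df["is_false_info_domain"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print(clf.predict(X_test))   # domains flagged here would go into an escalated review queue
```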

Why should it be used?

The researchers propose that their tool be used to help regulators, platforms, and policy makers escalate their response to suspect domains: increase monitoring, send warnings or impose sanctions, and ultimately decide whether a domain should be shut down.

The academics behind the research also call for social media companies to invest more effort and money into addressing this problem, which is largely facilitated by their platforms.

Doshi continued: “Fake news which is promoted by social media is common in elections, and it continues to proliferate in spite of the somewhat limited efforts of social media companies and governments to stem the tide and defend against it. Our concern is that this is just the start of the journey.

“We need to recognise that it is only a matter of time before these tools are redeployed on a more widespread basis to target companies; indeed, there is evidence of this already happening.

“Social media companies and regulators need to be more engaged in dealing with this very real issue and corporates need to have a plan in place to quickly identify when they become the target of this type of campaign.”

The research is ongoing in recognition that the environment is constantly evolving: while the tool works well now, the bad actors will respond to it. This underscores the need for constant and ongoing innovation and research in this area.

How fake news detectors can be manipulated

Fake news detectors, which have been deployed by social media platforms like Twitter and Facebook to add warnings to misleading posts, have traditionally flagged online articles as false based on the story’s headline or content.


However, recent approaches have considered other signals, such as network features and user engagement, in addition to the story’s content, to boost their accuracy.

Fake news detectors manipulated through user comments

New research from a team at Penn State’s College of Information Sciences and Technology shows how these fake news detectors can be manipulated through user comments to flag true news as false and false news as true. This attack approach could give adversaries the ability to influence the detector’s assessment of a story even if they are not the story’s original author.

“Our model does not require the adversaries to modify the target article’s title or content,” explained Thai Le, lead author of the paper and doctoral student in the College of IST. “Instead, adversaries can easily use random accounts on social media to post malicious comments to either demote a real story as fake news or promote a fake story as real news.”

That is, instead of fooling the detector by attacking the story’s content or source, commenters can attack the detector itself.

The researchers developed a framework – called Malcom – to generate, optimize, and add malicious comments that were readable and relevant to the article in an effort to fool the detector.

Then, they assessed the quality of the artificially generated comments by seeing if humans could differentiate them from those generated by real users. Finally, they tested Malcom’s performance on several popular fake news detectors.

Malcom performed better than the baselines for existing models, fooling five of the leading neural network-based fake news detectors more than 93% of the time. To the researchers’ knowledge, this is the first model to attack fake news detectors using this method.
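The following toy Python sketch (not the Malcom framework itself, and with made-up word lists and weights) shows why a detector that blends comment signals into its score is exposed: adversarial comments alone can flip its verdict without touching the article:

```python
# Toy illustration of the attack surface: a detector that folds user comments
# into its score can be nudged by adversarial comments alone.
ENDORSING = {"confirmed", "verified", "accurate", "official", "sources"}
DOUBTING  = {"hoax", "fake", "debunked", "misleading", "lie"}

def comment_signal(comments):
    """Crude engagement feature: endorsement minus doubt, averaged per comment."""
    score = 0
    for c in comments:
        words = set(c.lower().split())
        score += len(words & ENDORSING) - len(words & DOUBTING)
    return score / max(len(comments), 1)

def detector(article_score: float, comments) -> str:
    """Pretend detector: blends a content-based score with the comment signal."""
    blended = 0.7 * article_score + 0.3 * comment_signal(comments)
    return "real" if blended > 0 else "fake"

real_comments = ["this looks fake to me", "debunked already", "clearly misleading"]
attack_comments = real_comments + ["confirmed by official sources", "verified and accurate"] * 3

print(detector(article_score=-0.1, comments=real_comments))    # "fake"
print(detector(article_score=-0.1, comments=attack_comments))  # flips to "real"
```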

The benefits

This approach could be appealing to attackers because they do not need to follow traditional steps of spreading fake news, which primarily involves owning the content.

The researchers hope their work will help those charged with creating fake news detectors to develop more robust models and to strengthen methods for detecting and filtering out malicious comments, ultimately helping readers get accurate information to make informed decisions.

“Fake news has been promoted with deliberate intention to widen political divides, to undermine citizens’ confidence in public figures, and even to create confusion and doubts among communities,” the team wrote in their paper.

Added Le, “Our research illustrates that attackers can exploit this dependency on users’ engagement to fool the detection models by posting malicious comments on online articles, and it highlights the importance of having robust fake news detection models that can defend against adversarial attacks.”

Disinformation campaigns can spread like wildfire on social media

76% of Americans believe they’ve encountered disinformation firsthand and 20% say they’ve shared information later shown to be incorrect or intentionally misleading, according to a research released by NortonLifeLock.


Disinformation, or false information intended to mislead or deceive people, is commonly spread by social media users and bots – automated accounts controlled by software – with the intent to sow division among people, create confusion, and undermine confidence in the news surrounding major current events, such as the 2020 U.S. presidential election, COVID-19 and social justice movements.

“Social media has created ideological echo-chambers that make people more susceptible to disinformation,” said Daniel Kats, a senior principal researcher at NortonLifeLock Labs.

“Disinformation campaigns can spread like wildfire on social media and have a long-lasting impact, as people’s opinions and actions may be influenced by the false or misleading information being circulated.”

Fact-checking to stop the spread of disinformation

No matter who or what posts the information, fact-checking is a best practice for consumers to help stop the spread of disinformation. According to the online survey of more than 2,000 US adults, 53% of Americans often question whether information they see on social media is disinformation or fact.

86% of Americans agree that disinformation has the ability to greatly influence someone’s opinion, but 58% acknowledge that disinformation could influence them.

Although 82% of Americans are very concerned about the spread of disinformation, 21% still say social media companies do not have the right to remove it from their platform, with Republicans being almost twice as likely as Democrats to feel this way (25% vs. 13%).

“From disinformation campaigns to deepfakes, it’s becoming increasingly difficult for people to tell real from fake online,” added Kats. “It’s important to maintain a healthy dose of skepticism and to fact check multiple sources – especially before sharing something – to help avoid spreading disinformation.”


Additional findings

  • More than a third of Americans don’t know the true purpose of disinformation. Only 62% of Americans know that disinformation is created to cause a divide or rift between people; 72% of both Republicans and Democrats believe disinformation is created for political gain.
  • 79% of Americans believe social media companies have an obligation to remove disinformation from their platforms, with the majority of Democrats (87%), Republicans (75%) and Independents (75%) supporting this.
  • Democrats and Republicans disagree on who spreads disinformation the most, with Republicans most commonly stating news media outlets are most likely to spread disinformation (36%), and Democrats stating it’s U.S. politicians (28%).
  • Disinformation has taken a toll on relationships, with many Americans having argued with someone (36%), unfriended/unfollowed someone on social media (30%), or taken a break from social media altogether (28%) because of disinformation.

Phishing gangs mounting high-ticket BEC attacks, average loss now $80,000

Companies are losing money to criminals who are launching Business Email Compromise (BEC) attacks as a more remunerative line of business than retail-accounts phishing, APWG reveals. High-ticket BEC attacks: Agari reported that the average wire transfer loss from BEC attacks smashed all previous frontiers, spiking from $54,000 in the first quarter to $80,183 in Q2 2020 as spearphishing gangs reached for bigger returns. Scammers also requested funds in 66 percent of BEC attacks in the form of … More


New technique keeps your online photos safe from facial recognition algorithms

In one second, the human eye can only scan through a few photographs. Computers, on the other hand, are capable of performing billions of calculations in the same amount of time. With the explosion of social media, images have become the new social currency on the internet.


An AI algorithm will identify a cat in the picture on the left but will not detect a cat in the picture on the right

Today, Facebook and Instagram can automatically tag a user in photos, while Google Photos can group one’s photos together via the people present in those photos using Google’s own image recognition technology.

Dealing with threats against digital privacy today, therefore, extends beyond just stopping humans from seeing the photos, but also preventing machines from harvesting personal data from images. The frontiers of privacy protection need to be extended now to include machines.

Safeguarding sensitive information in photos

Led by Professor Mohan Kankanhalli, Dean of the School of Computing at the National University of Singapore (NUS), the research team from the School’s Department of Computer Science has developed a technique that safeguards sensitive information in photos by making subtle changes that are almost imperceptible to humans but render selected features undetectable by known algorithms.

Visual distortion using currently available technologies will ruin the aesthetics of the photograph as the image needs to be heavily altered to fool the machines. To overcome this limitation, the research team developed a “human sensitivity map” that quantifies how humans react to visual distortion in different parts of an image across a wide variety of scenes.

The development process started with a study involving 234 participants and a set of 860 images. Participants were shown two copies of the same image and they had to pick out the copy that was visually distorted.

After analysing the results, the research team discovered that human sensitivity is influenced by multiple factors, including illumination, texture, object sentiment and semantics.

Applying visual distortion with minimal disruption

Using this “human sensitivity map”, the team fine-tuned their technique to apply visual distortion with minimal disruption to image aesthetics by injecting the distortions into areas with low human sensitivity.
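A minimal sketch of the general idea, assuming a sensitivity map is already available (this is illustrative NumPy, not the NUS team’s algorithm, and the strength constant is an assumption):

```python
import numpy as np

def apply_low_visibility_distortion(image: np.ndarray, sensitivity: np.ndarray,
                                    strength: float = 0.08, seed: int = 0) -> np.ndarray:
    """Inject perturbations mainly where the given human-sensitivity map says
    distortion is least noticeable, leaving sensitive regions almost untouched.

    image:       float array in [0, 1], shape (H, W, 3)
    sensitivity: float array in [0, 1], shape (H, W); 1 = humans notice changes easily
    """
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, 1.0, size=image.shape)
    weight = (1.0 - sensitivity)[..., None]      # broadcast over color channels
    perturbed = image + strength * weight * noise
    return np.clip(perturbed, 0.0, 1.0)

# Usage with a dummy image and a map marking the top half as highly sensitive:
img = np.random.default_rng(1).random((64, 64, 3))
sens = np.zeros((64, 64)); sens[:32, :] = 0.9
protected = apply_low_visibility_distortion(img, sens)
```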

It took the NUS team six months of research to develop this novel technique.

“It is too late to stop people from posting photos on social media in the interest of digital privacy. However, the reliance on AI is something we can target as the threat from human stalkers pales in comparison to the might of machines. Our solution enables the best of both worlds as users can still post their photos online safe from the prying eye of an algorithm,” said Prof Kankanhalli.

End users can use this technology to help mask vital attributes on their photos before posting them online and there is also the possibility of social media platforms integrating this into their system by default. This will introduce an additional layer of privacy protection and peace of mind.

The team also plans to extend this technology to videos, which is another prominent type of media frequently shared on social media platforms.

How much is your data worth on the dark web?

Credit card details, online banking logins, and social media credentials are available on the dark web at worryingly low prices, according to Privacy Affairs.


  • Online banking logins cost an average of $35
  • Full credit card details including associated data cost $12-20
  • A full range of documents and account details allowing identity theft can be obtained for $1,500

Forged documents including driving licenses, passports, and auto-insurance cards can be ordered to match stolen data.

The research team scanned dark web marketplaces, forums, and websites, to create the price index for a range of products and services relating to personal data, counterfeit documents, and social media.

Online banking logins cost an average of $35

Online banking credentials typically include login information, as well as name and address of the account holder and specific details on how to access the account undetected.

Full credit card details including associated data cost $12-20

Credit card details are usually formatted as a simple code that includes card number, associated dates and CVV, along with account holders’ data such as address, ZIP code, email address, and phone number.

A full range of documents and account details allowing identity theft can be obtained for $1,285.

Criminals can switch the European ID for a U.S. passport for an additional $950, bringing the total to $2,235 for enough data and documents to do any number of fraudulent transactions.

Malware installation on compromised systems is prevalent

Remote installation of software on 1,000 computers at a time allows criminals to target the public with malware such as ransomware in various countries with a 70% success rate.

Stolen data is very easy to obtain

The general public needs to not only be aware of how prevalent the threat of identity theft is but also how to mitigate that threat by applying due diligence in all aspects of their daily lives.

Software vulnerabilities sometimes first announced on social media

Software vulnerabilities are more likely to be discussed on social media before they’re revealed on a government reporting site, a practice that could pose a national security threat, according to computer scientists at the U.S. Department of Energy’s Pacific Northwest National Laboratory.


At the same time, those vulnerabilities present a cybersecurity opportunity for governments to more closely monitor social media discussions about software gaps, the researchers assert.

“Some of these software vulnerabilities have been targeted and exploited by adversaries of the United States. We wanted to see how discussions around these vulnerabilities evolved,” said lead author Svitlana Volkova, senior research scientist in the Data Sciences and Analytics Group at PNNL.

“Social cybersecurity is a huge threat. Being able to measure how different types of vulnerabilities spread across platforms is really needed.”

Social media – especially GitHub – leads the way

Their research showed that a quarter of social media discussions about software vulnerabilities from 2015 through 2017 appeared on social media sites before landing in the National Vulnerability Database, the official U.S. repository for such information. Further, for this segment of vulnerabilities, it took an average of nearly 90 days for the gap discussed on social media to show up in the national database.

The research focused on three social platforms – GitHub, Twitter and Reddit – and evaluated how discussions about software vulnerabilities spread on each of them. The analysis showed that GitHub, a popular networking and development site for programmers, was by far the most likely of the three sites to be the starting point for discussion about software vulnerabilities.

It makes sense that GitHub would be the launching point for discussions about software vulnerabilities, the researchers wrote, because GitHub is a platform geared towards software development.

The researchers found that for nearly 47 percent of the vulnerabilities, the discussions started on GitHub before moving to Twitter and Reddit. For about 16 percent of the vulnerabilities, these discussions started on GitHub even before they were published to official sites.

Codebase vulnerabilities are common

The research points at the scope of the issue, noting that nearly all commercial software codebases contain open-source components and that nearly 80 percent of codebases include at least one vulnerability.

Further, each commercial software codebase contains an average of 64 vulnerabilities. The National Vulnerability Database, which curates and publicly releases vulnerabilities known as Common Vulnerabilities and Exposures “is drastically growing,” the study says, “and includes more than 100,000 known vulnerabilities to date.”

In their paper, the researchers discuss which U.S. adversaries might take note of such vulnerabilities. They mention Russia, China and others, and note that there are differences in how the three platforms are used within those countries when exploiting software vulnerabilities.

According to the study, cyberattacks in 2017 later linked to Russia involved more than 200,000 victims, affected more than 300,000 computers, and caused about $4 billion in damages.

“These attacks happened because there were known vulnerabilities present in modern software,” the study says, “and some Advanced Persistent Threat groups effectively exploited them to execute a cyberattack.”

Bots or human: Both pose a threat

The researchers also distinguished between social media traffic generated by humans and automated messages from bots. A social media message crafted by an actual person and not generated by a machine will likely be more effective at raising awareness of a software vulnerability, the researchers found, emphasizing that it was important to differentiate the two.

“We categorized users as likely bots or humans, by using the Botometer tool,” the study says, “which uses a wide variety of user-based, friend, social network, temporal, and content-based features to perform bot vs. human classification.”

The tool is especially useful in separating bots from human discussions on Twitter, a platform that the researchers noted can be helpful for accounts seeking to spread an agenda.
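For illustration, here is a deliberately simple bot-likelihood scorer in Python (not the Botometer tool the study used) that combines user-based, temporal and content features; the features and weights are assumptions for the sketch:

```python
from dataclasses import dataclass

@dataclass
class Account:
    tweets_per_day: float      # temporal feature
    followers: int             # user-based features
    following: int
    duplicate_ratio: float     # content feature: share of near-identical posts
    default_profile: bool      # user-based feature

def bot_likelihood(a: Account) -> float:
    """Return a rough 0..1 score; the weights are illustrative assumptions."""
    score = 0.0
    score += 0.30 * min(a.tweets_per_day / 100.0, 1.0)                   # hyperactive posting
    score += 0.25 * min(a.following / max(a.followers, 1) / 20.0, 1.0)   # follow-spam ratio
    score += 0.30 * a.duplicate_ratio                                    # copy-paste content
    score += 0.15 * float(a.default_profile)                             # unconfigured profile
    return min(score, 1.0)

suspect = Account(tweets_per_day=240, followers=15, following=2900,
                  duplicate_ratio=0.8, default_profile=True)
human = Account(tweets_per_day=4, followers=320, following=280,
                duplicate_ratio=0.05, default_profile=False)
print(f"suspect: {bot_likelihood(suspect):.2f}, human: {bot_likelihood(human):.2f}")
```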

Ultimately, awareness of social media’s ability to spread information about software vulnerabilities provides a heads-up for institutions, the study says.

“Social media signals preceding official sources could potentially allow institutions to anticipate and prioritize which vulnerabilities to address first,” it says.

“Furthermore, quantification of the awareness of vulnerabilities and patches spreading in online social environments can provide an additional signal for institutions to utilize in their open source risk-reward decision making.”

Why do people talk a good game about privacy but fail to follow up in real life?

While most people will say they are extremely concerned with their online privacy, previous experiments have shown that, in practice, users readily divulge privacy information online.


A team of Penn State researchers identified a dozen subtle – but powerful – reasons that may shed light on why people talk a good game about privacy, but fail to follow up in real life.

“Most people will tell you they’re pretty worried about their online privacy and that they take precautions, such as changing their passwords,” said S. Shyam Sundar, James P. Jimirro Professor of Media Effects in the Donald P. Bellisario College of Communications and co-director of the Media Effects Research Laboratory (MERL).

“But, in reality, if you really look at what people do online and on social media, they tend to reveal all too much. What we think is going on is that people make disclosures in the heat of the moment by falling for contextual cues that appear on an interface.”

Cues influence people to reveal information online

Sundar said that certain cues analyzed by the researchers significantly increased the chance that people would turn over private information such as social security numbers or phone numbers. The cues exploit common pre-existing beliefs about authority, bandwagon, reciprocity, sense-of-community, community-building, self-preservation, control, instant gratification, transparency, machine, publicness and mobility.

“What we did in this study is identify 12 different kinds of appeals that influence people to reveal information online,” said Sundar. “These appeals are based on rules of thumb that we all hold in our head, called heuristics.”

For example, the rule of thumb that ‘if most others reveal their information, then it is safe for me to disclose as well’ is labeled ‘bandwagon heuristic’ by the study.

“There are certainly more than 12 heuristics, but these are the dominant ones that play an important role in privacy disclosure,” added Sundar, who worked with Mary Beth Rosson, Professor-in-Charge of Human Computer Interaction and director of graduate programs in the College of Information Sciences and Technology.

The researchers explain that heuristics are mental shortcuts that could be triggered by cues on a website or mobile app.

“These cues may not always be obvious,” according to Rosson. “The bandwagon cue, for example, can be as simple as a statement that is added to a website or app to prompt information disclosure,” she added.

“For example, when you go on LinkedIn and you see a statement that says your profile is incomplete and that 70 percent of your connections have completed their profiles, that’s a cue that triggers your need to follow others – which is what we call a bandwagon effect,” said Sundar. “We found that those with a stronger pre-existing belief in ‘bandwagon heuristic’ were more likely to reveal personal information in such a scenario.”

Trust in authority

For the authority cue, Rosson said that a graphic that signals the site is being overseen by a trusted authority may make people comfortable with turning private information over to the company.

“The presence of a logo of a trusted agency such as FDIC or even a simple icon showing a lock can make users of online banking feel safe and secure, and it makes them feel that somewhere somebody is looking after their security,” said Rosson.

The researchers said that ingrained trust in authority, or what they call ‘authority heuristic,’ is the reason for disclosure of personal information in such scenarios.

“When interviewed, our study participants attributed their privacy disclosure to the cues more often than other reasons,” said Sundar.

An awareness of major cues that prey on common rules of thumb may make people more savvy web users and could help them avoid placing their private information into the wrong hands.

“The number one reason for doing this study is to increase media literacy among online users,” said Sundar.

He added that the findings could also be used to create alerts that warn users when they encounter these cues.

“People want to do the right thing and they want to protect their privacy, but in the heat of the moment online, they are swayed by these contextual cues,” said Rosson.

“One way to avoid this is to introduce ‘just-in-time’ alerts. Just as users are about to reveal information, an alert could pop up on the site and ask them if they are sure they want to do that. That might give them a bit of a pause to think about that transaction,” she added.

For the study, the researchers recruited 786 people to participate in an online survey. The participants were then asked to review 12 scenarios that they might encounter online and asked to assess their willingness to disclose personal information based on each scenario.

Researchers use AI and create early warning system to identify disinformation online

Researchers at the University of Notre Dame are using artificial intelligence to develop an early warning system that will identify manipulated images, deepfake videos and disinformation online.


The project is an effort to combat the rise of coordinated social media campaigns to incite violence, sow discord and threaten the integrity of democratic elections.

Identify disinformation online: How does it work?

The scalable, automated system uses content-based image retrieval and applies computer vision-based techniques to root out political memes from multiple social networks.
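As a hedged sketch of what content-based image retrieval for memes can look like (not the Notre Dame system), a perceptual average hash groups visually near-identical images even after recompression or light edits:

```python
import numpy as np
from PIL import Image

def average_hash(path: str, hash_size: int = 8) -> int:
    """Shrink to hash_size x hash_size grayscale, threshold at the mean,
    and pack the bits into an integer fingerprint."""
    img = Image.open(path).convert("L").resize((hash_size, hash_size))
    pixels = np.asarray(img, dtype=np.float32)
    bits = (pixels > pixels.mean()).flatten()
    return int("".join("1" if b else "0" for b in bits), 2)

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

# Usage: memes whose hashes differ by only a few bits are likely the same image
# circulating under small edits, so they can be clustered and tracked together.
# h1, h2 = average_hash("meme_a.jpg"), average_hash("meme_b.jpg")
# same_campaign = hamming(h1, h2) <= 5
```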

“Memes are easy to create and even easier to share,” said Tim Weninger, associate professor in the Department of Computer Science and Engineering at Notre Dame. “When it comes to political memes, these can be used to help get out the vote, but they can also be used to spread inaccurate information and cause harm.”

Weninger, along with Walter Scheirer, an assistant professor in the Department of Computer Science and Engineering at Notre Dame, and members of the research team collected more than two million images and content from various sources on Twitter and Instagram related to the 2019 general election in Indonesia.

The results of that election, in which the left-leaning, centrist incumbent garnered a majority vote over the conservative, populist candidate, sparked a wave of violent protests that left eight people dead and hundreds injured. Their study found both spontaneous and coordinated campaigns with the intent to influence the election and incite violence.

Those campaigns consisted of manipulated images exhibiting false claims and misrepresentation of incidents, logos belonging to legitimate news sources being used on fabricated news stories and memes created with the intent to provoke citizens and supporters of both parties.

While the ramifications of such campaigns were evident in the case of the Indonesian general election, the threat to democratic elections in the West already exists. The research team said they are developing the system to flag manipulated content to prevent violence, and to warn journalists or election monitors of potential threats in real time.

Providing users with tailored options for monitoring content

The system, which is in the research and development phase, would be scalable to provide users with tailored options for monitoring content. While many challenges remain, such as determining an optimal means of scaling up data ingestion and processing for quick turnaround, Scheirer said the system is currently being evaluated for transition to operational use.

Development is not far behind when it comes to monitoring the 2020 general election in the United States, he said, and the team is already collecting relevant data.

“The disinformation age is here,” said Scheirer. “A deepfake replacing actors in a popular film might seem fun and lighthearted but imagine a video or a meme created for the sole purpose of pitting one world leader against another – saying words they didn’t actually say. Imagine how quickly that content could be shared and spread across platforms. Consider the consequences of those actions.”

How people deal with fake news or misinformation in their social media feeds

Social media platforms, such as Facebook and Twitter, provide people with a lot of information, but it’s getting harder and harder to tell what’s real and what’s not.



Researchers at the University of Washington wanted to know how people investigated potentially suspicious posts on their own feeds. The team watched 25 participants scroll through their Facebook or Twitter feeds while, unbeknownst to them, a Google Chrome extension randomly added debunked content on top of some of the real posts.

Participants had various reactions to encountering a fake post: Some outright ignored it, some took it at face value, some investigated whether it was true, and some were suspicious of it but then chose to ignore it.

The research

“We wanted to understand what people do when they encounter fake news or misinformation in their feeds. Do they notice it? What do they do about it?” said senior author Franziska Roesner, a UW associate professor in the Paul G. Allen School of Computer Science & Engineering.

“There are a lot of people who are trying to be good consumers of information and they’re struggling. If we can understand what these people are doing, we might be able to design tools that can help them.”

Previous research on how people interact with misinformation asked participants to examine content from a researcher-created account, not from someone they chose to follow.

“That might make people automatically suspicious,” said lead author Christine Geeng, a UW doctoral student in the Allen School. “We made sure that all the posts looked like they came from people that our participants followed.”

The researchers recruited participants ages 18 to 74 from across the Seattle area, explaining that the team was interested in seeing how people use social media. Participants used Twitter or Facebook at least once a week and often used the social media platforms on a laptop.

Then the team developed a Chrome extension that would randomly add fake posts or memes that had been debunked by the fact-checking website Snopes.com on top of real posts to make it temporarily appear they were being shared by people on participants’ feeds. So instead of seeing a cousin’s post about a recent vacation, a participant would see their cousin share one of the fake stories instead.

The researchers either installed the extension on the participant’s laptop or the participant logged into their accounts on the researcher’s laptop, which had the extension enabled.

The team told the participants that the extension would modify their feeds – the researchers did not say how – and would track their likes and shares during the study – though, in fact, it wasn’t tracking anything. The extension was removed from participants’ laptops at the end of the study.

“We’d have them scroll through their feeds with the extension active,” Geeng said. “I told them to think aloud about what they were doing or what they would do if they were in a situation without me in the room. So then people would talk about ‘Oh yeah, I would read this article,’ or ‘I would skip this.’ Sometimes I would ask questions like, ‘Why are you skipping this? Why would you like that?’”

Participants could not actually like or share the fake posts. A retweet would share the real content beneath the fake post. The one time a participant did retweet content under the fake post, the researchers helped them undo it after the study was over. On Facebook, the like and share buttons didn’t work at all.

The results

After the participants encountered all the fake posts – nine for Facebook and seven for Twitter – the researchers stopped the study and explained what was going on.

“It wasn’t like we said, ‘Hey, there were some fake posts in there.’ We said, ‘It’s hard to spot misinformation. Here were all the fake posts you just saw. These were fake, and your friends did not really post them,’” Geeng said.

“Our goal was not to trick participants or to make them feel exposed. We wanted to normalize the difficulty of determining what’s fake and what’s not.”

The researchers concluded the interview by asking participants to share what types of strategies they use to detect misinformation.

In general, the researchers found that participants ignored many posts, especially those they deemed too long, overly political or not relevant to them.

But certain types of posts made participants skeptical. For example, people noticed when a post didn’t match someone’s usual content. Sometimes participants investigated suspicious posts – by looking at who posted it, evaluating the content’s source or reading the comments below the post – and other times, people just scrolled past them.

“I am interested in the times that people are skeptical but then choose not to investigate. Do they still incorporate it into their worldviews somehow?” Roesner said.

“At the time someone might say, ‘That’s an ad. I’m going to ignore it.’ But then later do they remember something about the content, and forget that it was from an ad they skipped? That’s something we’re trying to study more now.”

While this study was small, it does provide a framework for how people react to misinformation on social media, the team said. Now researchers can use this as a starting point to seek interventions to help people resist misinformation in their feeds.

“Participants had these strong models of what their feeds and the people in their social network were normally like. They noticed when it was weird. And that surprised me a little,” Roesner said.

“It’s easy to say we need to build these social media platforms so that people don’t get confused by fake posts. But I think there are opportunities for designers to incorporate people and their understanding of their own networks to design better social media platforms.”

The rise of human-driven fraud attacks

There has been a major spike in human-driven attacks, which rose 90% compared to the previous six months, according to Arkose Labs.


Changing attack patterns were felt across geographies and industries, at a time of the year when digital commerce was at its peak.

In Q4 of 2019, advanced, multi-step attacks that attempted to evade fraud defenses using a blend of automated and human-driven techniques were detected. Automated fraud attacks, which grew by 25%, are becoming increasingly complex as fraudsters become more effective at mimicking trusted customer behavior.

While automated attacks are still prevalent across most industries, the notable rise in human-driven attacks is attributed to fraudsters leveraging what Arkose Labs defines as “sweatshop-like workers” to enhance attacks.

Sweatshop-driven attack levels increased during high online traffic periods as fraudsters attempted to blend in with legitimate traffic, with peak attack levels 50% higher than seen in Q2 of 2019.

The key countries where human-driven attacks originated from shifted in Q4, showing fraudsters tapping into human farms across the globe to keep costs low and profits high. Sweatshop-driven attacks from Venezuela, Vietnam, Thailand, India and Ukraine grew, while attacks from the Philippines, Russia and Ukraine almost tripled compared to Q2 2019.

“Notable shifts are occurring in today’s threat landscape, with fraudsters no longer looking to make a quick buck and instead opting to play the long game, implementing multi-step attacks that don’t initially reveal their fraudulent intent,” said Kevin Gosschalk, CEO of Arkose Labs.

“Fraudsters are increasingly augmenting their attacks by outsourcing activity to human sweatshop resources, causing a surge in fraud within certain industries such as online gaming and social media.”

Attacks on social media platforms are increasingly human-driven

Due to the volume of rich personal data on social media platforms and high user activity levels, social applications are lucrative targets for fraudsters looking to scrape content, write fake reviews, steal information or disseminate spam and malicious content.

In Q4 of 2019, there was a sharp increase in attack volumes for both social media account registrations and logins. In fact, two in every five login attempts and one in every five new account registrations were fraudulent, making this one of the highest industry attack rates.

The human share of the attack mix also rose, with more than 50% of social media login attacks being human-driven.

“The elevated rate of human-driven login attacks is supported by organized sweatshops, with fraudsters attempting to hack into legitimate users’ accounts to manipulate or steal credentials and disseminate spam,” explained Vanita Pandey, VP of Marketing and Strategy at Arkose Labs.

“With two in every five social media logins being an attack and more than half of those attacks being human-driven, it’s clear that fraudsters are targeting this customer touchpoint with hopes of downstream monetization.”

Online gaming has emerged as a lucrative channel for fraudsters

As millions increasingly engage in online gaming, the industry has emerged as a prime target for fraudsters across the globe.

Gaming fraud in Q4 of 2019 demonstrated highly sophisticated attack patterns in comparison to other industries, with fraudsters leveraging gaming applications to use stolen payment methods, steal in-game assets, abuse the auction houses and disseminate malicious content.

Fraudsters are using bots to build online gaming account profiles and sell accounts with higher levels and assets, while also targeting online currencies used within select games. Overall, the report found that online gaming attack rates grew 25% last quarter, with most of the growth coming from human-driven attacks on new account registrations and logins.


Combating cybercrime requires a zero tolerance approach

Rising human-driven attack rates demonstrate that fraudsters are willing to be creative and invest more in their attacks, often laying the groundwork months in advance using lower cost, automated attacks.

As long as there is money to be made in fraud and businesses continue to tolerate attacks, fraudsters will continue to identify the most effective attack methods to achieve optimal ROI.

“Ultimately, the only sustainable approach to combating cybercrime is adopting a zero tolerance approach that undermines the economic incentives behind fraud. Tolerating fraud as ‘the cost of doing business’ exacerbates the problem long-term,” said Gosschalk.

“To identify the subtle, tell-tale signs that predict downstream fraud, organizations must prioritize in-depth profiling of activity across all customer touchpoints. By combining digital intelligence with targeted friction, large-scale attacks will quickly become unsustainable for fraudsters.”

Social media platforms leave 95% of reported fake accounts up, study finds

One hundred cardboard cutouts of Facebook founder and CEO Mark Zuckerberg stand outside the US Capitol in Washington, DC, April 10, 2018.

It’s no secret that every major social media platform is chock-full of bad actors, fake accounts, and bots. The big companies continually pledge to do a better job weeding out organized networks of fake accounts, but a new report confirms what many of us have long suspected: they’re pretty terrible at doing so.

The report comes this week from researchers with the NATO Strategic Communication Centre of Excellence (StratCom). Through the four-month period between May and August of this year, the research team conducted an experiment to see just how easy it is to buy your way into a network of fake accounts and how hard it is to get social media platforms to do anything about it.

The research team spent €300 (about $332) to purchase engagement on Facebook, Instagram, Twitter, and YouTube, the report (PDF) explains. That sum bought 3,520 comments, 25,750 likes, 20,000 views, and 5,100 followers. They then used those interactions to work backward to about 19,000 inauthentic accounts that were used for social media manipulation purposes.

About a month after buying all that engagement, the research team looked at the status of all those fake accounts and found that about 80 percent were still active. So they reported a sample selection of those accounts to the platforms as fraudulent. Then came the most damning statistic: three weeks after being reported as fake, 95 percent of the fake accounts were still active.

“Based on this experiment and several other studies we have conducted over the last two years, we assess that Facebook, Instagram, Twitter, and YouTube are still failing to adequately counter inauthentic behavior on their platforms,” the researchers concluded. “Self-regulation is not working.”

Too big to govern

The social media platforms are fighting a distinctly uphill battle. The scale of Facebook’s challenge, in particular, is enormous. The company boasts 2.2 billion daily users of its combined platforms. Broken down by platform, the original big blue Facebook app has about 2.45 billion monthly active users, and Instagram has more than one billion.

Facebook frequently posts status updates about “removing coordinated inauthentic behavior” from its services. Each of those updates, however, tends to snag between a few dozen and a few hundred accounts, pages, and groups, usually sponsored by foreign actors. That’s barely a drop in the bucket just compared to the 19,000 fake accounts that one research study uncovered from one $300 outlay, let alone the vast ocean of other fake accounts out there in the world.

The issue, however, is both serious and pressing. A majority of the accounts found in this study were engaged in commercial behavior rather than political troublemaking. But attempted foreign interference in both a crucial national election on the horizon in the UK this month and the high-stakes US federal election next year is all but guaranteed.

The Senate Intelligence Committee’s report (PDF) on social media interference in the 2016 US election is expansive and thorough. The committee determined Russia’s Internet Research Agency (IRA) used social media to “conduct an information warfare campaign designed to spread disinformation and societal division in the United States,” including targeted ads, fake news articles, and other tactics. The IRA used and uses several different platforms, the committee found, but its primary vectors are Facebook and Instagram.

Facebook has promised to crack down hard on coordinated inauthentic behavior heading into the 2020 US election, but its challenges with content moderation are by now legendary. Working conditions for the company’s legions of contract content moderators are terrible, as repeatedly reported—and it’s hard to imagine the number of humans you’d need to review literally trillions of pieces of content posted every day. Using software tools to recognize and block inauthentic actors is obviously the only way to capture it at any meaningful scale, but the development of those tools is clearly also still a work in progress.

Facebook, Twitter ban malicious SDK that removed member info

Twitter warned its users that a software development kit (SDK) developed by oneAudience could have allowed that company to obtain account information.

Facebook also posted a notice concerning not only the oneAudience SDK, but also one from fellow SDK maker Mobiburn.

OneAudience confirmed the problem and then shut down the SDK along with its associated websites, but said the data was never intended to be collected, never added to its database and never used.

“Recently, we were advised that personal information from hundreds of mobile IDs may have been passed to our oneAudience platform. This data was never intended to be collected, never added to our database and never used,” oneAudience said in a statement.

OneAudience’s stated goal was to “help developers earn new revenue by enhancing app user information into the audience insights advertisers crave.”

In a statement, Twitter, which described the SDK as “malicious”, said the issue was not within its software, but resulted from a lack of isolation between SDKs within an application. The SDK itself is normally embedded within a mobile application, where it could potentially exploit a vulnerability to allow information including email, username and last tweet to be accessed. There is even the possibility of an account being taken over via the flaw.

OneAudience said the SDK was updated on November 13, 2019 to stop it from collecting information, and the update was pushed to its partners.

Facebook took on both developers, saying oneAudience and Mobiburn were paying developers to place malicious SDKs in apps.

“After investigating, we removed the apps from our platform for violating our platform policies and issued cease and desist letters against One Audience and Mobiburn,” Facebook said.

Twitter determined that the oneAudience SDK only impacted Android devices used to access Twitter.

Facebook and Twitter are notifying those whose data was affected, and Twitter has informed Apple, Google and other industry partners about the SDK.


Most Americans feel powerless to prevent data collection, online tracking

Most U.S. adults say that the potential risks they face because of data collection by companies (81%) and the government (66%) outweigh the benefits, but most (>80%) feel that they have little or no control over how these entities use their personal information, a recent Pew Research Center study on USA digital privacy attitudes has revealed. Interesting discoveries on USA digital privacy attitudes: The study has also shown that: 72% of respondents feel that all, … More
