As holiday mobile commerce breaks records, retail apps display security red flags

Driven by the pandemic, many consumers rely on mobile apps to buy everything from daily essentials to holiday gifts. However, according to a recent analysis, there are alarming security concerns among some of the top 50 Android retail mobile apps. Retail mobile apps are missing basic security functionality: most of the top 50 retail mobile applications analyzed in September 2020 did not apply sufficient code hardening and runtime application self-protection (RASP) techniques. These security … More

The post As holiday mobile commerce breaks records, retail apps display security red flags appeared first on Help Net Security.

A new approach to scanning social media helps combat misinformation

Rice University researchers have discovered a more efficient way for social media companies to keep misinformation from spreading online using probabilistic filters trained with artificial intelligence. Combating misinformation on social media The new approach to scanning social media is outlined in a study presented by Rice computer scientist Anshumali Shrivastava and statistics graduate student Zhenwei Dai. Their method applies machine learning in a smarter way to improve the performance of Bloom filters, a widely used … More

The post A new approach to scanning social media helps combat misinformation appeared first on Help Net Security.

ML tool identifies domains created to promote fake news

Academics at UCL and other institutions have collaborated to develop a machine learning tool that identifies new domains created to promote false information so that they can be stopped before fake news can be spread through social media and online channels.


To counter the proliferation of false information it is important to move fast, before the creators of the information begin to post and broadcast false information across multiple channels.

How does it work?

Anil R. Doshi, Assistant Professor for the UCL School of Management, and his fellow academics set out to develop an early detection system to highlight domains that were most likely to be bad actors. Details contained in the registration information, for example, whether the registering party is kept private, are used to identify the sites.

Doshi commented: “Many models that predict false information use the content of articles or behaviours on social media channels to make their predictions. By the time that data is available, it may be too late. These producers are nimble and we need a way to identify them early.

“By using domain registration data, we can provide an early warning system using data that is arguably difficult for the actors to manipulate. Actors who produce false information tend to prefer remaining hidden and we use that in our model.”

By applying a machine-learning model to domain registration data, the tool was able to correctly identify 92 percent of the false information domains and 96.2 percent of the non-false information domains set up in relation to the 2016 US election before they started operations.
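The study's exact feature set and model are not described here, but the general approach can be sketched: a classifier trained on registration attributes such as whether the registrant's identity is privacy-protected. The feature names, training rows, and model choice below are illustrative assumptions, not the study's data or method.

```python
# Hypothetical sketch: scoring newly registered domains from registration
# metadata. The feature names, training rows and model choice are invented for
# illustration; they are not the study's data or its actual model.
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction import DictVectorizer
from sklearn.pipeline import make_pipeline

training_domains = [
    {"privacy_protected": 1, "registrar": "budget-registrar", "registration_years": 1},
    {"privacy_protected": 1, "registrar": "budget-registrar", "registration_years": 1},
    {"privacy_protected": 0, "registrar": "major-registrar",  "registration_years": 5},
    {"privacy_protected": 0, "registrar": "major-registrar",  "registration_years": 2},
]
labels = [1, 1, 0, 0]   # 1 = later promoted false information, 0 = benign

model = make_pipeline(DictVectorizer(sparse=False),
                      RandomForestClassifier(n_estimators=100, random_state=0))
model.fit(training_domains, labels)

new_registration = {"privacy_protected": 1, "registrar": "budget-registrar",
                    "registration_years": 1}
print(model.predict_proba([new_registration])[0][1])   # estimated risk score
```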

Why should it be used?

The researchers propose that their tool be used to help regulators, platforms, and policy makers follow an escalated process: increase monitoring of suspect domains, send warnings or impose sanctions, and ultimately decide whether a domain should be shut down.

The academics behind the research also call for social media companies to invest more effort and money into addressing this problem which is largely facilitated by their platforms.

Doshi continued: “Fake news which is promoted by social media is common in elections and it continues to proliferate in spite of the somewhat limited efforts of social media companies and governments to stem the tide and defend against it. Our concern is that this is just the start of the journey.

“We need to recognise that it is only a matter of time before these tools are redeployed on a more widespread basis to target companies, indeed there is evidence of this already happening.

“Social media companies and regulators need to be more engaged in dealing with this very real issue and corporates need to have a plan in place to quickly identify when they become the target of this type of campaign.”

The research is ongoing in recognition that the environment is constantly evolving and while the tool works well now, the bad actors will respond to it. This underscores the need for constant and ongoing innovation and research in this area.

How fake news detectors can be manipulated

Fake news detectors, which have been deployed by social media platforms like Twitter and Facebook to add warnings to misleading posts, have traditionally flagged online articles as false based on the story’s headline or content.


However, recent approaches have considered other signals, such as network features and user engagements, in addition to the story’s content to boost their accuracies.

Fake news detectors manipulated through user comments

However, new research from a team at Penn State’s College of Information Sciences and Technology shows how these fake news detectors can be manipulated through user comments to flag true news as false and false news as true. This attack approach could give adversaries the ability to influence the detector’s assessment of the story even if they are not the story’s original author.

“Our model does not require the adversaries to modify the target article’s title or content,” explained Thai Le, lead author of the paper and doctoral student in the College of IST. “Instead, adversaries can easily use random accounts on social media to post malicious comments to either demote a real story as fake news or promote a fake story as real news.”

That is, instead of fooling the detector by attacking the story’s content or source, commenters can attack the detector itself.

The researchers developed a framework – called Malcom – to generate, optimize, and add malicious comments that were readable and relevant to the article in an effort to fool the detector.

Then, they assessed the quality of the artificially generated comments by seeing if humans could differentiate them from those generated by real users. Finally, they tested Malcom’s performance on several popular fake news detectors.

Malcom performed better than the baseline for existing models by fooling five of the leading neural network-based fake news detectors more than 93% of the time. To the researchers’ knowledge, this is the first model to attack fake news detectors using this method.
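Malcom itself is a learned comment generator and is not reproduced here; the toy sketch below only illustrates the dependency the attack exploits: a detector that blends an article-content score with the average score of the article's comments can be pushed across its decision threshold by attacker-posted comments. All scores, weights, and the threshold are invented.

```python
# Toy illustration only (not the Malcom model): a detector whose verdict blends
# an article-content score with the average score of the article's comments can
# be nudged across its decision threshold by attacker-posted comments.
# All scores, weights and the threshold below are invented.

def detector_score(article_score, comment_scores, article_weight=0.5):
    """Blend content and engagement signals; higher means 'looks more credible'."""
    if not comment_scores:
        return article_score
    comment_avg = sum(comment_scores) / len(comment_scores)
    return article_weight * article_score + (1 - article_weight) * comment_avg

THRESHOLD = 0.5                       # above this, the story is labeled "real"
article_score = 0.30                  # the content itself looks dubious
organic_comments = [0.35, 0.40]

print(detector_score(article_score, organic_comments))            # ~0.34 -> labeled fake

# Adversary appends comments crafted to score as highly credible.
malicious_comments = [0.95, 0.97, 0.96]
attacked = detector_score(article_score, organic_comments + malicious_comments)
print(attacked, attacked > THRESHOLD)                             # ~0.51 -> now labeled real
```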

The benefits

This approach could be appealing to attackers because they do not need to follow traditional steps of spreading fake news, which primarily involves owning the content.

The researchers hope their work will help those charged with creating fake news detectors to develop more robust models and strengthen methods to detect and filter-out malicious comments, ultimately helping readers get accurate information to make informed decisions.

“Fake news has been promoted with deliberate intention to widen political divides, to undermine citizens’ confidence in public figures, and even to create confusion and doubts among communities,” the team wrote in their paper.

Added Le, “Our research illustrates that attackers can exploit this dependency on users’ engagement to fool the detection models by posting malicious comments on online articles, and it highlights the importance of having robust fake news detection models that can defend against adversarial attacks.”

Disinformation campaigns can spread like wildfire on social media

76% of Americans believe they’ve encountered disinformation firsthand and 20% say they’ve shared information later shown to be incorrect or intentionally misleading, according to research released by NortonLifeLock.


Disinformation, or false information intended to mislead or deceive people, is commonly spread by social media users and bots – automated accounts controlled by software – with the intent to sow division among people, create confusion, and undermine confidence in the news surrounding major current events, such as the 2020 U.S. presidential election, COVID-19 and social justice movements.

“Social media has created ideological echo-chambers that make people more susceptible to disinformation,” said Daniel Kats, a senior principal researcher at NortonLifeLock Labs.

“Disinformation campaigns can spread like wildfire on social media and have a long-lasting impact, as people’s opinions and actions may be influenced by the false or misleading information being circulated.”

Fact-checking helps stop the spread of disinformation

No matter who or what posts the information, fact-checking is a best practice for consumers to help stop the spread of disinformation. According to the online survey of more than 2,000 US adults, 53% of Americans often question whether information they see on social media is disinformation or fact.

86% of Americans agree that disinformation has the ability to greatly influence someone’s opinion, yet only 58% acknowledge that it could influence them.

Although 82% of Americans are very concerned about the spread of disinformation, 21% still say social media companies do not have the right to remove it from their platform, with Republicans being almost twice as likely as Democrats to feel this way (25% vs. 13%).

“From disinformation campaigns to deepfakes, it’s becoming increasingly difficult for people to tell real from fake online,” added Kats. “It’s important to maintain a healthy dose of skepticism and to fact check multiple sources – especially before sharing something – to help avoid spreading disinformation.”


Additional findings

  • More than a third of Americans don’t know the true purpose of disinformation. Only 62% of Americans know that disinformation is created to cause a divide or rift between people; 72% of both Republicans and Democrats believe disinformation is created for political gain.
  • 79% of Americans believe social media companies have an obligation to remove disinformation from their platforms, with the majority of Democrats (87%), Republicans (75%) and Independents (75%) supporting this.
  • Democrats and Republicans disagree on who spreads disinformation the most, with Republicans most commonly stating news media outlets are most likely to spread disinformation (36%), and Democrats stating it’s U.S. politicians (28%).
  • Disinformation has taken a toll on relationships, with many Americans having argued with someone (36%), unfriended/unfollowed someone on social media (30%), or taken a break from social media altogether (28%) because of disinformation.

People spend a little less time looking at fake news headlines than factual ones

The term fake news has been a part of our vocabulary since the 2016 US presidential election. As the amount of fake news in circulation grows larger and larger, particularly in the United States, it often spreads like wildfire. Subsequently, there is an ever-increasing need for fact-checking and other solutions to help people navigate the oceans of factual and fake news that surround us.


Help may be on the way, via an interdisciplinary field where eye-tracking technology and computer science meet. A study by University of Copenhagen and Aalborg University researchers shows that people’s eyes react differently to factual and false news headlines.

Eyes spend a bit less time on fake news headlines

Researchers placed 55 different test subjects in front of a screen to read 108 news headlines. A third of the headlines were fake. The test subjects were assigned a so-called ‘pseudo-task’ of assessing which of the news items was the most recent. What they didn’t know was that some of the headlines were fake. Using eye-tracking technology, the researchers analyzed how much time each person spent reading the headlines and how many fixations the person made per headline.

“We thought that it would be interesting to see if there’s a difference in the way people read news headlines, depending on whether the headlines are factual or false. This has never been studied. And, it turns out that there is indeed a statistically significant difference,” says PhD fellow and lead author Christian Hansen, of the University of Copenhagen’s Department of Computer Science.

His colleague from the same department, PhD fellow Casper Hansen, adds: “The study demonstrated that our test subjects’ eyes spent less time on false headlines and fixated on them a bit less compared with the headlines that were true. All in all, people gave fake news headlines a little less visual attention, despite their being unaware that the headlines were fake.”

The computer scientists can’t explain the difference, nor do they dare make any guesses. Nevertheless, they were surprised by the result.

The researchers used the results to create an algorithm that can predict whether a news headline is fake based on eye movements.
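The study's algorithm is not published in this article; a minimal sketch of the idea, assuming the two measured signals (reading time and fixation count) as features and a logistic regression as the model, might look like this. All measurements below are invented.

```python
# Hypothetical sketch: predicting whether a headline is fake from per-headline
# gaze features. The measurements and the model choice are invented for
# illustration; they are not the study's data or its published algorithm.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [total reading time in seconds, number of fixations]
X = np.array([
    [2.4, 10], [2.6, 11], [2.5, 10], [2.7, 12],   # factual headlines
    [2.0, 8],  [2.1, 9],  [1.9, 8],  [2.2, 9],    # fake headlines (slightly less attention)
])
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])            # 1 = fake headline

clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.predict_proba([[2.05, 8]])[0][1])       # estimated probability the headline is fake
```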

Could support fact-checking

As a next step, the researchers would like to examine whether it is possible to measure the same differences in eye movements on a larger scale, beyond the lab – preferably using ordinary webcams or mobile phone cameras. It will, of course, require that people allow access to their cameras.

The two computer scientists imagine that eye-tracking technology could eventually help with the fact-checking of news stories, all depending upon their ability to collect data from people’s reading patterns. The data could come from news aggregator website users or from the users of other sources, e.g., Feedly and Google News, as well as from social media, like Facebook and Twitter, where the amount of fake news is large as well.

“Professional fact-checkers in the media and organizations need to read through lots of material just to find out what needs to be fact-checked. A tool to help them prioritize material could be of great help,” concludes Christian Hansen.

91% of cybersecurity pros want stricter internet measures to tackle misinformation

There’s a growing unease amongst the cybersecurity community around the recent rise in misinformation and fake domains, Neustar reveals.


48% of cybersecurity professionals regard the increase in misinformation as a threat to the enterprise, with 49% ranking the threat as ‘very significant’. In response, 46% of organizations already have plans in place to ensure greater emphasis on their ability to react to the rise of misinformation and fake domains.

An additional 35% said it will be a focus area for them in the next six months, while 13% would consider it if it continues to be an issue.

“Misinformation is by no means new – from the beginning of time it has been used as a key tactic by people trying to achieve major goals with limited means,” said Rodney Joffe, Chairman of NISC, Senior Vice President and Fellow at Neustar.

“The current global pandemic, however, has led to a sharp uptick in misinformation and the registration of fake domains, with cybercriminals using tactics such as phishing, scams and ransomware to spread misleading news, falsified evidence and incorrect advice. While the motives of malicious actors may differ, the erosion of trust caused by misinformation poses a range of ethical, social and technological challenges to organizations.”

The complexity of misinformation

In spite of these current anxieties, solving the problem of misinformation is complex. Only 36% of security execs are very confident with their organization’s ability to successfully identify misinformation and fake domains.

Underlining these concerns, 91% of respondents stated that stricter measures should be implemented on the internet if the recent surge in misinformation and fake domains continues.

“Organizations must be vigilant when it comes to assessing how their brand is being used to spread potentially damaging misinformation,” Joffe continued.

“On an open internet, where people can freely register domains and spread information via social media, organizations need to build global taskforces specialising in monitoring and shutting down fake domains and false information. This will involve deploying an always-on approach and using intelligent threat data to measure and mitigate the risk.”

Cyberattacks maintaining an upward trend

Findings from the latest NISC research also highlighted a steep 12-point increase on the International Cyber Benchmarks Index year-on-year. Calculated based on the changing level of threat and impact of cyberattacks, the Index has maintained an upward trend since May 2017.

During May – June 2020, DDoS attacks (23%) and system compromise (20%) were ranked as the greatest concerns to cybersecurity professionals, followed by ransomware (18%) and intellectual property (15%). During this period, organizations focused most on increasing their ability to respond to vendor or customer impersonation, targeted hacking and DDoS attacks.

Content farms develop and spread fake news about COVID-19 for profit

RiskIQ released a research report revealing a large-scale digital scam advertisement campaign spread through fraudulent news sites and affiliate ad networks that cater to highly partisan audiences.


Scammers are taking advantage of COVID-19 to spread fake news

The report details how misleading, false, and inflammatory news stories about the COVID-19 pandemic are developed on a massive scale by “content farms,” which monetize through ads served by ad networks targeting highly partisan readership. Some of these ads are purpose-built to lure readers into misleading ‘subscription traps’ for products billed as remedies or cures for the virus.

How does a subscription trap work?

A subscription trap works by offering a free or deeply discounted trial of a product while hiding clauses in the terms of service that sign victims up for costly payments remitted on a repeated basis, usually monthly. These subscriptions are often difficult, if not impossible, to escape.

The report clearly defines an ecosystem between partisan content farms that monetize through ad revenue, ad networks that take a cut of the profit, and advertisers that use the generated traffic to ensnare victims in subscription traps. These fraudulent subscriptions are for products such as dietary supplements or beauty products and, more recently, supposed remedies to COVID-19 in the form of CBD oil.

“Scam ads leading to subscription traps seem to be endemic to content farm sites, but there’s a particular network of companies and individuals using the COVID-19 pandemic for financial gain,” said Jordan Herman, threat researcher, RiskIQ.

“We wanted to do a deep dive into this ecosystem to expose how these shady practices are taking advantage of people on a massive scale and making the schemers a lot of money in the process.”

Leveraging fear, anxiety, and uncertainty around COVID-19

These content farms generate traffic by creating politically charged articles leveraging the fear, anxiety, and uncertainty around COVID-19 and gearing them toward a specific audience. These articles, often misleading or patently false, target readers the creators have assessed will likely read, share, and engage with them.

The content farm operators publish these articles on their websites, which use social media accounts and spam email campaigns to further their reach and generate more traffic they can monetize.

Ways AI could be used to facilitate crime over the next 15 years

Fake audio or video content has been ranked by experts as the most worrying use of artificial intelligence in terms of its potential applications for crime or terrorism, according to a new UCL report. The study identified 20 ways AI could be used to facilitate crime over the next 15 years. These were ranked in order of concern – based on the harm they could cause, the potential for criminal profit or gain, how easy … More

The post Ways AI could be used to facilitate crime over the next 15 years appeared first on Help Net Security.

Investigation highlights the dangers of using counterfeit Cisco switches

An investigation, which concluded that counterfeit network switches were designed to bypass processes that authenticate system components, illustrates the security challenges posed by counterfeit hardware.


The suspected counterfeit switch (on the left) has port numbers in bright white, while the known genuine device has them in grey. The text itself is misaligned. The triangles indicating different ports are different shapes.

Counterfeit Cisco Catalyst 2960-X series switches

F-Secure Consulting’s Hardware Security team investigated two different counterfeit versions of Cisco Catalyst 2960-X series switches. The counterfeits were discovered by an IT company after a software update stopped them from working, which is a common reaction of forged/modified hardware to new software. At the company’s request, researchers performed a thorough analysis of the counterfeits to determine the security implications.

The investigators found that while the counterfeits did not have any backdoor-like functionality, they did employ various measures to fool security controls. For example, one of the units exploited what the research team believes to be a previously undiscovered software vulnerability to undermine secure boot processes that provide protection against firmware tampering.

“We found that the counterfeits were built to bypass authentication measures, but we didn’t find evidence suggesting the units posed any other risks,” said Dmitry Janushkevich, a senior consultant with F-Secure Consulting’s Hardware Security team, and lead author of the report. “The counterfeiters’ motives were likely limited to making money by selling the components. But we see motivated attackers use the same kind of approach to stealthily backdoor companies, which is why it’s important to thoroughly check any modified hardware.”

The counterfeits were physically and operationally similar to an authentic Cisco switch. The engineering of one of the units suggests that the counterfeiters either invested heavily in replicating Cisco’s original design or had access to proprietary engineering documentation that helped them create a convincing copy.

According to F-Secure Consulting’s Head of Hardware Security Andrea Barisani, organizations face considerable security challenges in trying to mitigate the security implications of sophisticated counterfeits such as those analyzed in the report.

“Security departments can’t afford to ignore hardware that’s been tampered with or modified, which is why they need to investigate any counterfeits that they’ve been tricked into using,” explained Barisani. “Without tearing down the hardware and examining it from the ground up, organizations can’t know if a modified device had a larger security impact. And depending on the case, the impact can be major enough to completely undermine security measures intended to protect an organization’s security, processes, infrastructure, etc.”

How to ensure you’re not using counterfeit components

F-Secure has the following advice to help organizations prevent themselves from using counterfeit components:

  • Source all your components from authorized resellers.
  • Have clear internal processes and policies governing procurement.
  • Ensure all components run the latest available software provided by vendors.
  • Make note of even physical differences between different units of the same product, no matter how subtle they may be.

Fake “DNS Update” emails targeting site owners and admins

Attackers are trying to trick web administrators into sharing their admin account login credentials by urging them to activate DNSSEC for their domain.


Scam emails lead to fake login pages

The scam was spotted by Sophos researchers when the admin(s) of their own security marketing blog received an email impersonating WordPress and urging them to click on a link to perform the activation.

The link took them to a “surprisingly believable” phishing page with logos and icons that matched their service provider (WordPress VIP), and instructed them to enter their WordPress account username and password to start the update.

“The scam then shows you some fake but believable progress messages to make you think that a genuine ‘site upgrade’ has kicked off, including pretending to perform some sort of digital ‘file signing’ at the end,” Sophos’s security proselytiser Paul Ducklin explained.

Finally, either intentionally or by mistake, the victim is redirected to a 404 error page.

Customized phishing pages

The malicious link in the email contained encoded banner and URL information that allowed researchers (and attackers) to customize the scam phishing page with different logos, to impersonate numerous different hosting providers.

“We didn’t even need to guess at the banner names that we could use, because the crooks had left the image directory browsable on their phishing site. In total, the crooks had 98 different ripped-off brand images ready to go, all the way from Akamai to Zen Cart,” Ducklin noted.

The attackers check HTTP headers for information about the target’s hosting provider and customize the scam email and the phishing site accordingly.
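As a rough illustration of how much a response can reveal, the sketch below inspects HTTP response headers for strings that hint at the hosting platform. The keyword-to-brand mapping is an invented example for illustration, not the attackers' actual logic.

```python
# Rough illustration (defender's view): HTTP response headers often hint at the
# platform or provider behind a site, which is the kind of signal used to pick
# matching branding. The keyword-to-brand mapping is an invented example, not
# the attackers' code.
import requests

PROVIDER_HINTS = {
    "wordpress": "WordPress",
    "cloudflare": "Cloudflare",
    "akamai": "Akamai",
    "shopify": "Shopify",
}

def guess_provider(url):
    resp = requests.head(url, allow_redirects=True, timeout=10)
    header_blob = " ".join(f"{k}: {v}" for k, v in resp.headers.items()).lower()
    return [name for hint, name in PROVIDER_HINTS.items() if hint in header_blob]

print(guess_provider("https://example.com"))   # placeholder URL
```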


Users who fall for the scam, enter their login credentials into the phishing site, and don’t have 2-factor authentication turned on are effectively handing control of their site to the scammers.

Ducklin advises admins never to log in anywhere through links sent via email, to turn on 2FA whenever they can, and to use a password manager.

“Password managers not only pick strong and random passwords automatically, but also associate each password with a specific URL. That makes it much harder to put the right password into the wrong site, because the password manager simply won’t know which account to use when faced with an unknown phishing site,” he noted.

80% of consumers trust a review platform more if it displays fake reviews

Many people are using COVID-19 quarantine to get projects done at home, meaning plenty of online shopping for tools and supplies. But do you buy blind? Research shows 97% of consumers consult product reviews before making a purchase.


Fake reviews are a significant threat for online review portals and product search engines given the potential for damage to consumer trust. Little is known about what review portals should do with fraudulent reviews after detecting them.

New research looks at how consumers respond to potentially fraudulent reviews and how review portals can leverage this information to design better fraud management policies.

“We find consumers have more trust in the information provided by review portals that display fraudulent reviews alongside nonfraudulent reviews, as opposed to the common practice of censoring suspected fraudulent reviews,” said Beibei Li of Carnegie Mellon University.

“The impact of fraudulent reviews on consumers’ decision-making process increases with the uncertainty in the initial evaluation of product quality.”

Fake reviews aid decision making

A study conducted by Li alongside Michael Smith, also of Carnegie Mellon University, and Uttara Ananthakrishnan of the University of Washington, says consumers do not effectively process the content of fraudulent reviews, whether it’s positive or negative. This result makes the case for incorporating fraudulent reviews and doing it in the form of a score to aid consumers’ decision making.

Fraudulent reviews occur when businesses artificially inflate ratings of their own products or artificially lower the ratings of a competitor’s product by generating fake reviews, either directly or through paid third parties.

“The growing interest in online product reviews for legitimate promotion has been accompanied by an increase in fraudulent reviews,” continued Li. “Various media and industry reports estimate that about 15%-30% of all online reviews are fraudulent.”

Platforms don’t have a common way to handle fraudulent reviews. Some delete fraudulent reviews (Google), some publicly acknowledge censoring fake reviews (Amazon), while other portals, such as Yelp, go one step further by making the fraudulent reviews visible to the public with a notation that it is potentially fraudulent.

The study used large-scale data from Yelp to conduct experiments measuring trust, and found that 80% of surveyed users agree they trust a review platform more if it displays fake review information, because businesses are less likely to write fraudulent reviews on such platforms.

Transparency over censorship

Meanwhile, 85% of surveyed users believe they should have a choice in viewing truthful and fraudulent information, and that platforms should leave it to consumers to decide whether to use fraudulent review information in judging the quality of a business.

The study also finds that consumers tend to trust the information provided by platforms more when the platform distinguishes fraudulent reviews from nonfraudulent ones and displays both, as compared with the more common practice of censoring suspected fraudulent reviews.

“Our results highlight the importance of transparency over censorship and may have implications for public policy. Just as there are strong incentives to fraudulently manipulate consumer beliefs pertaining to commerce, there are also strong incentives to fraudulently manipulate individual beliefs pertaining to public policy decisions,” concluded Li.

When this fraudulent activity information is made available to all consumers, platforms can effectively embed a built-in penalty for businesses that are caught writing fake reviews.

A platform may admit to users that there is fraud on its site, but that is balanced by an increase in trust from consumers who already suspected that some reviews may be fraudulent and now see that something is being done to address it.

Researchers develop a way to quickly purge old network data

Researchers from North Carolina State University and the Army Research Office have demonstrated a new model of how competing pieces of information spread in online social networks and the Internet of Things. The findings could be used to disseminate accurate information more quickly, displacing false information about anything from computer security to public health.


“Whether in the IoT or on social networks, there are many circumstances where old information is circulating and could cause problems – whether it’s old security data or a misleading rumor,” says Wenye Wang, co-author of a paper on the work and a professor of electrical and computer engineering at NC State.

“Our work here includes a new model and related analysis of how new data can displace old data in these networks.”

“Ultimately, our work can be used to determine the best places to inject new data into a network so that the old data can be eliminated faster,” says Jie Wang, a postdoctoral researcher at NC State and first author of the paper.

Does network size matter?

In their paper, the researchers show that a network’s size plays a significant role in how quickly “good” information can displace “bad” information. However, a large network is not necessarily better or worse than a small one. Instead, the speed at which good data travels is primarily affected by the network’s structure.

A highly interconnected network can disseminate new data very quickly. And the larger the network, the faster the new data will travel.

However, in networks that are connected primarily by a limited number of key nodes, those nodes serve as bottlenecks. As a result, the larger this type of network is, the slower the new data will travel.

The researchers also identified an algorithm that can be used to assess which point in a network would allow you to spread new data throughout the network most quickly.
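The paper's algorithm is not reproduced here; as a simple stand-in, one could rank candidate injection points by closeness centrality, i.e. prefer the node with the smallest average shortest-path distance to the rest of the network, as in the toy sketch below.

```python
# Hedged sketch (not the paper's algorithm): rank candidate injection points by
# closeness centrality, i.e. prefer the node with the smallest average
# shortest-path distance to everyone else, so new data displaces old data in
# the fewest hops. The graph is a toy stand-in for a real network.
import networkx as nx

G = nx.barabasi_albert_graph(n=200, m=2, seed=42)   # toy scale-free network

closeness = nx.closeness_centrality(G)
best_node = max(closeness, key=closeness.get)
print("inject new data at node", best_node,
      "closeness =", round(closeness[best_node], 3))
```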

“Practically speaking, this could be used to ensure that an IoT network purges old data as quickly as possible and is operating with new, accurate data,” Wenye Wang says.

“But these findings are also applicable to online social networks, and could be used to facilitate the spread of accurate information regarding subjects that affect the public,” says Jie Wang. “For example, we think it could be used to combat misinformation online.”

Researchers use AI and create early warning system to identify disinformation online

Researchers at the University of Notre Dame are using artificial intelligence to develop an early warning system that will identify manipulated images, deepfake videos and disinformation online.


The project is an effort to combat the rise of coordinated social media campaigns to incite violence, sow discord and threaten the integrity of democratic elections.

Identify disinformation online: How does it work?

The scalable, automated system uses content-based image retrieval and applies computer vision-based techniques to root out political memes from multiple social networks.
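The team's pipeline is not public in this article; one standard building block for this kind of content-based retrieval is perceptual hashing, which groups near-duplicate images even after light edits. The sketch below is a generic illustration of that technique, not the Notre Dame system, and uses a synthetic image so it is self-contained.

```python
# Generic illustration of content-based image retrieval via perceptual hashing
# (not the Notre Dame pipeline): near-duplicate images keep nearly identical
# perceptual hashes even after small edits. The "meme" here is synthetic so the
# example is self-contained.
from PIL import Image, ImageDraw
import imagehash

def make_meme(fill):
    img = Image.new("RGB", (200, 120), "white")
    ImageDraw.Draw(img).rectangle([20, 20, 180, 100], fill=fill)
    return img

original = make_meme("red")
edited_repost = make_meme("darkred")   # stand-in for a lightly edited re-share

distance = imagehash.phash(original) - imagehash.phash(edited_repost)  # Hamming distance
print("near-duplicate" if distance <= 10 else "different image", distance)
```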

“Memes are easy to create and even easier to share,” said Tim Weninger, associate professor in the Department of Computer Science and Engineering at Notre Dame. “When it comes to political memes, these can be used to help get out the vote, but they can also be used to spread inaccurate information and cause harm.”

Weninger, along with Walter Scheirer, an assistant professor in the Department of Computer Science and Engineering at Notre Dame, and members of the research team collected more than two million images and content from various sources on Twitter and Instagram related to the 2019 general election in Indonesia.

The results of that election, in which the left-leaning, centrist incumbent garnered a majority vote over the conservative, populist candidate, sparked a wave of violent protests that left eight people dead and hundreds injured. Their study found both spontaneous and coordinated campaigns with the intent to influence the election and incite violence.

Those campaigns consisted of manipulated images exhibiting false claims and misrepresentation of incidents, logos belonging to legitimate news sources being used on fabricated news stories and memes created with the intent to provoke citizens and supporters of both parties.

While the ramifications of such campaigns were evident in the case of the Indonesian general election, the threat to democratic elections in the West already exists. The research team said they are developing the system to flag manipulated content to prevent violence, and to warn journalists or election monitors of potential threats in real time.

Providing users with tailored options for monitoring content

The system, which is in the research and development phase, would be scalable to provide users with tailored options for monitoring content. While many challenges remain, such as determining an optimal means of scaling up data ingestion and processing for quick turnaround, Scheirer said the system is currently being evaluated for transition to operational use.

Development is not too far behind when it comes to the possibility of monitoring the 2020 general election in the United States, he said, and their team is already collecting relevant data.

“The disinformation age is here,” said Scheirer. “A deepfake replacing actors in a popular film might seem fun and lighthearted but imagine a video or a meme created for the sole purpose of pitting one world leader against another – saying words they didn’t actually say. Imagine how quickly that content could be shared and spread across platforms. Consider the consequences of those actions.”

How people deal with fake news or misinformation in their social media feeds

Social media platforms, such as Facebook and Twitter, provide people with a lot of information, but it’s getting harder and harder to tell what’s real and what’s not.


Participants had various reactions to encountering a fake post

Researchers at the University of Washington wanted to know how people investigated potentially suspicious posts on their own feeds. The team watched 25 participants scroll through their Facebook or Twitter feeds while, unbeknownst to them, a Google Chrome extension randomly added debunked content on top of some of the real posts.

Participants had various reactions to encountering a fake post: Some outright ignored it, some took it at face value, some investigated whether it was true, and some were suspicious of it but then chose to ignore it.

The research

“We wanted to understand what people do when they encounter fake news or misinformation in their feeds. Do they notice it? What do they do about it?” said senior author Franziska Roesner, a UW associate professor in the Paul G. Allen School of Computer Science & Engineering.

“There are a lot of people who are trying to be good consumers of information and they’re struggling. If we can understand what these people are doing, we might be able to design tools that can help them.”

Previous research on how people interact with misinformation asked participants to examine content from a researcher-created account, not from someone they chose to follow.

“That might make people automatically suspicious,” said lead author Christine Geeng, a UW doctoral student in the Allen School. “We made sure that all the posts looked like they came from people that our participants followed.”

The researchers recruited participants ages 18 to 74 from across the Seattle area, explaining that the team was interested in seeing how people use social media. Participants used Twitter or Facebook at least once a week and often used the social media platforms on a laptop.

Then the team developed a Chrome extension that would randomly add fake posts or memes that had been debunked by the fact-checking website Snopes.com on top of real posts to make it temporarily appear they were being shared by people on participants’ feeds. So instead of seeing a cousin’s post about a recent vacation, a participant would see their cousin share one of the fake stories instead.

The researchers either installed the extension on the participant’s laptop or the participant logged into their accounts on the researcher’s laptop, which had the extension enabled.

The team told the participants that the extension would modify their feeds – the researchers did not say how – and would track their likes and shares during the study – though, in fact, it wasn’t tracking anything. The extension was removed from participants’ laptops at the end of the study.

“We’d have them scroll through their feeds with the extension active,” Geeng said. “I told them to think aloud about what they were doing or what they would do if they were in a situation without me in the room. So then people would talk about ‘Oh yeah, I would read this article,’ or ‘I would skip this.’ Sometimes I would ask questions like, ‘Why are you skipping this? Why would you like that?’”

Participants could not actually like or share the fake posts. A retweet would share the real content beneath the fake post. The one time a participant did retweet content under the fake post, the researchers helped them undo it after the study was over. On Facebook, the like and share buttons didn’t work at all.

The results

After the participants encountered all the fake posts – nine for Facebook and seven for Twitter – the researchers stopped the study and explained what was going on.

“It wasn’t like we said, ‘Hey, there were some fake posts in there.’ We said, ‘It’s hard to spot misinformation. Here were all the fake posts you just saw. These were fake, and your friends did not really post them,’” Geeng said.

“Our goal was not to trick participants or to make them feel exposed. We wanted to normalize the difficulty of determining what’s fake and what’s not.”

The researchers concluded the interview by asking participants to share what types of strategies they use to detect misinformation.

In general, the researchers found that participants ignored many posts, especially those they deemed too long, overly political or not relevant to them.

But certain types of posts made participants skeptical. For example, people noticed when a post didn’t match someone’s usual content. Sometimes participants investigated suspicious posts – by looking at who posted it, evaluating the content’s source or reading the comments below the post – and other times, people just scrolled past them.

“I am interested in the times that people are skeptical but then choose not to investigate. Do they still incorporate it into their worldviews somehow?” Roesner said.

“At the time someone might say, ‘That’s an ad. I’m going to ignore it.’ But then later do they remember something about the content, and forget that it was from an ad they skipped? That’s something we’re trying to study more now.”

While this study was small, it does provide a framework for how people react to misinformation on social media, the team said. Now researchers can use this as a starting point to seek interventions to help people resist misinformation in their feeds.

“Participants had these strong models of what their feeds and the people in their social network were normally like. They noticed when it was weird. And that surprised me a little,” Roesner said.

“It’s easy to say we need to build these social media platforms so that people don’t get confused by fake posts. But I think there are opportunities for designers to incorporate people and their understanding of their own networks to design better social media platforms.”

Some commercial password managers vulnerable to attack by fake apps

Security experts recommend using a complex, random and unique password for every online account, but remembering them all would be a challenging task. That’s where password managers come in handy.


Encrypted vaults are accessed by a single master password or PIN, and they store and autofill credentials for the user. However, researchers at the University of York have shown that some commercial password managers (depending on the version) may not be a watertight way to ensure cybersecurity.

After creating a malicious app to impersonate a legitimate Google app, they were able to fool two out of five of the password managers they tested into giving away a password.

What is the weakness?

The research team found that some of the password managers used weak criteria for identifying an app and which username and password to suggest for autofill. This weakness allowed the researchers to impersonate a legitimate app simply by creating a rogue app with an identical name.

Senior author of the study, Dr Siamak Shahandashti from the Department of Computer Science at the University of York, said: “Vulnerabilities in password managers provide opportunities for hackers to extract credentials, compromising commercial information or violating employee information. Because they are gatekeepers to a lot of sensitive information, rigorous security analysis of password managers is crucial.

“Our study shows that a phishing attack from a malicious app is highly feasible – if a victim is tricked into installing a malicious app it will be able to present itself as a legitimate option on the autofill prompt and have a high chance of success.”

“In light of the vulnerabilities in some commercial password managers our study has exposed, we suggest they need to apply stricter matching criteria that are not merely based on an app’s purported package name.”
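To illustrate the difference the researchers are pointing at, the sketch below contrasts matching an autofill request on package name alone with also checking the app's signing-certificate fingerprint. The association record and fingerprints are hypothetical, and real password managers rely on platform association mechanisms rather than code like this.

```python
# Illustrative sketch of the weakness described above: matching an autofill
# target on package name alone lets any rogue app that reuses the name through,
# while also checking the signing-certificate fingerprint does not. The stored
# association and fingerprints are hypothetical.
KNOWN_ASSOCIATION = {
    "package": "com.example.bank",                 # hypothetical legitimate app
    "signing_cert_sha256": "ab12cd34ef56...",      # hypothetical fingerprint
}

def weak_match(app):
    return app["package"] == KNOWN_ASSOCIATION["package"]

def stricter_match(app):
    return (app["package"] == KNOWN_ASSOCIATION["package"]
            and app["signing_cert_sha256"] == KNOWN_ASSOCIATION["signing_cert_sha256"])

rogue_app = {"package": "com.example.bank", "signing_cert_sha256": "ff99ee88aa77..."}
print(weak_match(rogue_app), stricter_match(rogue_app))   # True False
```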

“I am not aware of the different ways a password manager could properly identify an app so not to fall victim to this kind of attack. But it does remind me of concerns we’ve had a long time about alternative keyboard apps getting access to anything you type on your phone or tablet,” Per Thorsheim, founder of PasswordsCon, told Help Net Security.

“The risk presented with autofill on compromised websites pertains only to the site’s credentials, not the user’s entire vault. It is always in the user’s best interest to enable MFA for all online accounts, including LastPass, since it can protect them further,” a LastPass spokesperson told us via email.

“While continued efforts from the web and Android communities will also be required, we have already implemented changes to our LastPass Android app to mitigate and minimize the risk of the potential attack detailed in this report. Our app requires explicit user approval before filling any unknown apps, and we’ve increased the integrity of our app associations database in order to minimize the risk of any “fake apps” being filled/accepted.”

Other vulnerabilities

The researchers also discovered some password managers did not have a limit on the number of times a master PIN or password could be entered. This means that if hackers had access to an individual’s device they could launch a “brute force” attack, guessing a four-digit PIN in around 2.5 hours.
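A quick back-of-the-envelope check of that figure: a four-digit PIN has 10,000 possible values, so exhausting them in about 2.5 hours implies a guess rate of roughly one attempt per second; the guess rate is the only assumption here.

```python
# Back-of-the-envelope check of the 2.5-hour figure; the implied guess rate is
# the only assumption.
combinations = 10 ** 4              # four-digit PIN: 0000-9999
hours = 2.5
guesses_per_second = combinations / (hours * 3600)
print(round(guesses_per_second, 2), "guesses per second")   # ~1.11
```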

The researchers also drew up a list of vulnerabilities identified in a previous study and tested whether they had been resolved. They found that while the most serious of these issues had been fixed, many had not been addressed.

Some issues have been fixed long ago

The researchers disclosed these vulnerabilities to the companies developing those password managers.

Lead author of the study, Michael Carr, said: “New vulnerabilities were found through extensive testing and responsibly disclosed to the vendors. Some were fixed immediately while others were deemed low priority. More research is needed to develop rigorous security models for password managers, but we would still advise individuals and companies to use them as they remain a more secure and useable option. While it’s not impossible, hackers would have to launch a fairly sophisticated attack to access the information they store.”

Commenting on this research for Help Net Security, Jeffrey Goldberg, Chief Defender Against the Dark Arts at 1Password, said: “Academic research of this nature can be misread by the public. The versions of 1Password that were examined in that study were from June and July 2017. As is the convention for such research, the researchers talked to us before making their findings public and gave us the opportunity to fix things that needed to be fixed. The research, and publication of it now, does have real value both to developers of password managers and for future examination of password managers, but given its historical nature, it is not a very useful guide to the general public in assessing the current state of password manager security.”

Tiny cryptographic ID chip can help combat hardware counterfeiting

To combat supply chain counterfeiting, which can cost companies billions of dollars annually, MIT researchers have invented a cryptographic ID tag that’s small enough to fit on virtually any product and verify its authenticity.


A 2018 report from the Organization for Economic Co-operation and Development estimates about $2 trillion worth of counterfeit goods will be sold worldwide in 2020. That’s bad news for consumers and companies that order parts from different sources worldwide to build products. Counterfeiters tend to use complex routes that include many checkpoints, making it challenging to verify their origins and authenticity. Consequently, companies can end up with imitation parts.

Wireless ID tags are becoming increasingly popular for authenticating assets as they change hands at each checkpoint. But these tags come with various size, cost, energy, and security tradeoffs that limit their potential.

Popular radio-frequency identification (RFID) tags, for instance, are too large to fit on tiny objects such as medical and industrial components, automotive parts, or silicon chips. RFID tags also lack strong security measures.

Some tags are built with encryption schemes to protect against cloning and ward off hackers, but they’re large and power hungry. Shrinking the tags means giving up both the antenna package — which enables radio-frequency communication — and the ability to run strong encryption.

In a paper the researchers presented at the IEEE International Solid-State Circuits Conference, they describe an ID chip that navigates all those tradeoffs. It’s millimeter-sized and runs on relatively low levels of power supplied by photovoltaic diodes.

It also transmits data at far ranges, using a power-free “backscatter” technique that operates at a frequency hundreds of times higher than RFIDs. Algorithm optimization techniques also enable the chip to run a popular cryptography scheme that guarantees secure communications using extremely low energy.

“We call it the ‘tag of everything.’ And everything should mean everything,” says co-author Ruonan Han, an associate professor in the Department of Electrical Engineering and Computer Science and head of the Terahertz Integrated Electronics Group in the Microsystems Technology Laboratories (MTL).

“If I want to track the logistics of, say, a single bolt or tooth implant or silicon chip, current RFID tags don’t enable that. We built a low-cost, tiny chip without packaging, batteries, or other external components, that stores and transmits sensitive data.”

Joining Han on the paper are: graduate students Mohamed I. Ibrahim, Muhammad Ibrahim Wasiq Khan, and Chiraag S. Juvekar; former postdoc associate Wanyeong Jung; former postdoc Rabia Tugce Yazicigil; and Anantha P. Chandrakasan, who is the dean of the MIT School of Engineering and the Vannevar Bush Professor of Electrical Engineering and Computer Science.

Solving the problem of size

The work began as a means of creating better RFID tags. The team wanted to do away with packaging, which makes the tags bulky and increases manufacturing cost.

They also wanted communication in the high terahertz frequencies between microwave and infrared radiation (roughly 100 gigahertz to 10 terahertz), which enables chip integration of an antenna array and wireless communications at greater reader distances.

Finally, they wanted cryptographic protocols because RFID tags can be scanned by essentially any reader and transmit their data indiscriminately.

But including all those functions would normally require building a fairly large chip. Instead, the researchers came up with “a pretty big system integration,” Ibrahim says, that enabled putting everything on a monolithic — meaning, not layered — silicon chip that was only about 1.6 square millimeters.

One innovation is an array of small antennas that transmit data back and forth via backscattering between the tag and reader. Backscatter, used commonly in RFID technologies, happens when a tag reflects an input signal back to a reader with slight modulations that correspond to data transmitted.

In the researchers’ system, the antennas use some signal splitting and mixing techniques to backscatter signals in the terahertz range. Those signals first connect with the reader and then send data for encryption.

Implemented into the antenna array is a “beam steering” function, where the antennas focus signals toward a reader, making them more efficient, increasing signal strength and range, and reducing interference. This is the first demonstration of beam steering by a backscattering tag, according to the researchers.

Tiny holes in the antennas allow light from the reader to pass through to photodiodes underneath that convert the light into about 1 volt of electricity. That powers up the chip’s processor, which runs the chip’s “elliptic-curve-cryptography” (ECC) scheme.

ECC uses a combination of private keys (known only to a user) and public keys (disseminated widely) to keep communications private. In the researchers’ system, the tag uses a private key and a reader’s public key to identify itself only to valid readers. That means any eavesdropper who doesn’t possess the reader’s private key should not be able to identify which tag is part of the protocol by monitoring just the wireless link.
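The chip's actual protocol is not spelled out beyond this description; a generic software illustration of the same idea is an elliptic-curve Diffie-Hellman exchange, in which only a reader holding the matching private key derives the same secret the tag computes from the reader's public key. The sketch below uses the Python cryptography package purely for illustration, not the chip's implementation.

```python
# Generic software illustration (not the MIT chip's implementation): in an
# elliptic-curve Diffie-Hellman exchange, the tag combines its private key with
# the reader's public key, and only a reader holding the matching private key
# derives the same shared secret, so other parties cannot identify the tag.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

tag_private = ec.generate_private_key(ec.SECP256R1())
reader_private = ec.generate_private_key(ec.SECP256R1())

# Tag side: its own private key plus the reader's public key.
tag_shared = tag_private.exchange(ec.ECDH(), reader_private.public_key())
# Reader side: its own private key plus the tag's public key.
reader_shared = reader_private.exchange(ec.ECDH(), tag_private.public_key())

def derive_key(shared_secret):
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"tag-auth-demo").derive(shared_secret)

assert derive_key(tag_shared) == derive_key(reader_shared)
print("tag and reader derived the same session key")
```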

Optimizing the cryptographic code and hardware lets the scheme run on an energy-efficient and small processor, Yazicigil says. “It’s always a tradeoff,” she says. “If you tolerate a higher-power budget and larger size, you can include cryptography. But the challenge is having security in such a small tag with a low-power budget.”

Pushing the signal range limits

Currently, the signal range sits around 5 centimeters, which is considered a far-field range — and allows for convenient use of a portable tag scanner. Next, the researchers hope to “push the limits” of the range even further, Ibrahim says.

Eventually, they’d like many of the tags to ping one reader positioned somewhere far away in, say, a receiving room at a supply chain checkpoint. Many assets could then be verified rapidly.

“We think we can have a reader as a central hub that doesn’t have to come close to the tag, and all these chips can beam steer their signals to talk to that one reader,” Ibrahim says.

The researchers also hope to fully power the chip through the terahertz signals themselves, eliminating any need for photodiodes.

The chips are so small, easy to make, and inexpensive that they can also be embedded into larger silicon computer chips, which are especially popular targets for counterfeiting.

Lack of .GOV validation and HTTPS leaves states susceptible to voter disinformation campaigns

There’s a severe lack of U.S. government .GOV validation and HTTPS encryption among county election websites in 13 states projected to be critical in the 2020 U.S. Presidential Election, a McAfee survey reveals.


Malicious actors could establish false government websites

The survey found that as many as 83.3% of these county websites lacked .GOV validation across these states, and 88.9% and 90.0% of websites lacked such certification in Iowa and New Hampshire respectively.

Such shortcomings could make it possible for malicious actors to establish false government websites and use them to spread false election information that could influence voter behavior and even impact final election results.

“Without a governing body validating whether websites truly belong to the government entities they claim, it’s possible to spoof legitimate government sites with fraudulent ones,” said Steve Grobman, McAfee Senior Vice President and CTO.

“An adversary can use fake election websites for misinformation and voter suppression by targeting specific voters in swing states with misleading information on candidates, or inaccurate information on the voting process such as poll location and times.

“In this way, this malicious actor could impact election results without ever physically or digitally interacting with voting machines or systems.”

Lack of governing authority preventing .COM, .NET, .ORG, and .US domain names purchase

Government entities purchasing .GOV web domains have submitted evidence to the U.S. government that they truly are the legitimate local, county, or state governments they claimed to be.

Websites using .COM, .NET, .ORG, and .US domain names can be purchased without such validation, meaning that there is no governing authority preventing malicious parties from using these names to set up and promote any number of fraudulent web domains mimicking legitimate county government domains.

The HTTPS encryption measure assures citizens that any voter registration information shared with the site is encrypted, giving them greater confidence in the entity with which they are sharing that information.

Websites lacking .GOV and encryption cannot assure voters seeking election information that they are visiting legitimate county and county election websites, leaving malicious actors an opening to set up disinformation schemes.
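Both properties McAfee measured can be checked programmatically. A minimal sketch, assuming a hand-picked list of placeholder domains, is shown below: flag a site whose hostname is not under .gov or that does not answer over HTTPS.

```python
# Minimal sketch of the two checks discussed above: is the hostname under the
# .gov top-level domain, and does the site answer over HTTPS? The domains are
# placeholders, not sites from the McAfee survey.
import requests
from urllib.parse import urlparse

def check_site(url):
    host = urlparse(url).hostname or ""
    is_gov = host == "gov" or host.endswith(".gov")
    try:
        requests.get("https://" + host, timeout=10)   # verifies the TLS certificate by default
        has_https = True
    except requests.RequestException:
        has_https = False
    return {"domain": host, "dot_gov": is_gov, "https": has_https}

for site in ["https://www.example-county.us", "https://www.examplecounty.gov"]:  # placeholders
    print(check_site(site))
```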

“In many cases, these websites have been set up to provide a strong user experience versus a focus on the implications that they could be spoofed to exploit the communities they serve,” Grobman continued.

“Malicious actors can pass off fake election websites and mislead large numbers of voters before detection by government organizations. A campaign close to election day could confuse voters and prevent votes from being cast, resulting in missing votes or overall loss of confidence in the democratic system.”

State counties lacking .GOV validation

Of the 1,117 counties in the survey group, 83.3% of their websites lack .GOV validation. Minnesota ranked the lowest among the surveyed states in terms of .GOV website validation with 95.4% of counties lacking U.S. government certification.

Other states severely lacking in .GOV coverage included Texas (94.9%), New Hampshire (90.0%), Michigan (89.2%), Iowa (88.9%), Nevada (87.5%), and Pennsylvania (83.6%).

Arizona had the highest percentage of main county websites validated by .GOV with 66.7% coverage, but even this percentage suggests that a third of the Grand Canyon State’s county websites are unvalidated and that hundreds of thousands of voters could still be subjected to disinformation schemes.

State counties lacking HTTPS protection

The survey found that 46.6% of county websites lack HTTPS encryption. Texas ranked the lowest in terms of encryption with 77.2% of its county websites failing to protect citizens visiting these web properties. Other states with counties lacking in encryption included Pennsylvania (46.3%), Minnesota (42.5%), and Georgia (38.4%).

Assessment of Iowa and New Hampshire

In Iowa, 88.9% of county websites lack .GOV validation, and as many as 29.3% lack HTTPS encryption. Ninety percent of New Hampshire’s county websites lack .GOV validation, and as many as 30% of the Granite State’s counties lack encryption.

Inconsistent naming standards

The research found that some states attempted to establish standard naming conventions, such as www.co.[county name].[two-letter state abbreviation].us. Unfortunately, these formats were followed so inconsistently that a voter seeking election information from her county website cannot be confident that a web domain following such a convention is indeed a legitimate site.
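
To illustrate why the convention offers little assurance, the short sketch below (hypothetical hostnames, apart from www.votedenton.com, which the survey cites as a legitimate county domain) shows that checking the pattern is trivial, and that a spoofed domain can match it just as easily as a legitimate one.

```python
# Minimal sketch: test whether a hostname follows the loose
# www.co.[county].[state].us convention described above.
# Matching the pattern proves nothing about legitimacy - a lookalike
# matches just as easily, and a legitimate vanity domain may not match at all.
import re

CO_US_PATTERN = re.compile(r"^www\.co\.[a-z0-9-]+\.[a-z]{2}\.us$")

HOSTS = [
    "www.co.example.tx.us",   # hypothetical county site following the convention
    "www.co.examp1e.tx.us",   # hypothetical lookalike registered by an attacker
    "www.votedenton.com",     # legitimate vanity domain cited in the survey
]

for host in HOSTS:
    print(host, "matches convention:", bool(CO_US_PATTERN.match(host)))
```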

Easy-to-remember naming formats

The research found 103 cases in which counties set up user-friendly, easy-to-remember domain names to make their election information easier to find and access for the broadest possible audience of citizens.

Examples include www.votedenton.com, www.votestanlycounty.com, www.carrollcountyohioelections.gov, www.voteseminole.org, and www.worthelections.com.

While 93 of these counties (90.2%) protected voters visiting these sites with encryption, only two validated these special domains and websites with .GOV. This suggests that malicious parties could easily set up numerous websites with similarly named domains to spoof these legitimate sites.

Strategies for transitioning to .GOV

While only 19.3% of Ohio’s 88 county main websites have .GOV validation, the state leads McAfee’s survey with 75% of county election websites and webpages validated by .GOV certification. This leadership position appears to be the result of a state-led initiative to transition county election-related content to .GOV validated web properties.

A majority of counties have subsequently transitioned their main county websites to .GOV domains, their election-specific websites to .GOV domains, or their election-specific webpages to Ohio’s own .GOV-validated ohio.gov domain.

Such a .GOV transition strategy constitutes an interim solution until more comprehensive efforts are made at the state and federal government level through initiatives such as The DOTGOV Act of 2020. This legislation would require the Department of Homeland Security (DHS) to support .GOV adoption for local governments with technical guidance and financial support.

“Ohio has made a commendable effort to lead in driving election websites to .GOV, either directly or by using the state run ohio.gov domain,” said Grobman.

“While main county websites still largely lack .GOV validation, Ohio does provide a mechanism for voters to quickly assess if the main election website is real or potentially fake. Other states should consider such interim strategies until all county and local websites with election functions can be fully transitioned to .GOV.”

2020: A year of deepfakes and deep deception

Over the past year, deepfakes (realistic yet fake or manipulated audio and video created with machine learning models) started making headlines as a major emerging cyber threat. The first examples of deepfakes seen by the general public were mainly amateur videos created using free deepfake tools, typically of celebrities’ faces superimposed into pornographic videos.

Even though these videos were of fairly low quality and could be reasonably distinguished as illegitimate, people understood the potential impact this new technology could have on our ability to separate fact from fiction. This is especially of concern in the world of politics, where deepfake technology can be weaponized against political figures or parties to manipulate public perception and sway elections or even the stock market.

A few years ago, deepfake technology was limited to nation-states with the resources and advanced technology needed to develop such a powerful tool. Now, because deepfake toolkits are freely available and easy to learn, anyone with internet access, time, and motive can churn out deepfake videos in real time and flood social media channels with fake content.

Also, as the toolkits become smarter, they require less material to work from to generate fake content. The earlier generation of tools required hours of video and audio (big data sets) for the machine to analyze and then manipulate. This meant people in the spotlight, such as politicians, celebrities, high-profile CEOs or anyone with a large web presence, had a higher chance of being spoofed. Now, a video can be fabricated from a single photo. In a future where all it takes is your Facebook profile image and an audio soundbite from an Instagram story, everybody becomes a target.

The reality of non-reality

Deepfakes are so powerful because they subvert a basic human understanding of reality: if you see it and hear it, it must be real. Deepfakes untether truth from reality. They also elicit an emotional response. If you see something upsetting and then find out it was fake, you have still had a negative emotional reaction and formed subconscious associations between what you saw and how you feel.

This October, Governor Gavin Newsom signed California’s AB 730, known as the “Anti-Deepfake Bill,” into law with the intent to quell the spread of malicious deepfakes before the 2020 election. While a laudable effort, the law itself falls flat. It places an artificial timeline that only applies to deepfake content distributed with “actual malice” within 60 days of an election. It exempts distribution platforms from the responsibility to monitor and remove deepfake content and instead relies on producers of the videos to self-identify and claim ownership, and the burden of proof for “actual malice” will not be a clear-cut process.

This law was likely not designed with enforcement as its primary purpose; more likely it serves as a first step by lawmakers to show they understand that deepfakes are a serious threat to democracy and that this battle is just beginning. Ideally, this law will influence and inform other state and federal efforts and serve as a starting template to build upon with more effective and enforceable legislation.

Deepfake technology as a business threat in 2020

To date, most of the discussion around deepfake technology has been centered around its potential for misinformation campaigns and mass manipulation fueled through social media, especially in the realm of politics. 2020 will be the year we start to see deepfakes become a real threat to the enterprise, one that cyber defense teams are not yet equipped to handle.

Spearphishing targets high-level employees, typically to trick them into completing a manual task such as paying a fake invoice, sending physical documents, or manually resetting a user’s credentials for the cybercriminal. These attacks tend to be more difficult to detect from a technology perspective, as the email doesn’t contain any suspicious links or attachments, and they are commonly used in conjunction with a BEC attack (when hackers gain control of an employee’s email account, allowing them to send emails from legitimate addresses). According to the FBI, BEC attacks have cost organizations worldwide more than $26 billion over the past three years.
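
When a spearphishing mail carries no links or attachments, one of the few automatable signals is a mismatch between the display name and the sending domain. The sketch below (hypothetical executive names and corporate domain) flags mail that claims to come from an executive but originates outside the corporate domain. Note that it would not catch a true BEC compromise, where the attacker sends from the legitimate account itself.

```python
# Sketch: display-name / sender-domain mismatch check, one of the few
# signals available when a spearphishing mail has no links or attachments.
# The executive list and corporate domain are hypothetical.
from email.utils import parseaddr

CORPORATE_DOMAIN = "examplecorp.com"
EXECUTIVE_NAMES = {"jane doe", "john smith"}  # display names attackers like to spoof

def looks_like_exec_spoof(from_header: str) -> bool:
    """Flag mail whose display name is an executive but whose address is external."""
    name, addr = parseaddr(from_header)
    domain = addr.rsplit("@", 1)[-1].lower() if "@" in addr else ""
    return name.strip().lower() in EXECUTIVE_NAMES and domain != CORPORATE_DOMAIN

print(looks_like_exec_spoof('"Jane Doe" <jane.doe@examplecorp.com>'))    # False
print(looks_like_exec_spoof('"Jane Doe" <ceo.office@freemail.example>')) # True
```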

Deepfakes have the ability to supercharge these attacks. Imagine receiving an email from your company’s CEO asking you to carry out some financial action, then a follow-up text message from the CEO’s mobile number, and finally a voicemail in the CEO’s voice, addressing you by name and referencing previous conversations you’ve had with them.

There comes a point where the attack breaks the truth barrier and it makes more sense to accept the request as real and authentic than to consider the possibility that it’s fake. Eventually, as deepfake technology advances even further, it’s easy to imagine a scenario where you are on a video call with what you think is your CEO but is in fact a deepfake video being created in real time. Earlier this year a CEO was deceived by an AI-generated voice into transferring $243,000 to a bank account he believed belonged to a company supplier.

Currently, the security industry has no appliances, email filters, or other technology to defend against deepfakes. There is progress being made, however. For example, Facebook, Microsoft, and university researchers launched the Deepfake Detection Challenge as a rallying cry to jumpstart the development of open source deepfake detection tools. The Defense Advanced Research Projects Agency (DARPA) announced the Semantic Forensics (SemaFor) program, which aims to develop semantic forensics as an additional method of defense alongside the statistical detection techniques used in the past.

The only remedy that currently exists is to educate users about these new types of attacks and to stay alert for any request or behavior that seems out of the ordinary, no matter how small. Trust is no longer a luxury we can afford.

Cybercriminals using fake job listings to steal money, info from applicants

Be extra careful when looking for a job online, the Internet Crime Complaint Center (IC3) warns: cybercriminals are using fake job listings to trick applicants into sharing their personal and financial information, as well as into sending them substantial sums of money.

“While hiring scams have been around for many years, cyber criminals’ emerging use of spoofed websites to harvest PII and steal money shows an increased level of complexity. Criminals often lend credibility to their scheme by advertising alongside legitimate employers and job placement firms, enabling them to target victims of all skill and income levels,” they noted.

A tech-savvy take on consumer confidence schemes

Individuals targeted by these cyber crooks have a lot to lose. According to the advisory, victims end up some $3,000 out of pocket on average, in addition to having their PII and payment card information compromised.

“The PII can be used for any number of nefarious purposes, including taking over the victims’ accounts, opening new financial accounts, or using the victims’ identity for another deception scam (such as obtaining fake driver’s licenses or passports),” the IC3 explained. Ultimately, this could also end up damaging their credit scores.

“What’s interesting is that this is just a tech-savvy take on a typical consumer confidence scheme,” says Bob Jones, senior advisor with Shared Assessments.

“The players in any scheme are the con artist and the mark. The con artist concocts a story that sounds real enough to cause the mark to believe it. The suspension of disbelief resulting from the mark’s confidence in the story leads to a successful scam. The more sophisticated the potential mark population, the more elaborate the scheme has to be.”

How to spot fake job listings and avoid becoming a victim

The best defense against becoming a victim is healthy skepticism, Jones says.

IC3 advises job seekers to always conduct a web search of the hiring company using the company name only.

“Results that return multiple websites for the same company (abccompany.com and abccompanyllc.com) may indicate fraudulent job listings,” they explained.

In addition, they should not share their PII, Social Security number, or payment information before getting hired. They should also confirm that the person they are interviewing with (online) is actually who they say they are and works at the firm.

Indicators that a job listing may be fake include: the listing appearing on job boards but not on the company’s website; recruiters or hiring managers having no profile on the job board, or profiles that do not fit their roles; and contact coming through non-company email domains and teleconference applications that use email addresses instead of phone numbers.
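
Two of the domain-related indicators above lend themselves to a quick automated check. The sketch below (hypothetical company and recruiter values) compares the recruiter’s email domain with the employer’s official domain and flags near-miss lookalikes of the abccompany.com / abccompanyllc.com variety mentioned by the IC3.

```python
# Sketch: two simple red-flag checks drawn from the indicators above.
# The company and recruiter values are hypothetical examples.
from difflib import SequenceMatcher

OFFICIAL_DOMAIN = "abccompany.com"        # the employer's real website domain
RECRUITER_EMAIL = "hr@abccompanyllc.com"  # address used in the job offer

def email_domain(addr: str) -> str:
    """Return the domain part of an email address, lowercased."""
    return addr.rsplit("@", 1)[-1].lower()

def looks_like(domain: str, official: str, threshold: float = 0.8) -> bool:
    """Flag near-miss domains that resemble, but do not equal, the official one."""
    ratio = SequenceMatcher(None, domain, official).ratio()
    return domain != official and ratio >= threshold

recruiter = email_domain(RECRUITER_EMAIL)
if recruiter != OFFICIAL_DOMAIN:
    print(f"Warning: recruiter mail domain {recruiter!r} does not match {OFFICIAL_DOMAIN!r}")
if looks_like(recruiter, OFFICIAL_DOMAIN):
    print("Warning: the domain is a close lookalike - a common sign of a spoofed employer")
```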

“While this is a consumer con, it shares the same elements as those deployed against enterprises in phishing schemes,” Jones notes.

“Unsuspecting employees click on links attached to emails they think are from their boss or colleague, thus infecting the network. Firms that use fake phishing emails as an awareness training tool to vaccinate their employees with a healthy dose of skepticism also perform a public service, because those employees are all consumers.”