ITSecurity.org

Technology Security Controls


Study shows which messengers leak your data, drain your battery, and more

October 26, 2020 by ITSecurity.Org Ltd

[Image: stock photo of a man using a smartphone. Credit: Getty Images]

Link previews are a ubiquitous feature found in just about every chat and messaging app, and with good reason. They make online conversations easier by providing images and text associated with the file that’s being linked.

Unfortunately, they can also leak our sensitive data, consume our limited bandwidth, drain our batteries, and, in one case, expose links in chats that are supposed to be end-to-end encrypted. Among the worst offenders, according to research published on Monday, were messengers from Facebook, Instagram, LinkedIn, and Line. More about that shortly. First a brief discussion of previews.

When a sender includes a link in a message, the app displays it in the conversation along with text (usually a headline) and an image pulled from the linked page. It usually looks something like this:

[Image: a typical link preview, with a thumbnail image and headline]

For this to happen, the app itself (or a proxy designated by the app) has to visit the link, open the file there, and survey what's in it. This can expose users to attacks. The most severe are those that install malware. Other attacks force the app to download files so big they crash it, drain the battery, or burn through limited bandwidth. And if the link leads to private material, say a tax return posted to a private OneDrive or Dropbox account, the app's server has an opportunity to view it and store it indefinitely.
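
The "survey what's in it" step usually boils down to fetching the page and pulling out its `<title>` and Open Graph meta tags. A minimal sketch in Python of that parsing step (`PreviewParser` and `extract_preview` are hypothetical names, not any app's actual code; the network fetch is omitted, and a hard-coded HTML string stands in for the downloaded page):

```python
from html.parser import HTMLParser

class PreviewParser(HTMLParser):
    """Collects what a messenger needs to render a link preview:
    the page <title> plus Open Graph title/image meta tags."""
    def __init__(self):
        super().__init__()
        self.preview = {}
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "title":
            self._in_title = True
        elif tag == "meta" and attrs.get("property") in ("og:title", "og:image"):
            self.preview[attrs["property"]] = attrs.get("content", "")

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title and data.strip():
            self.preview.setdefault("title", data.strip())

def extract_preview(html: str) -> dict:
    """Parse fetched HTML and return the fields used to build a preview."""
    parser = PreviewParser()
    parser.feed(html)
    return parser.preview

# The HTML a preview server might fetch from a private document link.
sample = """<html><head>
<title>Tax return 2020</title>
<meta property="og:title" content="Tax return 2020">
<meta property="og:image" content="https://example.com/thumb.png">
</head><body></body></html>"""

print(extract_preview(sample))
```

Note that even this minimal step means some server has already downloaded and read the page, which is exactly the privacy exposure the researchers flag.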

The researchers behind Monday’s report, Talal Haj Bakry and Tommy Mysk, found that Facebook Messenger and Instagram were the worst offenders. As the chart below shows, both apps download and copy a linked file in its entirety—even if it’s gigabytes in size. Again, this may be a concern if the file is something the users want to keep private.

Link Previews: Instagram servers download any link sent in Direct Messages even if it’s 2.6GB.

It’s also problematic because the apps can consume vast amounts of bandwidth and battery reserves. Both apps also run any JavaScript contained in the link. That’s a problem because users have no way of vetting the security of JavaScript and can’t expect messengers to have the same exploit protections modern browsers have.

Link Previews: How hackers can run any JavaScript code on Instagram servers.

Haj Bakry and Mysk reported their findings to Facebook, and the company said that both apps work as intended. LinkedIn performed only slightly better. Its only difference was that, rather than copying files of any size, it copied only the first 50 megabytes.

Meanwhile, when the Line app opens an encrypted message and finds a link, it appears to send the link to the Line server to generate a preview. “We believe that this defeats the purpose of end-to-end encryption, since LINE servers know all about the links that are being sent through the app, and who’s sharing which links to whom,” Haj Bakry and Mysk wrote.

Discord, Google Hangouts, Slack, Twitter, and Zoom also copy files, but they cap the amount of data at anywhere from 15MB to 50MB. The chart below provides a comparison of each app in the study.
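
A byte cap like the ones these apps apply can be implemented by streaming the response and stopping once a budget is exhausted. A minimal sketch with hypothetical names; an in-memory stream stands in for the network response:

```python
import io

def read_capped(stream, cap_bytes, chunk_size=8192):
    """Read at most cap_bytes from a file-like response stream,
    chunk by chunk, and discard whatever follows."""
    buf = bytearray()
    while len(buf) < cap_bytes:
        chunk = stream.read(min(chunk_size, cap_bytes - len(buf)))
        if not chunk:  # stream ended before the cap was reached
            break
        buf.extend(chunk)
    return bytes(buf)

# A fake 100 KB "linked file"; cap the preview fetch at 50 KB.
fake_response = io.BytesIO(b"\x00" * (100 * 1024))
data = read_capped(fake_response, cap_bytes=50 * 1024)
print(len(data))  # 51200
```

Capping the read this way bounds bandwidth and memory per link, but it does not address the other issues in the study: the server still fetches, and can retain, the first portion of whatever the link points to.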

[Chart: per-app comparison of link-preview behavior. Credit: Talal Haj Bakry and Tommy Mysk]

All in all, the study is good news because it shows that most messaging apps handle previews sensibly. For instance, Signal, Threema, TikTok, and WeChat all give users the option of receiving no link preview at all. For truly sensitive messages and users who want as much privacy as possible, this is the best setting. Even when previews are enabled, these apps use relatively safe means to render them.

Still, Monday’s post is a good reminder that private messages aren’t always, well, private.

“Whenever you’re building a new feature, always keep in mind what sort of privacy and security implications it may have, especially if this feature is going to be used by thousands or even millions of people around the world,” the researchers wrote. “Link previews are a nice feature that users generally benefit from, but here we’ve showcased the wide range of problems this feature can have when privacy and security concerns aren’t carefully considered.”

Filed Under: Biz & IT, Facebook, Instagram, instant message, IT Security, Messenger, Policy, Privacy, Security

Social media platforms leave 95% of reported fake accounts up, study finds

December 6, 2019 by admin

[Image: One hundred cardboard cutouts of Facebook founder and CEO Mark Zuckerberg stand outside the US Capitol in Washington, DC, April 10, 2018. Credit: Saul Loeb | AFP | Getty Images]

It’s no secret that every major social media platform is chock-full of bad actors, fake accounts, and bots. The big companies continually pledge to do a better job weeding out organized networks of fake accounts, but a new report confirms what many of us have long suspected: they’re pretty terrible at doing so.

The report comes this week from researchers with the NATO Strategic Communication Centre of Excellence (StratCom). Over a four-month period between May and August of this year, the research team ran an experiment to see just how easy it is to buy your way into a network of fake accounts and how hard it is to get social media platforms to do anything about it.

The research team spent €300 (about $332) to purchase engagement on Facebook, Instagram, Twitter, and YouTube, the report (PDF) explains. That sum bought 3,520 comments, 25,750 likes, 20,000 views, and 5,100 followers. They then used those interactions to work backward to about 19,000 inauthentic accounts that were used for social media manipulation purposes.

About a month after buying all that engagement, the research team looked at the status of all those fake accounts and found that about 80 percent were still active. So they reported a sample selection of those accounts to the platforms as fraudulent. Then came the most damning statistic: three weeks after being reported as fake, 95 percent of the fake accounts were still active.

“Based on this experiment and several other studies we have conducted over the last two years, we assess that Facebook, Instagram, Twitter, and YouTube are still failing to adequately counter inauthentic behavior on their platforms,” the researchers concluded. “Self-regulation is not working.”

Too big to govern

The social media platforms are fighting a distinctly uphill battle. The scale of Facebook’s challenge, in particular, is enormous. The company boasts 2.2 billion daily users of its combined platforms. Broken down by platform, the original big blue Facebook app has about 2.45 billion monthly active users, and Instagram has more than one billion.

Facebook frequently posts status updates about "removing coordinated inauthentic behavior" from its services. Each of those updates, however, tends to snag between a few dozen and a few hundred accounts, pages, and groups, usually sponsored by foreign actors. That's barely a drop in the bucket compared even to the 19,000 fake accounts this one study uncovered from a single €300 outlay, let alone the vast ocean of other fake accounts out there in the world.

The issue, however, is both serious and pressing. A majority of the accounts found in this study were engaged in commercial behavior rather than political troublemaking. But attempted foreign interference is all but guaranteed in both the crucial UK national election this month and the high-stakes US federal election next year.

The Senate Intelligence Committee’s report (PDF) on social media interference in the 2016 US election is expansive and thorough. The committee determined Russia’s Internet Research Agency (IRA) used social media to “conduct an information warfare campaign designed to spread disinformation and societal division in the United States,” including targeted ads, fake news articles, and other tactics. The IRA used and uses several different platforms, the committee found, but its primary vectors are Facebook and Instagram.

Facebook has promised to crack down hard on coordinated inauthentic behavior heading into the 2020 US election, but its challenges with content moderation are by now legendary. Working conditions for the company's legions of contract content moderators are, as repeatedly reported, terrible, and it's hard to imagine the number of humans you'd need to review the trillions of pieces of content posted every day. Using software tools to recognize and block inauthentic actors is clearly the only way to catch such activity at meaningful scale, but the development of those tools is also still a work in progress.

Filed Under: Bots, Facebook, fake accounts, Instagram, IT Security, Policy, social media, Twitter, Youtube