Malware may trick biologists into generating dangerous toxins in their labs

An end-to-end cyber-biological attack, in which unwitting biologists may be tricked into generating dangerous toxins in their labs, has been discovered by Ben-Gurion University of the Negev researchers.

Malware could replace physical contact

According to a paper, it is currently believed that a criminal needs to have physical contact with a dangerous substance to produce and deliver it. However, malware could easily replace a short sub-string of a DNA sequence on a bioengineer’s computer so that they unintentionally create a toxin-producing sequence.

“To regulate both intentional and unintentional generation of dangerous substances, most synthetic gene providers screen DNA orders, which is currently the most effective line of defense against such attacks,” says Rami Puzis, head of the BGU Complex Networks Analysis Lab, a member of the Department of Software and Information Systems Engineering and Cyber@BGU. In 2020, California became the first state to introduce gene purchase regulation legislation.

“However, outside the state, bioterrorists can buy dangerous DNA from companies that do not screen the orders,” Puzis says. “Unfortunately, the screening guidelines have not been adapted to reflect recent developments in synthetic biology and cyberwarfare.”

A weakness in the U.S. Department of Health and Human Services (HHS) guidance for DNA providers allows screening protocols to be circumvented using a generic obfuscation procedure, which makes it difficult for the screening software to detect the toxin-producing DNA.

“Using this technique, our experiments revealed that 16 out of 50 obfuscated DNA samples were not detected when screened according to the ‘best-match’ HHS guidelines,” Puzis says.

The synthetic DNA supply chain needs hardening

The researchers also found that accessibility and automation of the synthetic gene engineering workflow, combined with insufficient cybersecurity controls, allow malware to interfere with biological processes within the victim’s lab, closing the loop with the possibility of an exploit written into a DNA molecule.

The DNA injection attack demonstrates a significant new threat of malicious code altering biological processes. Although simpler attacks that may harm biological experiments exist, we’ve chosen to demonstrate a scenario that makes use of multiple weaknesses at three levels of the bioengineering workflow: software, biosecurity screening, and biological protocols. This scenario highlights the opportunities for applying cybersecurity know-how in new contexts such as biosecurity and gene coding.

“This attack scenario underscores the need to harden the synthetic DNA supply chain with protections against cyber-biological threats,” Puzis says.

“To address these threats, we propose an improved screening algorithm that takes into account in vivo gene editing. We hope this paper sets the stage for robust, adversary-resilient DNA sequence screening and cybersecurity-hardened synthetic gene production services when biosecurity screening will be enforced by local regulations worldwide.”

Using drones to improve 5G network security

The introduction of 5G will change the way we communicate, multiply the capacity of the information highways, and allow everyday objects to connect to each other in real time.

Its deployment constitutes a true technological revolution, but not one without security hazards. Until 5G technology is fully rolled out, several challenges remain to be resolved, including possible eavesdropping, interference and identity theft.

Unmanned Aerial Vehicles (UAVs), also known as drones, are emerging as enablers for many applications and services, such as precision agriculture and search and rescue, and, in the field of communications, for temporary network deployment, coverage extension and security.

Giovanni Geraci, a researcher with the Department of Information and Communication Technologies (DTIC) at UPF, points out in a recent study: “On the one hand, it is important to protect the network when it is disturbed by a drone that has connected and generates interference. On the other, in the future, the same drones could assist in the prevention, detection, and recovery of attacks on 5G networks”.

The study poses two different cases

First, the use of UAVs to prevent possible attacks, still in its early stages of research; and second, how to protect the network when it is disturbed by a drone, a much more realistic scenario, as Geraci explains: “A drone could be the source of interference to users. This can happen if the drone is very high up and when its transmissions travel a long distance because there are no obstacles in the way, such as buildings”.

The integration of UAV devices into future mobile networks may expose those networks to potential UAV-based attacks. UAVs with cellular connections may also experience radio propagation characteristics quite different from those experienced by terrestrial users.

Once a UAV flies well above the base stations, it can create interference or even support rogue applications, such as a mobile phone connected to a UAV without authorization.

Using drones to improve 5G security

Based on the premise that 5G terrestrial networks will never be 100% secure, the authors of this study also suggest using UAVs to improve the security of 5G-and-beyond wireless networks.

“In particular, in our research we have considered jamming, identity theft, or ‘spoofing’, eavesdropping, and the mitigation mechanisms that are enabled by the versatility of UAVs”, the researchers explain.

The study shows several areas in which the diversity and 3D mobility of UAVs can effectively improve the security of advanced wireless networks against eavesdropping, interference and ‘spoofing’, before they occur or for rapid detection and recovery.

“The article raises open questions and research directions, including the need for experimental evaluation and a research platform for prototyping and testing the proposed technologies”, Geraci explains.

Researchers bring deep learning to IoT devices

Deep learning is everywhere. This branch of artificial intelligence curates your social media and serves your Google search results. Soon, deep learning could also check your vitals or set your thermostat.

MIT researchers have developed a system that could bring deep learning neural networks to new – and much smaller – places, like the tiny computer chips in wearable medical devices, household appliances, and the 250 billion other objects that constitute the IoT.

The system, called MCUNet, designs compact neural networks that deliver unprecedented speed and accuracy for deep learning on IoT devices, despite limited memory and processing power. The technology could facilitate the expansion of the IoT universe while saving energy and improving data security.

The lead author is Ji Lin, a PhD student in Song Han’s lab in MIT’s Department of Electrical Engineering and Computer Science (EECS).

The Internet of Things

The IoT was born in the early 1980s. Grad students at Carnegie Mellon University, including Mike Kazar ’78, connected a Coca-Cola machine to the internet. The group’s motivation was simple: laziness.

They wanted to use their computers to confirm the machine was stocked before trekking from their office to make a purchase. It was the world’s first internet-connected appliance. “This was pretty much treated as the punchline of a joke,” says Kazar, now a Microsoft engineer. “No one expected billions of devices on the internet.”

Since that Coke machine, everyday objects have become increasingly networked into the growing IoT. That includes everything from wearable heart monitors to smart fridges that tell you when you’re low on milk.

IoT devices often run on microcontrollers – simple computer chips with no operating system, minimal processing power, and less than one thousandth of the memory of a typical smartphone. So pattern-recognition tasks like deep learning are difficult to run locally on IoT devices. For complex analysis, IoT-collected data is often sent to the cloud, making it vulnerable to hacking.

“How do we deploy neural nets directly on these tiny devices? It’s a new research area that’s getting very hot,” says Han. “Companies like Google and ARM are all working in this direction.” Han is too.

With MCUNet, Han’s group codesigned two components needed for “tiny deep learning” – the operation of neural networks on microcontrollers. One component is TinyEngine, an inference engine that directs resource management, akin to an operating system. TinyEngine is optimized to run a particular neural network structure, which is selected by MCUNet’s other component: TinyNAS, a neural architecture search algorithm.

System-algorithm codesign

Designing a deep network for microcontrollers isn’t easy. Existing neural architecture search techniques start with a big pool of possible network structures based on a predefined template, then they gradually find the one with high accuracy and low cost. While the method works, it’s not the most efficient.

“It can work pretty well for GPUs or smartphones,” says Lin. “But it’s been difficult to directly apply these techniques to tiny microcontrollers, because they are too small.”

So Lin developed TinyNAS, a neural architecture search method that creates custom-sized networks. “We have a lot of microcontrollers that come with different power capacities and different memory sizes,” says Lin. “So we developed the algorithm [TinyNAS] to optimize the search space for different microcontrollers.”

The customized nature of TinyNAS means it can generate compact neural networks with the best possible performance for a given microcontroller – with no unnecessary parameters. “Then we deliver the final, efficient model to the microcontroller,” says Lin.
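
To make the idea concrete, here is a minimal sketch of constrained neural architecture search in the spirit of TinyNAS. It is illustrative only: the search space, the cost proxies and the memory budgets are assumptions for the example, not MIT’s implementation, and the accuracy proxy stands in for actually training each candidate.

```python
import random

FLASH_BUDGET = 1_000_000  # bytes of flash on the target MCU (assumed)
RAM_BUDGET = 256_000      # bytes of SRAM for peak activations (assumed)

def sample_arch():
    """Draw a random candidate from a MobileNet-like search space."""
    return {"width_mult": random.choice([0.35, 0.5, 0.75, 1.0]),
            "resolution": random.choice([96, 128, 160, 224]),
            "depth": random.choice([10, 14, 18])}

def estimate_cost(arch):
    """Crude proxies: weight storage grows with width^2 * depth,
    peak activation memory with resolution^2 * width."""
    flash = int(4e5 * arch["width_mult"] ** 2 * arch["depth"] / 14)
    ram = int(6 * arch["resolution"] ** 2 * arch["width_mult"])
    return flash, ram

def proxy_accuracy(arch):
    """Stand-in for training and evaluating the candidate network."""
    return arch["width_mult"] * arch["resolution"] * arch["depth"]

# Keep only candidates that fit the device, then pick the best survivor.
feasible = []
for _ in range(1000):
    arch = sample_arch()
    flash, ram = estimate_cost(arch)
    if flash <= FLASH_BUDGET and ram <= RAM_BUDGET:
        feasible.append(arch)

best = max(feasible, key=proxy_accuracy)
print(best, estimate_cost(best))
```

Changing the two budget constants yields a different winning architecture, which is the essence of tailoring the search space to each microcontroller.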

To run that tiny neural network, a microcontroller also needs a lean inference engine. A typical inference engine carries some dead weight – instructions for tasks it may rarely run. The extra code poses no problem for a laptop or smartphone, but it could easily overwhelm a microcontroller.

“It doesn’t have off-chip memory, and it doesn’t have a disk,” says Han. “Everything put together is just one megabyte of flash, so we have to really carefully manage such a small resource.” Cue TinyEngine.

The researchers developed their inference engine in conjunction with TinyNAS. TinyEngine generates the essential code necessary to run TinyNAS’ customized neural network. Any deadweight code is discarded, which cuts down on compile-time.

“We keep only what we need,” says Han. “And since we designed the neural network, we know exactly what we need. That’s the advantage of system-algorithm codesign.”

In the group’s tests of TinyEngine, the size of the compiled binary code was between 1.9 and five times smaller than comparable microcontroller inference engines from Google and ARM.

TinyEngine also contains innovations that reduce runtime, including in-place depth-wise convolution, which cuts peak memory usage nearly in half. After codesigning TinyNAS and TinyEngine, Han’s team put MCUNet to the test.
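
As a rough illustration of why in-place operation saves memory, the sketch below applies a depthwise convolution channel by channel and overwrites the input buffer, so the peak allocation is one extra channel rather than a full output tensor. This is a minimal numpy model of the idea, assuming 3x3 kernels and zero padding, not MIT’s C implementation.

```python
import numpy as np

def depthwise_conv_inplace(x, kernels):
    """In-place depthwise convolution (3x3 kernels, 'same' zero padding).

    x: activation buffer of shape (C, H, W), overwritten channel by channel.
    kernels: per-channel filters of shape (C, 3, 3).
    Peak extra memory is a single (H, W) scratch buffer instead of a full
    (C, H, W) output tensor.
    """
    C, H, W = x.shape
    tmp = np.empty((H, W), dtype=x.dtype)  # one-channel scratch buffer
    for c in range(C):
        padded = np.pad(x[c], 1)           # zero 'same' padding
        tmp[:] = 0
        for i in range(3):
            for j in range(3):
                tmp += kernels[c, i, j] * padded[i:i + H, j:j + W]
        x[c] = tmp                          # overwrite the input channel
    return x

x = np.random.rand(8, 16, 16).astype(np.float32)
k = np.random.rand(8, 3, 3).astype(np.float32)
depthwise_conv_inplace(x, k)               # x now holds the conv output
```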

MCUNet’s first challenge was image classification. The researchers used the ImageNet database to train the system with labeled images, then to test its ability to classify novel ones. On a commercial microcontroller they tested, MCUNet successfully classified 70.7 percent of the novel images — the previous state-of-the-art neural network and inference engine combo was just 54 percent accurate. “Even a 1 percent improvement is considered significant,” says Lin. “So this is a giant leap for microcontroller settings.”

The team found similar results in ImageNet tests of three other microcontrollers. And on both speed and accuracy, MCUNet beat the competition for audio and visual “wake-word” tasks, where a user initiates an interaction with a computer using vocal cues (think: “Hey, Siri”) or simply by entering a room. The experiments highlight MCUNet’s adaptability to numerous applications.

Huge potential

The promising test results give Han hope that MCUNet will become the new industry standard for microcontrollers. “It has huge potential,” he says.

The advance “extends the frontier of deep neural network design even farther into the computational domain of small energy-efficient microcontrollers,” says Kurt Keutzer, a computer scientist at the University of California at Berkeley, who was not involved in the work. He adds that MCUNet could “bring intelligent computer-vision capabilities to even the simplest kitchen appliances, or enable more intelligent motion sensors.”

MCUNet could also make IoT devices more secure. “A key advantage is preserving privacy,” says Han. “You don’t need to transmit the data to the cloud.”

Analyzing data locally reduces the risk of personal information being stolen — including personal health data. Han envisions smart watches with MCUNet that don’t just sense users’ heartbeat, blood pressure, and oxygen levels, but also analyze and help them understand that information.

MCUNet could also bring deep learning to IoT devices in vehicles and rural areas with limited internet access.

Plus, MCUNet’s slim computing footprint translates into a slim carbon footprint. “Our big dream is for green AI,” says Han, adding that training a large neural network can burn carbon equivalent to the lifetime emissions of five cars. MCUNet on a microcontroller would require a small fraction of that energy.

“Our end goal is to enable efficient, tiny AI with less computational resources, less human resources, and less data,” says Han.

Even the world’s freest countries aren’t safe from internet censorship

The largest collection of public internet censorship data ever compiled shows that even citizens of what are considered the world’s freest countries aren’t safe from internet censorship.

A team from the University of Michigan used its own Censored Planet tool, an automated censorship tracking system launched in 2018, to collect more than 21 billion measurements over 20 months in 221 countries.

“We hope that the continued publication of Censored Planet data will enable researchers to continuously monitor the deployment of network interference technologies, track policy changes in censoring nations, and better understand the targets of interference,” said Roya Ensafi, U-M assistant professor of electrical engineering and computer science who led the development of the tool.

Poland blocked human rights sites, India same-sex dating sites

Ensafi’s team found that censorship is increasing in 103 of the countries studied, including unexpected places like Norway, Japan, Italy, India, Israel and Poland. These countries, the team notes, are rated some of the world’s freest by Freedom House, a nonprofit that advocates for democracy and human rights.

They were among nine countries where Censored Planet found significant, previously undetected censorship events between August 2018 and April 2020. They also found previously undetected events in Cameroon, Ecuador and Sudan.

While the United States saw a small uptick in blocking, mostly driven by individual companies or internet service providers filtering content, the study did not uncover widespread censorship. However, Ensafi points out that the groundwork for that has been put in place here.

“When the United States repealed net neutrality, they created an environment in which it would be easy, from a technical standpoint, for ISPs to interfere with or block internet traffic,” she said. “The architecture for greater censorship is already in place and we should all be concerned about heading down a slippery slope.”

It’s already happening abroad, the researchers found.

“What we see from our study is that no country is completely free,” said Ram Sundara Raman, U-M doctoral candidate in computer science and engineering and first author of the study. “We’re seeing that many countries start with legislation that compels ISPs to block something that’s obviously bad like child pornography or pirated content.

“But once that blocking infrastructure is in place, governments can block any websites they choose, and it’s a very opaque process. That’s why censorship measurement is crucial, particularly continuous measurements that show trends over time.”

Norway, for example (tied with Finland and Sweden as the world’s freest country, according to Freedom House), passed laws requiring ISPs to block some gambling and pornography content beginning in early 2018.

Censored Planet, however, uncovered that ISPs in Norway are imposing what the study calls “extremely aggressive” blocking across a broader range of content, including human rights websites like Human Rights Watch and online dating sites like Match.com.

Similar tactics show up in other countries, often in the wake of large political events, social unrest or new laws. News sites like The Washington Post and The Wall Street Journal, for example, were aggressively blocked in Japan when Osaka hosted the G20 international economic summit in June 2019.

News, human rights and government sites saw a censorship spike in Poland after protests in July 2019, and same-sex dating sites were aggressively blocked in India after the country repealed laws against gay sex in September 2018.

Censored Planet releases technical details for researchers, activists

The researchers say the findings show the effectiveness of Censored Planet’s approach, which turns public internet servers into automated sentries that can monitor and report when access to websites is being blocked.

Running continuously, it takes billions of automated measurements and then uses a series of tools and filters to analyze the data and tease out trends.
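
A minimal sketch of what that trend analysis might look like, under assumptions of our own (weekly aggregated failure rates per country/domain pair and a simple z-score test), rather than Censored Planet’s actual pipeline:

```python
from statistics import mean, stdev

def flag_events(weekly_failure_rates, z_threshold=3.0, min_weeks=8):
    """Flag weeks whose failure rate jumps far above the historical baseline.

    weekly_failure_rates: fractions in [0, 1] for one country/domain pair,
    oldest first. Returns (week_index, rate) pairs for anomalous weeks.
    """
    events = []
    for i in range(min_weeks, len(weekly_failure_rates)):
        baseline = weekly_failure_rates[:i]
        mu, sigma = mean(baseline), stdev(baseline)
        rate = weekly_failure_rates[i]
        if sigma > 0 and (rate - mu) / sigma > z_threshold:
            events.append((i, rate))
    return events

# A sudden jump from ~3% to 35% failed measurements gets flagged.
print(flag_events([0.02, 0.03, 0.02, 0.04, 0.03, 0.02, 0.03, 0.02, 0.35]))
```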

The study also makes public technical details about the workings of Censored Planet that Raman says will make it easier for other researchers to draw insights from the project’s data, and help activists make more informed decisions about where to focus.

“It’s very important for people who work on circumvention to know exactly what’s being censored on which network and what method is being used,” Ensafi said. “That’s data that Censored Planet can provide, and tech experts can use it to devise circumventions.”

Censored Planet’s constant, automated monitoring is a departure from traditional approaches that rely on volunteers to collect data manually from inside countries.

Manual monitoring can be dangerous, as volunteers may face reprisals from governments. Its limited scope also means that efforts are often focused on countries already known for censorship, enabling nations that are perceived as freer to fly under the radar.

While censorship efforts generally start small, Raman says they could have big implications in a world that is increasingly dependent on the internet for essential communication needs.

“We imagine the internet as a global medium where anyone can access any resource, and it’s supposed to make communication easier, especially across international borders,” he said. “We find that if this continues, that won’t be true anymore. We fear this could lead to a future where every country has a completely different view of the internet.”

Researchers break Intel SGX by creating $30 device to control CPU voltage

Researchers at the University of Birmingham have managed to break Intel SGX, a set of security functions used by Intel processors, by creating a $30 device to control CPU voltage.

Break Intel SGX

The work follows a 2019 project, in which an international team of researchers demonstrated how to break Intel’s security guarantees using software undervolting. This attack, called Plundervolt, used undervolting to induce faults and recover secrets from Intel’s secure enclaves.

Intel fixed this vulnerability in late 2019 by removing the ability to undervolt from software with microcode and BIOS updates.

Taking advantage of a separate voltage regulator chip

But now, a team in the University’s School of Computer Science has created a $30 device, called VoltPillager, to control the CPU’s voltage – thus side-stepping Intel’s fix. The attack requires physical access to the computer hardware – which is a relevant threat for SGX enclaves that are often assumed to protect against a malicious cloud operator.

The bill of materials for building VoltPillager is:

  • Teensy 4.0 Development Board: $22
  • 2 × bus driver/buffer: $1
  • 2 × SOT IC adapter: $13 for 6

How to build the VoltPillager board

This research takes advantage of the fact that a separate voltage regulator chip controls the CPU voltage. VoltPillager connects to this unprotected interface and precisely controls the voltage. The research shows that this hardware undervolting can achieve the same results as Plundervolt, and more.

Zitai Chen, a PhD student in Computer Security at the University of Birmingham, says: “This weakness allows an attacker, if they have control of the hardware, to breach SGX security. Perhaps it might now be time to rethink the threat model of SGX. Can it really protect against malicious insiders or cloud providers?”

ML tool identifies domains created to promote fake news

Academics at UCL and other institutions have collaborated to develop a machine learning tool that identifies new domains created to promote false information so that they can be stopped before fake news can be spread through social media and online channels.

To counter the proliferation of false information, it is important to move fast, before the creators begin to post and broadcast it across multiple channels.

How does it work?

Anil R. Doshi, assistant professor at the UCL School of Management, and his fellow academics set out to develop an early detection system to highlight the domains most likely to be bad actors. Details contained in the registration information, for example whether the registering party is kept private, are used to identify the sites.

Doshi commented: “Many models that predict false information use the content of articles or behaviours on social media channels to make their predictions. By the time that data is available, it may be too late. These producers are nimble and we need a way to identify them early.

“By using domain registration data, we can provide an early warning system using data that is arguably difficult for the actors to manipulate. Actors who produce false information tend to prefer remaining hidden and we use that in our model.”

By applying a machine-learning model to domain registration data, the tool was able to correctly identify 92 percent of the false information domains and 96.2 percent of the non-false information domains set up in relation to the 2016 US election before they started operations.
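
A minimal sketch of this kind of classifier, assuming illustrative registration features and synthetic data rather than the study’s actual dataset or model:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([
    rng.integers(0, 2, n),      # private (proxy) registration?
    rng.integers(0, 20, n),     # registrar ID
    rng.integers(1, 3650, n),   # registration length in days
    rng.random(n),              # entropy of the domain name
])
# Toy labels: hidden registrants with short registrations skew "bad" here.
y = ((X[:, 0] == 1) & (X[:, 2] < 400)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(f"holdout accuracy: {accuracy_score(y_te, clf.predict(X_te)):.3f}")
```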

Why should it be used?

The researchers propose that their tool be used to help regulators, platforms and policy makers apply an escalating process to suspect domains: increasing monitoring, sending warnings or imposing sanctions, and ultimately deciding whether a domain should be shut down.

The academics behind the research also call for social media companies to invest more effort and money into addressing this problem which is largely facilitated by their platforms.

Doshi continued: “Fake news promoted through social media is common in elections, and it continues to proliferate despite the somewhat limited efforts of social media companies and governments to stem the tide and defend against it. Our concern is that this is just the start of the journey.

“We need to recognise that it is only a matter of time before these tools are redeployed on a more widespread basis to target companies; indeed, there is evidence of this already happening.

“Social media companies and regulators need to be more engaged in dealing with this very real issue and corporates need to have a plan in place to quickly identify when they become the target of this type of campaign.”

The research is ongoing in recognition that the environment is constantly evolving and while the tool works well now, the bad actors will respond to it. This underscores the need for constant and ongoing innovation and research in this area.

Holiday gifts getting smarter, but creepier when it comes to privacy and security

A Hamilton Beach Smart Coffee Maker that could eavesdrop, an Amazon Halo fitness tracker that measures the tone of your voice, and a robot-building kit that puts your kid’s privacy at risk are among the 37 creepiest holiday gifts of 2020 according to Mozilla.

Researchers reviewed 136 popular connected gifts available for purchase in the United States across seven categories: toys & games; smart home; entertainment; wearables; health & exercise; pets; and home office.

They combed through privacy policies, pored over product and app features, and quizzed companies in order to answer questions like: Can this product’s camera, microphone, or GPS snoop on me? What data does the device collect and where does it go? What is the company’s known track record for protecting users’ data?

The guide includes a “Best Of” category, which singles out products that get privacy and security right, while a “Privacy Not Included” warning icon alerts consumers when a product has especially problematic privacy practices.

Meeting minimum security standards

It also identifies which products meet Mozilla’s Minimum Security Standards, such as using encryption and requiring users to change the default password if a password is needed. For the first time, Mozilla also notes which products use AI to make decisions about consumers.

“Holiday gifts are getting ‘smarter’ each year: from watches that collect more and more health data, to drones with GPS, to home security cameras connected to the cloud,” said Ashley Boyd, Mozilla’s Vice President of Advocacy.

“Unfortunately, these gifts are often getting creepier, too. Poor security standards and privacy practices can mean that your connected gift isn’t bringing joy, but rather prying eyes and security vulnerabilities.”

Boyd added: “Privacy Not Included helps consumers prioritize privacy and security when shopping. The guide also keeps companies on their toes, calling out privacy flaws and applauding privacy features.”

What are the products?

37 products were branded with a “Privacy Not Included” warning label, including: Amazon Halo, Dyson Pure Cool, Facebook Portal, Hamilton Beach Smart Coffee Maker, Livescribe Smartpens, NordicTrack T Series Treadmills, Oculus Quest 2 VR Sets, Schlage Encode Smart WiFi Deadbolt, Whistle Go Dog Trackers, Ubtech Jimu Robot Kits, Roku Streaming Sticks, and The Mirror.

22 products were awarded “Best Of” for exceptional privacy and security practices, including: Apple Homepod, Apple iPad, Apple TV 4K, Apple Watch 6, Apple Air Pods & Air Pods Pro, Arlo Security Cams, Arlo Video Doorbell, Eufy Security Cams, Eufy Video Doorbell, iRobot Roomba i Series, iRobot Roomba s Series, Garmin Forerunner Series, Garmin Venu watch, Garmin Index Smart Scale, Garmin Vivo Series, Jabra Elite Active 85T, Kano Coding Kits, Withings Thermo, Withings Body Smart Scales, Petcube Play 2 & Bites 2, Sonos SL One, and Findster Duo+ GPS pet tracker.

A handful of leading brands, like Apple, Garmin, and Eufy, are excelling at improving privacy across their product lines, while other top companies, like Amazon, Huawei, and Roku, are consistently failing to protect consumers.

Apple products don’t share or sell your data. They take special care to make sure your Siri requests aren’t associated with you. And after facing backlash in 2019, Apple doesn’t automatically opt-in users to human voice review.

Eufy Security Cameras are especially trustworthy. Footage is stored locally rather than in the cloud, and is protected by military-grade encryption. Further, Eufy doesn’t sell their customer lists.

Roku is a privacy nightmare. The company tracks just about everything you do — and then shares it widely. Roku shares your personal data with advertisers and other third parties, it targets you with ads, it builds profiles about you, and more.

Amazon’s Halo Fitness Tracker is especially troubling. It’s packed full of sensors and microphones. It uses machine learning to measure the tone, energy, and positivity of your voice. And it asks you to take pictures of yourself in your underwear so it can track your body fat.

Tech companies want a monopoly on your smart products

Big companies like Amazon and Google are offering a family of networked devices, pushing consumers to buy into one company. For instance: Nest users now have to migrate over to a Google-only platform. Google is acquiring Fitbit.

And Amazon recently announced it’s moving into the wearable technology space. These companies realize that the more data they have on people’s lives, the more lucrative their products can be.

Products are getting creepier, even as they get more secure

Many companies — especially big ones like Google and Facebook — are improving security. But that doesn’t mean those products aren’t invasive. Smart speakers, watches, and other devices are reaching farther into our lives, monitoring our homes, bodies, and travel. And often, consumers don’t have insight or control over the data that’s collected.

Connected toys and pet products are particularly creepy. Amazon’s KidKraft Kitchen & Market is made for kids as young as three — but there’s no transparency into what data it collects. Meanwhile, devices like the Dogness iPet Robot put a mobile, internet-connected camera and microphone in your house — without using encryption.

The pandemic is reshaping some data sharing for the better. Products like the Oura Ring and Kinsa smart thermometer can share anonymized data with researchers and scientists to help track public health and coronavirus outbreaks. This is a positive development — data sharing for the public interest, not just profit.

Developing a quantum network that exchanges information across long distances by using photons

Researchers at the University of Rochester and Cornell University have taken an important step toward developing a communications network that exchanges information across long distances by using photons, massless particles of light that are key elements of quantum computing and quantum communications systems.

Each pillar serves as a location marker for a quantum state that can interact with photons. Credit: University of Rochester illustration / Michael Osadciw

The research team has designed a nanoscale node made out of magnetic and semiconducting materials that could interact with other nodes, using laser light to emit and accept photons.

The development of such a quantum network, designed to take advantage of the physical properties of light and matter characterized by quantum mechanics, promises faster and more efficient ways to communicate, compute, and detect objects and materials than the networks currently used for computing and communications.

The node consists of an array of pillars a mere 120 nanometers high. The pillars are part of a platform containing atomically thin layers of semiconductor and magnetic materials.

The array is engineered so that each pillar serves as a location marker for a quantum state that can interact with photons, and the associated photons can potentially interact with other locations across the device, as well as with similar arrays at other locations.

This potential to connect quantum nodes across a remote network capitalizes on the concept of entanglement, a phenomenon of quantum mechanics that, at its very basic level, describes how the properties of particles are connected at the subatomic level.

“This is the beginnings of having a kind of register, if you like, where different spatial locations can store information and interact with photons,” says Nick Vamivakas, professor of quantum optics and quantum physics at Rochester.

Toward ‘miniaturizing a quantum computer’

The project builds on work the Vamivakas Lab has conducted in recent years using tungsten diselenide (WSe2) in so-called Van der Waals heterostructures. That work uses layers of atomically thin materials on top of each other to create or capture single photons.

The new device uses a novel alignment of WSe2 draped over the pillars with an underlying, highly reactive layer of chromium triiodide (CrI3). Where the atomically thin, 12-micron area layers touch, the CrI3 imparts an electric charge to the WSe2, creating a “hole” alongside each of the pillars.

In quantum physics, a hole is characterized by the absence of an electron. Each positively charged hole also has a binary north/south magnetic property associated with it, so that each is also a nanomagnet.

When the device is bathed in laser light, further reactions occur, turning the nanomagnets into individual optically active spin arrays that emit and interact with photons. Whereas classical information processing deals in bits that have values of either 0 or 1, spin states can encode both 0 and 1 at the same time, expanding the possibilities for information processing.

“Being able to control hole spin orientation using ultrathin and 12-micron large CrI3 replaces the need for using external magnetic fields from gigantic magnetic coils akin to those used in MRI systems,” says lead author and graduate student Arunabh Mukherjee. “This will go a long way in miniaturizing a quantum computer based on single hole spins.”

Still to come: Entanglement at a distance?

Two major challenges confronted the researchers in creating the device.

One was creating an inert environment in which to work with the highly reactive CrI3. This was where the collaboration with Cornell University came into play.

“They have a lot of expertise with the chromium triiodide and since we were working with that for the first time, we coordinated with them on that aspect of it,” Vamivakas says. For example, fabrication of the CrI3 was done in nitrogen-filled glove boxes to avoid oxygen and moisture degradation.

The other challenge was determining just the right configuration of pillars to ensure that the holes and spin valleys associated with each pillar could be properly registered to eventually link to other nodes.

And therein lies the next major challenge: finding a way to send photons long distances through an optical fiber to other nodes, while preserving their properties of entanglement.

“We haven’t yet engineered the device to promote that kind of behavior,” Vamivakas says. “That’s down the road.”

How fake news detectors can be manipulated

Fake news detectors, which have been deployed by social media platforms like Twitter and Facebook to add warnings to misleading posts, have traditionally flagged online articles as false based on the story’s headline or content.

However, recent approaches have considered other signals, such as network features and user engagements, in addition to the story’s content to boost their accuracies.

Fake news detectors manipulated through user comments

New research from a team at Penn State’s College of Information Sciences and Technology, however, shows how these fake news detectors can be manipulated through user comments to flag true news as false and false news as true. This attack approach could give adversaries the ability to influence the detector’s assessment of the story even if they are not the story’s original author.

“Our model does not require the adversaries to modify the target article’s title or content,” explained Thai Le, lead author of the paper and doctoral student in the College of IST. “Instead, adversaries can easily use random accounts on social media to post malicious comments to either demote a real story as fake news or promote a fake story as real news.”

That is, instead of fooling the detector by attacking the story’s content or source, commenters can attack the detector itself.

The researchers developed a framework – called Malcom – to generate, optimize, and add malicious comments that were readable and relevant to the article in an effort to fool the detector.

Then, they assessed the quality of the artificially generated comments by seeing if humans could differentiate them from those generated by real users. Finally, they tested Malcom’s performance on several popular fake news detectors.

Malcom performed better than the baselines of existing models, fooling five of the leading neural-network-based fake news detectors more than 93% of the time. To the researchers’ knowledge, this is the first model to attack fake news detectors using this method.

The benefits

This approach could be appealing to attackers because they do not need to follow the traditional steps of spreading fake news, which primarily involve owning the content.

The researchers hope their work will help those charged with creating fake news detectors to develop more robust models and to strengthen methods for detecting and filtering out malicious comments, ultimately helping readers get accurate information to make informed decisions.

“Fake news has been promoted with deliberate intention to widen political divides, to undermine citizens’ confidence in public figures, and even to create confusion and doubts among communities,” the team wrote in their paper.

Added Le, “Our research illustrates that attackers can exploit this dependency on users’ engagement to fool the detection models by posting malicious comments on online articles, and it highlights the importance of having robust fake news detection models that can defend against adversarial attacks.”

Researchers open the door to new distribution methods for secret cryptographic keys

Researchers from the University of Ottawa, in collaboration with Ben-Gurion University of the Negev and Bar-Ilan University scientists, have been able to create optical framed knots in the laboratory that could potentially be applied in modern technologies.

Top view of the framed knots generated in this work

Their work opens the door to new methods of distributing secret cryptographic keys – used to encrypt and decrypt data, ensure secure communication and protect private information.

“This is fundamentally important, in particular from a topology-focused perspective, since framed knots provide a platform for topological quantum computations,” explained senior author, Professor Ebrahim Karimi, Canada Research Chair in Structured Light at the University of Ottawa.

“In addition, we used these non-trivial optical structures as information carriers and developed a security protocol for classical communication where information is encoded within these framed knots.”

The concept of framed knots

The researchers suggest a simple do-it-yourself lesson to help us better understand framed knots, those three-dimensional objects that can also be described as a surface.

“Take a narrow strip of paper and try to make a knot,” said first author Hugo Larocque, uOttawa alumnus and current PhD student at MIT.

“The resulting object is referred to as a framed knot and has very interesting and important mathematical features.”

The group tried to achieve the same result but within an optical beam, which presents a higher level of difficulty. After a few tries (and knots that looked more like knotted strings), the group came up with what they were looking for: a knotted ribbon structure that is quintessential to framed knots.

“In order to add this ribbon, our group relied on beam-shaping techniques manipulating the vectorial nature of light,” explained Hugo Larocque. “By modifying the oscillation direction of the light field along an “unframed” optical knot, we were able to assign a frame to the latter by “gluing” together the lines traced out by these oscillating fields.”

According to the researchers, structured light beams are being widely exploited for encoding and distributing information.

“So far, these applications have been limited to physical quantities which can be recognized by observing the beam at a given position,” said uOttawa Postdoctoral Fellow and co-author of this study, Dr. Alessio D’Errico.

“Our work shows that the number of twists in the ribbon orientation in conjunction with prime number factorization can be used to extract a so-called “braid representation” of the knot.”

“The structural features of these objects can be used to specify quantum information processing programs,” added Hugo Larocque. “In a situation where this program would want to be kept secret while disseminating it between various parties, one would need a means of encrypting this “braid” and later deciphering it.

“Our work addresses this issue by proposing to use our optical framed knot as an encryption object for these programs which can later be recovered by the braid extraction method that we also introduced.”

“For the first time, these complicated 3D structures have been exploited to develop new methods for the distribution of secret cryptographic keys. Moreover, there is a wide and strong interest in exploiting topological concepts in quantum computation, communication and dissipation-free electronics. Knots are described by specific topological properties too, which were not considered so far for cryptographic protocols.”

The applications

“Current technologies give us the possibility to manipulate, with high accuracy, the different features characterizing a light beam, such as intensity, phase, wavelength and polarization,” said Larocque.

“This allows us to encode and decode information with all-optical methods. Quantum and classical cryptographic protocols have been devised exploiting these different degrees of freedom.”

“Our work opens the way to the use of more complex topological structures hidden in the propagation of a laser beam for distributing secret cryptographic keys.”

“Moreover, the experimental and theoretical techniques we developed may help find new experimental approaches to topological quantum computation, which promises to surpass noise-related issues in current quantum computing technologies,” added Dr. Ebrahim Karimi.

Why are certain employees more likely to comply with information security policies than others?

Information security policies (ISP) that are not grounded in the realities of an employee’s work responsibilities and priorities expose organizations to higher risk for data breaches, according to research from Binghamton University, State University of New York.

The study’s finding that subcultures within an organization influence whether employees violate ISP has led researchers to recommend an overhaul of ISP design and implementation, and to work with employees to find ways to fit ISP compliance seamlessly into their day-to-day tasks.

“The frequency, scope and cost of data breaches have been increasing dramatically in recent years, and the majority of these cases happen because humans are the weakest link in the security chain. Non-compliance to ISP by employees is one of the important factors,” said Sumantra Sarkar, associate professor of management information systems in Binghamton University’s School of Management.

“We wanted to understand why certain employees were more likely to comply with information security policies than others in an organization.”

How subcultures influence compliance within healthcare orgs

Sarkar, with a research team, sought to determine how subcultures influence compliance, specifically within healthcare organizations.

“Every organization has a culture that is typically set by top management. But within that, you have subcultures among different professional groups in the organization,” said Sarkar. “Each of these groups is trained in a different way and is responsible for different tasks.”

Sarkar and his fellow researchers focused on ISP compliance within three subcultures found in a hospital setting – physicians, nurses and support staff.

The expansive study took years to complete, with one researcher embedding in a hospital for over two years to observe and analyze activities, as well as to conduct interviews and surveys with multiple employees.

Because patient data in a hospital is highly confidential, one area researchers focused on was the requirement for hospital employees to lock their electronic health record (EHR) workstation when not present.

“Physicians, who are constantly dealing with emergency situations, were more likely to leave a workstation unlocked. They were more worried about the immediate care of a patient than the possible risk of a data breach,” said Sarkar.

“On the opposite end, support staff rarely kept workstations unlocked when they were away, as they felt they were more likely to be punished or fired should a data breach occur.”

The conclusion

Researchers concluded that each subculture within an organization will respond differently to the organization-wide ISP, leaving organizations open to a higher possibility of data breaches.

Their recommendation: consult with each subculture while developing ISP.

“Information security professionals should have a better understanding of the day-to-day tasks of each professional group, and then find ways to seamlessly integrate ISP compliance within those job tasks,” said Sarkar. “It is critical that we find ways to redesign ISP systems and processes in order to create less friction.”

In the context of a hospital setting, Sarkar recommends touchless, proximity-based authentication mechanisms that could lock or unlock workstations when an employee approaches or leaves a workstation.
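
The control loop such a mechanism implies is simple. Below is a minimal sketch, assuming a BLE badge whose signal strength (RSSI) is sampled periodically; the threshold, grace period and lock/unlock hooks are all illustrative assumptions:

```python
def monitor(readings, lock, unlock, threshold=-70, grace=5.0):
    """Lock when the badge signal stays weak; unlock when it returns.

    readings: iterable of (timestamp_seconds, rssi) pairs from a BLE scanner.
    """
    away_since, locked = None, False
    for t, rssi in readings:
        if rssi < threshold:                      # badge out of range
            away_since = t if away_since is None else away_since
            if not locked and t - away_since >= grace:
                lock(); locked = True
        else:                                     # badge back in range
            away_since = None
            if locked:
                unlock(); locked = False

# Demo with synthetic readings: the user steps away at t=10s, returns at t=31s.
demo = [(t, -60 if t < 10 or t > 30 else -85) for t in range(40)]
monitor(demo, lock=lambda: print("locking"), unlock=lambda: print("unlocking"))
```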

Researchers also found that most employees understand the value of ISP compliance and realize the potential cost of a data breach. However, Sarkar believes that outdated ISP compliance measures can put employees in a conflict of priorities.

“There shouldn’t be situations where physicians are putting the entire hospital at risk for a data breach because they are dealing with a patient who needs emergency care,” he said. “We need to find ways to accommodate the responsibilities of different employees within an organization.”

MatRiCT: A quantum-safe and privacy-preserving blockchain protocol

Researchers from CSIRO’s Data61 and the Monash Blockchain Technology Centre have developed the world’s most efficient blockchain protocol that is both secure against quantum computers and preserves the privacy of its users and their transactions.

The technology can be applied beyond cryptocurrencies, in areas such as digital health, banking, finance and government services, as well as in services that may require accountability to prevent illegal use.

The protocol — a set of rules governing how a blockchain network operates — is called MatRiCT.

Cryptocurrencies vulnerable to attacks by quantum computers

The cryptocurrency market is currently valued at more than $325 billion, with an average of approximately $50 billion traded daily over the past year.

However, blockchain-based cryptocurrencies like Bitcoin and Ethereum are vulnerable to attacks by quantum computers, which are capable of performing complex calculations and processing substantial amounts of data to break blockchains, in significantly faster times than current computers.

“Quantum computing can compromise the signatures or keys used to authenticate transactions, as well as the integrity of blockchains themselves,” said Dr Muhammed Esgin, lead researcher at Monash University and Data61’s Distributed Systems Security Group. “Once this occurs, the underlying cryptocurrency could be altered, leading to theft, double spend or forgery, and users’ privacy may be jeopardised.

“Existing cryptocurrencies tend to either be quantum-safe or privacy-preserving, but for the first time our new protocol achieves both in a practical and deployable way.”

The MatRiCT protocol is based on hard lattice problems, which are quantum secure, and introduces three new key features: the shortest quantum-secure ring signature scheme to date, which authenticates activity and transactions using only the signature; a zero-knowledge proof method, which hides sensitive transaction information; and an auditability function, which could help prevent illegal cryptocurrency use.

Blockchain challenged by speed and energy consumption

Speed and energy consumption are significant challenges for blockchain technologies, and can lead to inefficiencies and increased costs.

“The protocol is designed to address the inefficiencies in previous blockchain protocols such as complex authentication procedures, thereby speeding up calculation efficiencies and using less energy to resolve, leading to significant cost savings,” said Dr Ron Steinfeld, associate professor, co-author of the research and a quantum-safe cryptography expert at Monash University.

“Our new protocol is significantly faster and more efficient, as the identity signatures and proof required when conducting transactions are the shortest to date, thereby requiring less data communication, speeding up the transaction processing time, and reducing the amount of energy required to complete transactions.”

“Hcash will be incorporating the protocol into its own systems, transforming its existing cryptocurrency, HyperCash, into one that is both quantum safe and privacy protecting,” said Dr Joseph Liu, associate professor, Director of Monash Blockchain Technology Centre and HCash Chief Scientist.

Phish Scale: New method helps organizations better train their employees to avoid phishing

Researchers at the National Institute of Standards and Technology (NIST) have developed a new method called the Phish Scale that could help organizations better train their employees to avoid phishing.

How does Phish Scale work?

Many organizations have phishing training programs in which employees receive fake phishing emails generated by the employees’ own organization to teach them to be vigilant and to recognize the characteristics of actual phishing emails.

CISOs, who often oversee these phishing awareness programs, then look at the click rates, or how often users click on the emails, to determine if their phishing training is working. Higher click rates are generally seen as bad because it means users failed to notice the email was a phish, while low click rates are often seen as good.

However, numbers alone don’t tell the whole story. “The Phish Scale is intended to help provide a deeper understanding of whether a particular phishing email is harder or easier for a particular target audience to detect,” said NIST researcher Michelle Steves. The tool can help explain why click rates are high or low.

The Phish Scale uses a rating system based on the message content of a phishing email. The rating considers the cues that should tip users off about the email’s legitimacy, and the premise of the scenario for the target audience, that is, how applicable the email’s tactics are to that audience. These audiences can vary widely, including universities, business institutions, hospitals and government agencies.

The method rates five elements relating to the scenario’s premise on a 5-point scale. The overall score is then used by the phishing trainer to help analyze the data and rank the phishing exercise as low, medium or high difficulty.
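
As a hedged sketch of how such a rating could be computed (the cue counts, thresholds and weightings below are illustrative assumptions, not NIST’s published values):

```python
def phish_difficulty(num_cues, premise_scores):
    """Rate detection difficulty for one phishing training email.

    num_cues: number of detectable cues in the message (more = easier).
    premise_scores: five 1-5 ratings of how well the premise fits the
    target audience (higher = more applicable = harder to detect).
    """
    assert len(premise_scores) == 5
    premise = sum(premise_scores)          # ranges from 5 to 25
    if num_cues <= 8 and premise >= 18:
        return "high difficulty"           # low click rates expected anyway
    if num_cues > 14 and premise <= 11:
        return "low difficulty"            # high click rates signal a gap
    return "medium difficulty"

print(phish_difficulty(num_cues=6, premise_scores=[5, 4, 4, 5, 3]))  # high
```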

The significance of the Phish Scale is to give CISOs a better understanding of their click-rate data instead of relying on the numbers alone. A low click rate for a particular phishing email can have several causes: the phishing training emails are too easy or do not provide relevant context to the user, or the phishing email is similar to a previous exercise. Data like this can create a false sense of security if click rates are analyzed on their own without understanding the phishing email’s difficulty.

Helping CISOs better understand their phishing training programs

By using the Phish Scale to analyze click rates and collecting feedback from users on why they clicked on certain phishing emails, CISOs can better understand their phishing training programs, especially if they are optimized for the intended target audience.

The Phish Scale is the culmination of years of research, and the data used for it comes from an “operational” setting, very much the opposite of a laboratory experiment with controlled variables.

“As soon as you put people into a laboratory setting, they know,” said Steves. “They’re outside of their regular context, their regular work setting, and their regular work responsibilities. That is artificial already. Our data did not come from there.”

This type of operational data is both beneficial and in short supply in the research field. “We were very fortunate that we were able to publish that data and contribute to the literature in that way,” said NIST researcher Kristen Greene.

As for next steps, Greene and Steves say they need even more data. All of the data used for the Phish Scale came from NIST. The next step is to expand the pool and acquire data from other organizations, including nongovernmental ones, and to make sure the Phish Scale performs as it should over time and in different operational settings.

“We know that the phishing threat landscape continues to change,” said Greene. “Does the Phish Scale hold up against all the new phishing attacks? How can we improve it with new data?” NIST researcher Shaneé Dawkins and her colleagues are now working to make those improvements and revisions.

Mobile messengers expose billions of users to privacy attacks

Popular mobile messengers expose personal data via discovery services that allow users to find contacts based on phone numbers from their address book, according to researchers.

When installing a mobile messenger like WhatsApp, new users can instantly start texting existing contacts based on the phone numbers stored on their device. For this to happen, users must grant the app permission to access and regularly upload their address book to company servers in a process called mobile contact discovery.

A recent study by a team of researchers from the Secure Software Systems Group at the University of Würzburg and the Cryptography and Privacy Engineering Group at TU Darmstadt shows that currently deployed contact discovery services severely threaten the privacy of billions of users.

Utilizing very few resources, the researchers were able to perform practical crawling attacks on the popular messengers WhatsApp, Signal, and Telegram. The results of the experiments demonstrate that malicious users or hackers can collect sensitive data at a large scale and without noteworthy restrictions by querying contact discovery services for random phone numbers.

Attackers are enabled to build accurate behavior models

For the extensive study, the researchers queried 10% of all US mobile phone numbers for WhatsApp and 100% for Signal. In doing so, they were able to gather personal (meta) data commonly stored in the messengers’ user profiles, including profile pictures, nicknames, status texts and the “last online” time.

The analyzed data also reveals interesting statistics about user behavior. For example, very few users change the default privacy settings, which for most messengers are not privacy-friendly at all.

The researchers found that about 50% of WhatsApp users in the US have a public profile picture and 90% a public “About” text. Interestingly, 40% of Signal users, which can be assumed to be more privacy concerned in general, are also using WhatsApp, and every other of those Signal users has a public profile picture on WhatsApp.

Tracking such data over time enables attackers to build accurate behavior models. When the data is matched across social networks and public data sources, third parties can also build detailed profiles, for example to scam users.

For Telegram, the researchers found that its contact discovery service exposes sensitive information even about owners of phone numbers who are not registered with the service.

Which information is revealed during contact discovery and can be collected via crawling attacks depends on the service provider and the privacy settings of the user. WhatsApp and Telegram, for example, transmit the user’s entire address book to their servers.

More privacy-concerned messengers like Signal transfer only short cryptographic hash values of phone numbers or rely on trusted hardware. However, the research team shows that with new and optimized attack strategies, the low entropy of phone numbers enables attackers to deduce corresponding phone numbers from cryptographic hashes within milliseconds.
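
Why the low entropy matters is easy to demonstrate. The sketch below assumes, purely for illustration, unsalted SHA-256 over full phone-number strings (real services differ in details): a known country and area prefix leaves only millions of candidates, which a laptop can enumerate in seconds.

```python
import hashlib

def crack_phone_hash(target_hex, prefix="+1555", digits=7):
    """Enumerate every number under a known prefix until the hash matches."""
    for n in range(10 ** digits):                 # ~10M candidates at most
        candidate = f"{prefix}{n:0{digits}d}"
        if hashlib.sha256(candidate.encode()).hexdigest() == target_hex:
            return candidate
    return None

target = hashlib.sha256(b"+15551234567").hexdigest()  # the "protected" value
print(crack_phone_hash(target))                       # recovers +15551234567
```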

Moreover, since there are no noteworthy restrictions for signing up with messaging services, any third party can create a large number of accounts to crawl the user database of a messenger for information by requesting data for random phone numbers.

“We strongly advise all users of messenger apps to revisit their privacy settings. This is currently the most effective protection against our investigated crawling attacks,” agree Prof. Alexandra Dmitrienko (University of Würzburg) and Prof. Thomas Schneider (TU Darmstadt).

Impact of research results: Service providers improve their security measures

The research team reported their findings to the respective service providers. As a result, WhatsApp has improved its protection mechanisms so that large-scale attacks can be detected, and Signal has reduced the number of possible queries to complicate crawling.

The researchers also proposed many other mitigation techniques, including a new contact discovery method that could be adopted to further reduce the efficiency of attacks without negatively impacting usability.

Popular Android apps are rife with cryptographic vulnerabilities

Columbia University researchers have released Crylogger, an open source dynamic analysis tool that shows which Android apps feature cryptographic vulnerabilities.

They also used it to test 1,780 popular Android apps from the Google Play Store, and the results were abysmal:

  • All apps break at least one of the 26 crypto rules
  • 1775 apps use an unsafe pseudorandom number generator (PRNG)
  • 1,764 apps use a broken hash function (SHA1, MD2, MD5, etc.)
  • 1,076 apps use the CBC operation mode (which is vulnerable to padding oracle attacks in client-server scenarios)
  • 820 apps use a static symmetric encryption key (hardcoded)

About Crylogger

Each of the tested apps was run in Crylogger with an instrumented crypto library, which logs the parameters passed to the crypto APIs during execution and then checks their legitimacy offline against a list of crypto rules.
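
Crylogger itself instruments Android’s Java crypto libraries, but the log-then-check pattern can be sketched generically. This is a toy Python sketch with a hypothetical two-rule list, not the tool’s actual code:

```python
crypto_log = []

def log_crypto_call(api, **params):
    # Runtime side: record every crypto API invocation with its parameters.
    crypto_log.append({"api": api, **params})

# Offline side: each rule flags logged calls that violate a guideline.
RULES = [
    ("don't use broken hash functions",
     lambda c: c["api"] == "hash" and c.get("algorithm") in {"MD2", "MD5", "SHA1"}),
    ("don't use ECB mode for encryption",
     lambda c: c["api"] == "cipher" and "ECB" in c.get("transformation", "")),
]

def check_log(log):
    return [(name, call) for name, violates in RULES
            for call in log if violates(call)]

# An instrumented app would emit entries like these as it runs:
log_crypto_call("hash", algorithm="MD5")
log_crypto_call("cipher", transformation="AES/ECB/PKCS5Padding")

for name, call in check_log(crypto_log):
    print("VIOLATION:", name, call)
```

Separating logging from checking is what makes the analysis dynamic: only parameters that real executions actually pass to the crypto APIs are judged.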

“Cryptographic (crypto) algorithms are the essential ingredients of all secure systems: crypto hash functions and encryption algorithms, for example, can guarantee properties such as integrity and confidentiality,” the researchers explained.

“A crypto misuse is an invocation to a crypto API that does not respect common security guidelines, such as those suggested by cryptographers or organizations like NIST and IETF.”
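
As a concrete illustration of what such misuses look like, here are generic Python equivalents (not the Android APIs the study examined), each paired with a safer alternative:

```python
import hashlib
import os
import random
import secrets

# Misuse: random.random() is a non-cryptographic PRNG.
weak_token = str(random.random())
# Safer: a CSPRNG intended for security-sensitive values.
strong_token = secrets.token_hex(16)

# Misuse: MD5 is a broken hash function.
weak_digest = hashlib.md5(b"password").hexdigest()
# Safer: a modern hash from the SHA-2 family.
strong_digest = hashlib.sha256(b"password").hexdigest()

# Misuse: a static, hardcoded symmetric key shipped inside the app.
STATIC_KEY = b"0123456789abcdef"
# Safer: a key generated per install/user and kept out of the binary.
fresh_key = os.urandom(16)
```

These map directly onto the violation counts above: unsafe PRNGs, broken hash functions, and static hardcoded keys account for the bulk of the flagged apps.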

To confirm that the cryptographic vulnerabilities flagged by Crylogger can actually be exploited, the researchers manually reverse-engineered 28 of the tested apps and found that 14 of them are vulnerable to attacks (even though some issues may be considered out-of-scope by developers because they require privilege escalation for effective exploitation).

Recommended use

Comparing the results of Crylogger (a dynamic analysis tool) with those of CryptoGuard (an open source static analysis tool for detecting crypto misuses in Java-based applications) when testing 150 apps, the researchers found that the former flags some issues that the latter misses, and vice versa.

The best thing for developers would be to test their applications with both before they offer them for download, the researchers noted. Also, Crylogger can be used to check apps submitted to app stores.

“Using a dynamic tool on a large number of apps is hard, but Crylogger can refine the misuses identified with static analysis because, typically, many of them are false positives that cannot be discarded manually on such a large number of apps,” they concluded.

Worrying findings

As noted at the beginning of this piece, too many apps break too many cryptographic rules. What’s more, too many app and library developers are choosing to effectively ignore these problems.

The researchers emailed 306 developers of Android apps that violate 9 or more of the crypto rules: only 18 developers answered back, and only 8 of them continued to communicate after that first email and provided useful feedback on their findings. They also contacted 6 developers of popular Android libraries and received answers from 2 of them.

The researchers chose not to reveal the names of the vulnerable apps and libraries because they fear that information would benefit attackers, but they shared enough to show that these issues affect all types of apps: from media streaming and newspaper apps, to file and password managers, authentication apps, messaging apps, and so on.

Researchers develop secure multi-user quantum communication network

The world is one step closer to having a totally secure internet and an answer to the growing threat of cyber-attacks, thanks to a team of international scientists who have created a multi-user quantum communication network which could transform how we communicate online.

The invention led by the University of Bristol has the potential to serve millions of users, is understood to be the largest-ever quantum network of its kind, and could be used to secure people’s online communication, particularly in these internet-led times accelerated by the COVID-19 pandemic.

By deploying a new technique that harnesses the simple laws of physics, it can make messages completely safe from interception while also overcoming major challenges that have previously limited advances in this little-used but much-hyped technology.

Lead author Dr Siddarth Joshi, who headed the project at the university’s Quantum Engineering Technology (QET) Labs, said: “This represents a massive breakthrough and makes the quantum internet a much more realistic proposition. Until now, building a quantum network has entailed huge cost, time, and resources, as well as often compromising on its security, which defeats the whole purpose.”

“Our solution is scalable, relatively cheap and, most important of all, impregnable. That means it’s an exciting game changer and paves the way for much more rapid development and widespread rollout of this technology.”

Protecting the future internet

The current internet relies on complex codes to protect information, but hackers are increasingly adept at outsmarting such systems, leading to cyber-attacks across the world that cause major privacy breaches and fraud running into trillions of pounds annually. With such costs projected to rise dramatically, the case for finding an alternative is even more compelling, and quantum has for decades been hailed as the revolutionary replacement for standard encryption techniques.

So far physicists have developed a form of secure encryption, known as quantum key distribution, in which particles of light, called photons, are transmitted. The process allows two parties to share, without risk of interception, a secret key used to encrypt and decrypt information. But to date this technique has only been effective between two users.

“Until now efforts to expand the network have involved vast infrastructure and a system which requires the creation of another transmitter and receiver for every additional user. Sharing messages in this way, known as trusted nodes, is just not good enough because it uses so much extra hardware which could leak and would no longer be totally secure,” Dr Joshi said.

How the multi-user quantum communication network works

The team’s quantum technique applies a seemingly magical principle, called entanglement, which Albert Einstein described as “spooky action at a distance.” It exploits the power of two different particles placed in separate locations, potentially thousands of miles apart, to simultaneously mimic each other. This process presents far greater opportunities for quantum computers, sensors, and information processing.

“Instead of having to replicate the whole communication system, this latest methodology, called multiplexing, splits the light particles, emitted by a single system, so they can be received by multiple users efficiently,” Dr Joshi said.

The team created a network for eight users using just eight receiver boxes, whereas the former method would need a dedicated box linking each user to every other user – for eight users, 8 x 7 = 56 boxes. As user numbers grow, the logistics become increasingly unviable: 100 users would take 9,900 receiver boxes.
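
The quadratic blow-up is straightforward to compute. A back-of-the-envelope sketch, with the formula inferred from the figures quoted above:

```python
def boxes_without_multiplexing(n: int) -> int:
    # Every user needs dedicated hardware for each of the other n - 1 users:
    # n * (n - 1) transmitter/receiver boxes in total.
    return n * (n - 1)

def boxes_with_multiplexing(n: int) -> int:
    # One receiver box per user; a single source's entangled photons are
    # split among all users.
    return n

for n in (8, 100):
    print(n, boxes_without_multiplexing(n), boxes_with_multiplexing(n))
# 8 users: 56 vs 8 boxes; 100 users: 9,900 vs 100, matching the figures above
```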

To demonstrate its functionality across distance, the receiver boxes were connected to optical fibres at different locations across Bristol, and the ability to transmit messages via quantum communication was tested using the city’s existing optical fibre network.

“Besides being completely secure, the beauty of this new technique is its streamlined agility, which requires minimal hardware because it integrates with existing technology,” Dr Joshi said.

The team’s unique system also features traffic management, delivering better network control which allows, for instance, certain users to be prioritised with a faster connection.

Saving time and money

Whereas previous quantum systems have taken years to build, at a cost of millions or even billions of pounds, this network was created within months for less than £300,000. The financial advantages grow as the network expands, so while 100 users on previous quantum systems might cost in the region of £5 billion, Dr Joshi believes multiplexing technology could slash that to around £4.5 million, less than 1 per cent.

In recent years quantum cryptography has been successfully used to protect transactions between banking centres in China and secure votes at a Swiss election. Yet its wider application has been held back by the sheer scale of resources and costs involved.

“With these economies of scale, the prospect of a quantum internet for universal usage is much less far-fetched. We have proved the concept and by further refining our multiplexing methods to optimise and share resources in the network, we could be looking at serving not just hundreds or thousands, but potentially millions of users in the not too distant future,” Dr Joshi said.

“The ramifications of the COVID-19 pandemic have not only shown the importance and potential of the internet, and our growing dependence on it, but also how its absolute security is paramount. Multiplexing entanglement could hold the vital key to making this security a much-needed reality.”

Collaborating with the University of Bristol are the University of Leeds, Croatia’s Ruder Boskovic Institute (RBI) in Zagreb, Austria’s Institute for Quantum Optics and Quantum Information (IQOQI) in Vienna, and China’s National University of Defence Technology (NUDT) in Changsha.

Microsoft builds deepfakes detection tool to combat election disinformation

Microsoft has developed a deepfakes detection tool to help news publishers and political campaigns, as well as technology to help content creators “mark” their images and videos in a way that will show if the content has been manipulated post-creation.

The deepfakes problem

Deepfakes – photos and videos in which a person is replaced with someone else’s likeness through the power of artificial intelligence (AI) – are already having an impact on individuals’ lives, politics and …

A new project enables data to be read directly from compressed IoT data

The Network Computing, Communications and Storage research group at Aarhus University has developed a completely new way to compress data. The new technique makes it possible to analyze data directly on compressed files, and it may have a major impact on the so-called “data tsunami” from massive amounts of IoT devices.

The method will now be further developed, and it will form the framework for an end-to-end solution to help scale down the exponentially increasing volumes of data from IoT devices.

“Today, if you need just 1 Byte of data from a 100 MB compressed file, you usually have to decompress a significant part of the whole file to access the data. Our technology enables random access to the compressed data. It means that you can access 1 Byte of data at the cost of decompressing less than 100 Bytes, which is several orders of magnitude lower compared to state-of-the-art technologies. This could have a huge impact on data accessibility, data processing speed and the cloud storage infrastructure,” says Associate Professor Qi Zhang from Aarhus University.
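
The article doesn’t disclose the group’s algorithm, so the following is only a generic sketch of how random access into compressed data can work: compress small fixed-size blocks independently and keep them indexed, so that reading one byte decompresses only its own block.

```python
import zlib

BLOCK = 64  # compress the stream in small, independent blocks

def compress_blocks(data: bytes):
    # One compressed chunk per BLOCK-sized slice; the list is the index.
    return [zlib.compress(data[i:i + BLOCK]) for i in range(0, len(data), BLOCK)]

def read_byte(blocks, offset: int) -> int:
    # Random access: decompress only the ~BLOCK bytes containing the target
    # offset, instead of the whole file.
    block = zlib.decompress(blocks[offset // BLOCK])
    return block[offset % BLOCK]

data = bytes(range(256)) * 100          # 25,600 bytes of sample "IoT" data
blocks = compress_blocks(data)
assert read_byte(blocks, 12_345) == data[12_345]
```

Small independent blocks trade a little compression ratio for the ability to touch only the bytes you need, which is the property the quote describes.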

Compressed IoT data

The compression technique makes it feasible to compress IoT data (typically data in time series) in real time before the data is sent to the cloud. After this, typical data analytics can be carried out directly on the compressed data. There is no need to decompress all the data, or large amounts of it, in order to carry out an analysis.

This could potentially alleviate the ever-increasing pressure on the communication and data storage infrastructure. The research group believes that the project’s results will serve as a foundation for the development of sustainable IoT solutions, and that it could have a profound impact on digitalization:

“Today, IoT data is constantly being streamed to the cloud, and as a consequence of the massive amounts of IoT devices deployed globally, exponential data growth is expected. Conventionally, to allow fast, frequent data retrieval and analysis, it is preferable to store the data in an uncompressed form.

“The drawback here is the use of more storage space. If you keep the data in compressed form, however, it takes time to decompress the data before you can access and analyze it. Our project outcome has the potential not only to reduce data storage space but also to accelerate data analysis,” says Qi Zhang.

People spend a little less time looking at fake news headlines than factual ones

The term fake news has been a part of our vocabulary since the 2016 US presidential election. As the amount of fake news in circulation grows larger and larger, particularly in the United States, it often spreads like wildfire. Subsequently, there is an ever-increasing need for fact-checking and other solutions to help people navigate the oceans of factual and fake news that surround us.

Help may be on the way, via an interdisciplinary field where eye-tracking technology and computer science meet. A study by University of Copenhagen and Aalborg University researchers shows that people’s eyes react differently to factual and false news headlines.

Eyes spend a bit less time on fake news headlines

Researchers placed 55 different test subjects in front of a screen to read 108 news headlines. A third of the headlines were fake. The test subjects were assigned a so-called ‘pseudo-task’ of assessing which of the news items was the most recent. What they didn’t know was that some of the headlines were fake. Using eye-tracking technology, the researchers analyzed how much time each person spent reading the headlines and how many fixations the person made per headline.

“We thought that it would be interesting to see if there’s a difference in the way people read news headlines, depending on whether the headlines are factual or false. This has never been studied. And, it turns out that there is indeed a statistically significant difference,” says PhD fellow and lead author Christian Hansen, of the University of Copenhagen’s Department of Computer Science.

His colleague from the same department, PhD fellow Casper Hansen, adds: “The study demonstrated that our test subjects’ eyes spent less time on false headlines and fixated on them a bit less compared with the headlines that were true. All in all, people gave fake news headlines a little less visual attention, despite their being unaware that the headlines were fake.”

The computer scientists can’t explain the difference, nor do they dare make any guesses. Nevertheless, they were surprised by the result.

The researchers used the results to create an algorithm that can predict whether a news headline is fake based on eye movements.
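
The article doesn’t describe that algorithm, so purely as a hypothetical sketch: per-headline gaze features such as total dwell time and fixation count could feed a standard classifier.

```python
from sklearn.linear_model import LogisticRegression

# Hypothetical gaze features per headline: [dwell_time_seconds, fixation_count]
X = [[3.1, 9], [2.2, 6], [3.4, 10], [2.0, 5], [2.9, 8], [1.8, 5]]
y = [0, 1, 0, 1, 0, 1]  # 0 = factual headline, 1 = fake headline

model = LogisticRegression().fit(X, y)

# Predict for an unseen headline that received little visual attention;
# under this toy model, low dwell time and few fixations suggest "fake".
print(model.predict([[1.9, 4]]))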

Could support fact-checking

As a next step, the researchers would like to examine whether it is possible to measure the same differences in eye movements on a larger scale, beyond the lab – preferably using ordinary webcams or mobile phone cameras. It will, of course, require that people grant access to their cameras.

The two computer scientists imagine that eye-tracking technology could eventually help with the fact-checking of news stories, all depending upon their ability to collect data from people’s reading patterns. The data could come from news aggregator website users or from the users of other sources, e.g., Feedly and Google News, as well as from social media, like Facebook and Twitter, where the amount of fake news is large as well.

“Professional fact-checkers in the media and organizations need to read through lots of material just to find out what needs to be fact-checked. A tool to help them prioritize material could be of great help,” concludes Christian Hansen.

Confirmed: Browsing histories can be used to track users

Browsing histories can be used to compile unique browsing profiles, which can be used to track users, Mozilla researchers have confirmed.

Many third parties are also pervasive enough across the web to gather browsing histories extensive enough to serve as identifiers.

The research

This is not the first time that researchers have demonstrated that browsing profiles are distinctive and stable enough to be used as identifiers.

Sarah Bird, Ilana Segall and Martin Lopatka were spurred to reproduce the results set forth in a 2012 paper by Lukasz Olejnik, Claude Castelluccia, and Artur Janc by using more refined data, and they’ve extended that work to detail the privacy risk posed by the aggregation of browsing histories.

The Mozillians collected browsing data from ~52,000 Firefox users for 7 calendar days, paused for 7 days, and then resumed for an additional 7 days. After analyzing the collected data, they identified 48,919 distinct browsing profiles, of which 99% are unique. (The original paper observed a set of ~400,000 web history profiles, of which 94% were unique.)

“High uniqueness holds even when histories are truncated to just 100 top sites. We then find that for users who visited 50 or more distinct domains in the two-week data collection period, ~50% can be reidentified using the top 10k sites. Reidentifiability rose to over 80% for users that browsed 150 or more distinct domains,” they noted.
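
Uniqueness of the kind reported is easy to measure once each profile is represented as a set of visited domains. A schematic sketch follows; the researchers’ actual pipeline was far more elaborate:

```python
from collections import Counter

# Each profile is the set of distinct domains a user visited, truncated to
# some top-k list of sites (represented here as plain strings).
profiles = [
    frozenset({"news.example", "mail.example", "shop.example"}),
    frozenset({"news.example", "video.example"}),
    frozenset({"news.example", "mail.example", "shop.example"}),  # a duplicate
]

counts = Counter(profiles)
unique = sum(1 for c in counts.values() if c == 1)
print(f"{unique}/{len(counts)} distinct profiles are unique "
      f"({unique / len(counts):.0%})")
```

A profile that occurs exactly once in the pool is an identifier: observing it again (e.g., in a later collection window) reidentifies that user.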

They also confirmed that browsing history profiles are stable over time – a second prerequisite for these profiles being repeatedly tied to specific users/consumers and used for online tracking.

“Our reidentifiability rates in a pool of 1,766 were below 10% for 100 sites despite a >90% profile uniqueness across datasets, but increased to ~80% when we consider 10,000 sites,” they added.

Finally, some corporate entities like Alphabet (Google) and Facebook are able to observe the web to an even greater extent than when the research for the 2012 paper was conducted, which may allow them to gain deep visibility into browsing activity and use that visibility for effective online tracking – even if users use different devices to browse the internet.

Other recent research has shown that anonymization of browsing patterns/profiles through generalization does not sufficiently protect users’ anonymity.

Regulation is needed

Privacy researcher Lukasz Olejnik, one of the authors of the 2012 paper, noted that the findings of this newest research are a welcome confirmation that web browsing histories are personal data that can reveal insight about the user or be used to track users.

“In some ways, browsing histories resemble biometric-like data due to their uniqueness and stability,” he commented, and pointed out that, since this data allows the singling out of individuals from many, it automatically comes under the General Data Protection Regulation (GDPR).

“Web browsing histories are private data, and in certain contexts, they are personal data. Now the state of the art in research indicates this. Technology should follow. So too should the regulations and standards in the data processing. As well as enforcement,” he concluded.

Researchers develop AI technique to protect medical devices from anomalous instructions

Researchers at Ben-Gurion University of the Negev have developed a new AI technique that will protect medical devices from malicious operating instructions in a cyberattack as well as other human and system errors.

Complex medical devices such as CT (computed tomography), MRI (magnetic resonance imaging) and ultrasound machines are controlled by instructions sent from a host PC.

Abnormal or anomalous instructions introduce many potentially harmful threats to patients, such as radiation overexposure, manipulation of device components or functional manipulation of medical images. Threats can occur due to cyberattacks, human errors such as a technician’s configuration mistake or host PC software bugs.

Dual-layer architecture: AI technique to protect medical devices

As part of his Ph.D. research, BGU researcher Tom Mahler has developed a technique using artificial intelligence that analyzes the instructions sent from the PC to the physical components using a new architecture for the detection of anomalous instructions.

“We developed a dual-layer architecture for the protection of medical devices from anomalous instructions,” Mahler says.

“The architecture focuses on detecting two types of anomalous instructions: (1) context-free (CF) anomalous instructions which are unlikely values or instructions such as giving 100x more radiation than typical, and (2) context-sensitive (CS) anomalous instructions, which are normal values or combinations of values, of instruction parameters, but are considered anomalous relative to a particular context, such as mismatching the intended scan type, or mismatching the patient’s age, weight, or potential diagnosis.

“For example, a normal instruction intended for an adult might be dangerous [anomalous] if applied to an infant. Such instructions may be misclassified when using only the first, CF, layer; however, by adding the second, CS, layer, they can now be detected.”
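
The study’s models aren’t reproduced here, but the dual-layer idea can be sketched with off-the-shelf components. The algorithms, features, and values below are illustrative assumptions, not the paper’s:

```python
from sklearn.ensemble import IsolationForest, RandomForestClassifier

# Layer 1 (context-free): an unsupervised detector flags instructions whose
# raw parameter values are implausible, e.g., a dose 100x the norm.
normal_doses = [[1.0], [1.1], [0.9], [1.05], [0.95], [1.2]]
cf_layer = IsolationForest(random_state=0).fit(normal_doses)
print(cf_layer.predict([[100.0]]))  # -> [-1], flagged as anomalous

# Layer 2 (context-sensitive): a supervised classifier judges whether an
# in-range instruction matches its clinical context (here: patient age).
# Features: [dose, patient_age]; label: 1 if appropriate for that context.
X = [[1.0, 40], [1.1, 35], [1.0, 1], [0.9, 50], [1.2, 2]]
y = [1, 1, 0, 1, 0]  # an adult-level dose is anomalous for an infant
cs_layer = RandomForestClassifier(random_state=0).fit(X, y)
print(cs_layer.predict([[1.0, 1]]))  # adult dose + infant -> anomalous (0)
```

The key point is the division of labor: the CF layer needs no labels and catches absurd values, while the CS layer uses context features to catch instructions that are individually plausible but wrong for the situation.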

Improving anomaly detection performance

The research team evaluated the new architecture in the CT domain, using 8,277 recorded CT instructions. They evaluated the CF layer using 14 different unsupervised anomaly detection algorithms, and the CS layer for four different types of clinical objective contexts, using five supervised classification algorithms for each context.

Adding the second CS layer to the architecture improved the overall anomaly detection performance from an F1 score of 71.6%, using only the CF layer, to between 82% and 99%, depending on the clinical objective or the body part.

Furthermore, the CS layer enables the detection of CS anomalies, using the semantics of the device’s procedure, an anomaly type that cannot be detected using only the CF layer.