Researchers from the University of Ottawa, in collaboration with Ben-Gurion University of the Negev and Bar-Ilan University scientists, have been able to create optical framed knots in the laboratory that could potentially be applied in modern technologies.
Top view of the framed knots generated in this work
Their work opens the door to new methods of distributing secret cryptographic keys – used to encrypt and decrypt data, ensure secure communication and protect private information.
“This is fundamentally important, in particular from a topology-focused perspective, since framed knots provide a platform for topological quantum computations,” explained senior author, Professor Ebrahim Karimi, Canada Research Chair in Structured Light at the University of Ottawa.
“In addition, we used these non-trivial optical structures as information carriers and developed a security protocol for classical communication where information is encoded within these framed knots.”
The concept of framed knots
The researchers suggest a simple do-it-yourself exercise to help us better understand framed knots – three-dimensional objects that can also be described as surfaces.
“Take a narrow strip of paper and tie it into a knot,” said first author Hugo Larocque, uOttawa alumnus and current PhD student at MIT.
“The resulting object is referred to as a framed knot and has very interesting and important mathematical features.”
The group set out to achieve the same result within an optical beam, a far more difficult task. After a few tries (and knots that looked more like knotted strings), the group produced what they were looking for: the knotted ribbon structure that defines a framed knot.
“In order to add this ribbon, our group relied on beam-shaping techniques manipulating the vectorial nature of light,” explained Hugo Larocque. “By modifying the oscillation direction of the light field along an “unframed” optical knot, we were able to assign a frame to the latter by “gluing” together the lines traced out by these oscillating fields.”
According to the researchers, structured light beams are being widely exploited for encoding and distributing information.
“So far, these applications have been limited to physical quantities which can be recognized by observing the beam at a given position,” said uOttawa Postdoctoral Fellow and co-author of this study, Dr. Alessio D’Errico.
“Our work shows that the number of twists in the ribbon orientation in conjunction with prime number factorization can be used to extract a so-called “braid representation” of the knot.”
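The braid-extraction step described above hinges on decomposing an integer – here, the number of twists in the ribbon orientation – into prime factors. The sketch below is purely illustrative: the factorization routine is standard, but the mapping from prime factors to braid "generators" is a hypothetical stand-in, not the paper's actual construction.

```python
def prime_factors(n):
    """Decompose a positive integer into its prime factors (with multiplicity)."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

# Hypothetical example: a framed knot whose ribbon carries 12 twists.
twists = 12
factors = prime_factors(twists)

# Illustrative only (not the paper's actual mapping): label each prime
# factor as a "generator" index in a braid-like word.
braid_word = [f"s{p}" for p in factors]
print(factors)      # [2, 2, 3]
print(braid_word)   # ['s2', 's2', 's3']
```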
“The structural features of these objects can be used to specify quantum information processing programs,” added Hugo Larocque. “In a situation where this program would want to be kept secret while disseminating it between various parties, one would need a means of encrypting this “braid” and later deciphering it.
“Our work addresses this issue by proposing to use our optical framed knot as an encryption object for these programs which can later be recovered by the braid extraction method that we also introduced.”
“For the first time, these complicated 3D structures have been exploited to develop new methods for the distribution of secret cryptographic keys. Moreover, there is a wide and strong interest in exploiting topological concepts in quantum computation, communication and dissipation-free electronics. Knots are described by specific topological properties too, which were not considered so far for cryptographic protocols.”
“Current technologies give us the possibility to manipulate, with high accuracy, the different features characterizing a light beam, such as intensity, phase, wavelength and polarization,” said Larocque.
“This makes it possible to encode and decode information with all-optical methods. Quantum and classical cryptographic protocols have been devised exploiting these different degrees of freedom.”
“Our work opens the way to the use of more complex topological structures hidden in the propagation of a laser beam for distributing secret cryptographic keys.”
“Moreover, the experimental and theoretical techniques we developed may help find new experimental approaches to topological quantum computation, which promises to surpass noise-related issues in current quantum computing technologies,” added Dr. Ebrahim Karimi.
Researchers at Ben-Gurion University of the Negev have developed a new AI technique that will protect medical devices from malicious operating instructions in a cyberattack as well as other human and system errors.
Abnormal or anomalous instructions introduce many potentially harmful threats to patients, such as radiation overexposure, manipulation of device components or functional manipulation of medical images. Threats can occur due to cyberattacks, human errors such as a technician’s configuration mistake or host PC software bugs.
Dual-layer architecture: AI technique to protect medical devices
As part of his Ph.D. research, BGU researcher Tom Mahler has developed a technique using artificial intelligence that analyzes the instructions sent from the PC to the physical components using a new architecture for the detection of anomalous instructions.
“We developed a dual-layer architecture for the protection of medical devices from anomalous instructions,” Mahler says.
“The architecture focuses on detecting two types of anomalous instructions: (1) context-free (CF) anomalous instructions, which carry implausible values – such as delivering 100x the typical radiation dose – and (2) context-sensitive (CS) anomalous instructions, which carry values or combinations of instruction parameters that are normal in general but anomalous relative to a particular context, such as mismatching the intended scan type, or the patient’s age, weight, or potential diagnosis.
“For example, a normal instruction intended for an adult might be dangerous [anomalous] if applied to an infant. Such instructions may be misclassified when using only the first, CF, layer; however, by adding the second, CS, layer, they can now be detected.”
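As a rough illustration of the dual-layer idea, the sketch below applies a context-free check (values implausible in any setting) followed by a context-sensitive check (values normal in general but wrong for this patient). All field names, dose figures and thresholds here are invented for the example, not taken from the published architecture.

```python
# Hypothetical sketch of the dual-layer anomaly-detection idea.
# All thresholds and field names below are illustrative assumptions.

TYPICAL_DOSE_MGY = 10.0          # assumed "typical" CT dose for the sketch

def cf_layer(instruction):
    """Context-free check: reject values that are implausible outright."""
    dose = instruction["dose_mgy"]
    return dose <= 0 or dose > 100 * TYPICAL_DOSE_MGY   # e.g. 100x overdose

def cs_layer(instruction, context):
    """Context-sensitive check: a normal value may still be wrong for
    this patient (e.g. an adult-level dose applied to an infant)."""
    dose = instruction["dose_mgy"]
    if context["age_years"] < 1:                        # infant
        return dose > 0.3 * TYPICAL_DOSE_MGY
    return False

def is_anomalous(instruction, context):
    return cf_layer(instruction) or cs_layer(instruction, context)

adult = {"age_years": 45}
infant = {"age_years": 0}

print(is_anomalous({"dose_mgy": 10.0}, adult))    # False: normal
print(is_anomalous({"dose_mgy": 2000.0}, adult))  # True: CF catches 200x dose
print(is_anomalous({"dose_mgy": 10.0}, infant))   # True: only the CS layer catches this
```

The third case is the one the second layer exists for: the CF check alone would pass it, because the value is unremarkable without the patient context.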
Improving anomaly detection performance
The research team evaluated the new architecture in the CT domain using 8,277 recorded CT instructions. They tested the CF layer with 14 different unsupervised anomaly detection algorithms, and the CS layer on four different types of clinical objective contexts, using five supervised classification algorithms for each context.
Adding the second CS layer to the architecture improved the overall anomaly detection performance from an F1 score of 71.6%, using only the CF layer, to between 82% and 99%, depending on the clinical objective or the body part.
Furthermore, the CS layer enables the detection of CS anomalies, using the semantics of the device’s procedure, an anomaly type that cannot be detected using only the CF layer.
Instead of relying on customers to protect their vulnerable smart home devices from being used in cyberattacks, Ben-Gurion University of the Negev (BGU) and National University of Singapore (NUS) researchers have developed a new method that enables telecommunications and internet service providers to monitor these devices.
An overview of the key steps in the proposed method
According to their new study, the ability to launch massive DDoS attacks via a botnet of compromised devices is an exponentially growing risk in the Internet of Things (IoT). Such attacks, possibly emerging from IoT devices in home networks, impact the attack target, as well as the infrastructure of telcos.
“Most home users don’t have the awareness, knowledge, or means to prevent or handle ongoing attacks,” says Yair Meidan, a Ph.D. candidate at BGU. “As a result, the burden falls on the telcos. Our method addresses a challenging real-world problem that has already led to damaging attacks in Germany and Singapore, and poses a risk to telco infrastructure and their customers worldwide.”
Each connected device has a unique IP address. However, home networks typically use gateway routers with network address translation (NAT) functionality, which replaces the local source IP address of each outbound data packet with the household router’s public IP address. Consequently, detecting connected IoT devices from outside the home network is a challenging task.
By monitoring the data traffic from each smart home device, the researchers developed a method to detect vulnerable connected IoT models before they are compromised. This enables telcos to verify whether specific IoT models, known to be vulnerable to exploitation by malware for cyberattacks, are connected to the home network, and helps them identify potential threats and take preventive action quickly.
By using the proposed method, a telco can detect vulnerable IoT devices connected behind a NAT, and use this information to take action. In the case of a potential DDoS attack, this method would enable the telco to take steps to spare the company and its customers harm in advance, such as offloading the large volume of traffic generated by an abundance of infected domestic IoT devices. In turn, this could prevent the combined traffic surge from hitting the telco’s infrastructure, reduce the likelihood of service disruption, and ensure continued service availability.
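The general approach – matching aggregate traffic features observed outside the NAT against per-model signatures – can be caricatured as follows. The device models, feature set and signature values are all hypothetical; the study's actual method is considerably richer.

```python
# Illustrative sketch only: match outbound-traffic features, observed
# from outside the NAT, against per-model "signatures". Every model
# name, port and byte count below is an invented placeholder.

KNOWN_VULNERABLE = {"AcmeCam-2000"}      # hypothetical vulnerable model

# Hypothetical signatures: typical destination port and mean packet size.
SIGNATURES = {
    "AcmeCam-2000": {"dst_port": 8555, "mean_pkt_bytes": 512},
    "SmartPlug-X":  {"dst_port": 1883, "mean_pkt_bytes": 96},
}

def match_model(flow):
    """Return the model whose signature matches the observed flow, if any."""
    for model, sig in SIGNATURES.items():
        if (flow["dst_port"] == sig["dst_port"]
                and abs(flow["mean_pkt_bytes"] - sig["mean_pkt_bytes"]) < 64):
            return model
    return None

# A flow observed leaving a household's public (post-NAT) IP address:
flow = {"dst_port": 8555, "mean_pkt_bytes": 530}
model = match_model(flow)
print(model)                          # AcmeCam-2000
print(model in KNOWN_VULNERABLE)      # True -> the telco can act
```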
“Unlike some past studies that evaluated their methods using partial, questionable, or completely unlabeled datasets, or just one type of device, our data is versatile and explicitly labeled with the device model,” Meidan says. “We are sharing our experimental data with the scientific community as a novel benchmark to promote future reproducible research in this domain.” This dataset is available here.
This research is a first step toward dramatically mitigating the risk posed to telcos’ infrastructure by domestic NAT IoT devices. In the future, the researchers seek to further validate the scalability of the method, using additional IoT devices that represent an even broader range of IoT models, types and manufacturers.
“Although our method is designed to detect vulnerable IoT devices before they are exploited, we plan to evaluate its resilience to adversarial attacks in future research,” Meidan says. “For example, a spoofing attack, in which an infected device sends many dummy requests to IP addresses and ports that differ from the default ones, could result in missed detection.”
Video conference users should not post screen images of Zoom and other video conference sessions on social media, according to Ben-Gurion University of the Negev researchers, who easily identified people from public screenshots of video meetings on Zoom, Microsoft Teams and Google Meet.
Zoom image collage with detected information, along with extracted features of gender, age, face, and username
With the worldwide pandemic, millions of people of all ages have replaced face-to-face contact with video conferencing platforms to collaborate, educate and celebrate with co-workers, family and friends. In April 2020, nearly 500 million people were using these online systems. While there have been many privacy issues associated with video conferencing, the BGU researchers looked at what types of information they could extract from video collage images that were posted online or via social media.
“The findings in our paper indicate that it is relatively easy to collect thousands of publicly available images of video conference meetings and extract personal information about the participants, including their face images, age, gender, and full names,” says Dr. Michael Fire, BGU Department of Software and Information Systems Engineering (SISE). “This type of extracted data can vastly and easily jeopardize people’s security and privacy, affecting adults as well as young children and the elderly.”
The researchers report that it is possible to extract private information from collage images of meeting participants posted on Instagram and Twitter. They used image processing and text recognition tools, as well as social network analysis, to explore a dataset of more than 15,700 collage images and more than 142,000 face images of meeting participants.
Artificial intelligence-based image-processing algorithms helped identify the same individual’s participation at different meetings by simply using either face recognition or other extracted user features like the image background.
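A minimal sketch of such cross-meeting linkage: two face images from different meetings are attributed to the same person when their feature vectors are sufficiently similar. The toy vectors and threshold below are invented; real pipelines use learned face embeddings from a neural network.

```python
import math

# Toy sketch of cross-meeting linkage via feature-vector similarity.
# The embeddings and threshold are invented for illustration.

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

THRESHOLD = 0.95                       # hypothetical matching threshold

meeting1_face = [0.9, 0.1, 0.4]        # toy embedding from meeting 1
meeting2_face = [0.88, 0.12, 0.41]     # toy embedding from meeting 2
stranger_face = [0.1, 0.9, 0.2]        # toy embedding of someone else

same = cosine_similarity(meeting1_face, meeting2_face) > THRESHOLD
diff = cosine_similarity(meeting1_face, stranger_face) > THRESHOLD
print(same)   # True: likely the same participant in both meetings
print(diff)   # False
```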
The researchers were able to spot faces 80% of the time as well as detect gender and estimate age. Free web-based text recognition libraries allowed the BGU researchers to correctly determine nearly two-thirds of usernames from screenshots.
The researchers identified 1,153 people who likely appeared in more than one meeting, as well as networks of Zoom users in which all the participants were coworkers. “This proves that the privacy and security of individuals and companies are at risk from data exposed on video conference meetings,” according to the research team, which also includes BGU SISE researchers Dima Kagan and Dr. Galit Fuhrmann Alpert.
Cross-referencing facial image data with social network data poses an even greater privacy risk, as it is possible to identify a user who appears in several video conference meetings and maliciously aggregate different information sources about the targeted individual.
Data extraction process
The research team offers a number of recommendations to prevent privacy and security intrusions. These include not posting video conference images online or sharing videos; using generic pseudonyms like “iZoom” or “iPhone” rather than a unique username or real name; and using a virtual background rather than a real background, since a real background can help fingerprint a user account across several meetings.
Additionally, the team advises video conferencing operators to augment their platforms with a privacy mode, such as filters that add Gaussian noise to an image, which can disrupt automated facial recognition while keeping the face recognizable to humans.
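A minimal sketch of such a noise filter, assuming an 8-bit RGB frame held in a NumPy array; the noise strength is a hypothetical value, and a production privacy mode would target only the face region rather than the whole frame.

```python
import numpy as np

# Sketch of a Gaussian-noise privacy filter. The sigma value is a
# hypothetical assumption: too little noise leaves automated face
# recognition intact, too much degrades the image for humans as well.

rng = np.random.default_rng(seed=0)

def add_gaussian_noise(image, sigma=12.0):
    """Return a copy of an 8-bit image with Gaussian noise added."""
    noise = rng.normal(0.0, sigma, size=image.shape)
    noisy = image.astype(np.float64) + noise
    return np.clip(noisy, 0, 255).astype(np.uint8)

# Dummy 64x64 RGB frame standing in for a video-conference screenshot.
frame = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
noisy = add_gaussian_noise(frame)
print(noisy.shape, noisy.dtype)   # (64, 64, 3) uint8
```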
“Since organizations are relying on video conferencing to enable their employees to work from home and conduct meetings, they need to better educate employees about, and monitor, a new set of security and privacy threats,” Fire says. “Parents and the children of elderly users also need to be vigilant, as video conferencing is no different from other online activity.”
Despite a previous warning by Ben-Gurion University of the Negev (BGU) researchers, who exposed the vulnerability of 911 systems to DDoS attacks, the next generation of 911 systems – which now accommodate text, images and video – still has the same or even more severe issues.
In the study the researchers evaluated the impact of DDoS attacks on the current (E911) and next generation 911 (NG911) infrastructures in North Carolina. The research was conducted by Dr. Mordechai Guri, head of research and development, BGU Cyber Security Research Center (CSRC), and chief scientist at Morphisec Technologies, and Dr. Yisroel Mirsky, senior cyber security researcher and project manager at the BGU CSRC.
Implementation of NG911
In recent years, organizations have experienced countless DDoS attacks, during which internet-connected devices are flooded with traffic – often generated by many malware-infected computers or phones, called “bots,” acting in concert with each other. When an attacker ties up all the available connections with malicious traffic, no legitimate information – like a 911 call in a real emergency – can make it through.
“In this study, we found that only 6,000 bots are sufficient to significantly compromise the availability of a state’s 911 services and only 200,000 bots can jeopardize the entire United States,” Dr. Guri explains.
When telephone customers dial 911 on their landlines or mobile phones, the telephone companies’ systems make the connection to the appropriate call center. Due to the limitations of the original E911 system, the U.S. has been slowly transitioning from the older circuit-switched 911 infrastructure to a packet-switched VoIP infrastructure, NG911.
NG911 improves reliability by enabling load balancing between emergency call centers, or public safety answering points (PSAPs). It also expands 911 service capabilities, enabling the public to call over VoIP and transmit text, images, video, and data to PSAPs.
A number of states have implemented this and nearly all other states have begun planning or have some localized implementation of NG911.
Prevention of possible future DDoS attacks targeting 911
Many internet companies have taken significant steps to safeguard against this sort of online attack. For example, Google Shield is a service that protects news sites from attacks by using Google’s massive network of internet servers to filter out attacking traffic, while allowing through only legitimate connections. However, phone companies have not done the same.
To demonstrate how DDoS attacks could affect 911 call systems, the researchers created a detailed simulation of North Carolina’s 911 infrastructure, and a general simulation of the entire U.S. emergency-call system.
Using only 6,000 infected phones, an attacker could effectively block 911 calls from 20% of the state’s landline callers and half of its mobile customers. “In our simulation, even people who called back four or five times would not be able to reach a 911 operator to get help,” Dr. Guri says.
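The blocking effect can be illustrated with a crude Monte Carlo sketch: a call center with a fixed number of lines receives a random number of simultaneous bot calls, and a legitimate 911 call is blocked whenever no line is free. All capacities and attack sizes below are invented for illustration, not the values from the study's North Carolina simulation.

```python
import random

# Toy Monte Carlo model of trunk exhaustion under a telephony DDoS.
# LINES and MAX_BOTS are hypothetical numbers, not from the study.

random.seed(1)

LINES = 100        # hypothetical trunk capacity of one call center
MAX_BOTS = 120     # hypothetical peak simultaneous bot calls against it

def blocking_probability(trials=10_000):
    """Estimate how often a legitimate caller finds every line busy."""
    blocked = 0
    for _ in range(trials):
        bots_active = random.randint(0, MAX_BOTS)  # bot calls at this instant
        if bots_active >= LINES:                   # no free line left
            blocked += 1
    return blocked / trials

print(f"approx. blocking probability: {blocking_probability():.2f}")
```

Even this toy model shows the qualitative point: once peak bot traffic approaches trunk capacity, a meaningful fraction of legitimate callers is locked out, and repeated redialing against a sustained attack does not help.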
The countermeasures that exist today are difficult to deploy and not without flaws. Many involve blocking certain devices from calling 911, which carries the risk of preventing a legitimate call for help. But they indicate areas where further inquiry – and collaboration between researchers, telecommunications companies, regulators, and emergency personnel – could yield useful breakthroughs.
For example, cellphones might be required to run monitoring software that blacklists or blocks them from making fraudulent 911 calls. Or 911 systems could examine identifying information of incoming calls and prioritize those made from phones that are not trying to mask themselves.
“Many say that the new NG911 solves the DDoS problem because callers can be connected to PSAPs around the country, not just locally,” Dr. Mirsky explains. “Nationally, with complete resource sharing, the rate that callers give up trying — called the ‘despair rate’ — is still significant: 15% with 6,000 bots and 43% with 50,000 bots.
“But the system would still need to communicate locally to dispatch police, medical and fire services. As a result, the despair rate is more likely to be 56% with 6,000 bots – worse than with the original E911 infrastructure.”
According to Dr. Guri, “We believe that this research will assist the respective organizations, lawmakers and security professionals in understanding the scope of this issue and aid in the prevention of possible future attacks on the 911 emergency services. It is critical that 911 services always be available – to respond quickly to emergencies and give the public peace of mind.”
Researchers from Ben-Gurion University of the Negev’s (BGU) Cyber Security Research Center have found that they can trick the autopilot of an autonomous car into erroneously applying its brakes in response to “phantom” images projected on a road or billboard.
In a research paper, the researchers demonstrated that autopilots and advanced driver-assistance systems (ADASs) in semi-autonomous or fully autonomous cars register depthless projections of objects (phantoms) as real objects.
They show how attackers can exploit this perceptual weakness to manipulate the vehicle and potentially harm the driver or passengers, without any special expertise, using only a commercial drone and an inexpensive image projector.
Vehicular communication systems are lagging
While fully and semi-autonomous cars are already being deployed around the world, vehicular communication systems that connect the car with other cars, pedestrians and surrounding infrastructure are lagging.
According to the researchers, the lack of such systems creates a “validation gap,” which prevents autonomous vehicles from validating their virtual perception with a third party, leaving them to rely only on internal sensors.
In addition to causing the autopilot to apply brakes, the researchers demonstrated they can fool the ADAS into believing phantom traffic signs are real, when projected for 125 milliseconds in advertisements on digital billboards.
Flaws in object detectors
Lastly, they showed how fake lane markers projected on a road by a projector-equipped drone will guide the autopilot into the opposite lane and potentially oncoming traffic.
“This type of attack is currently not being taken into consideration by the automobile industry. These are not bugs or poor coding errors but fundamental flaws in object detectors, which use feature matching to detect visual objects and are not trained to distinguish between real and fake objects,” says Ben Nassi, lead author and a Ph.D. student of Prof. Yuval Elovici in BGU‘s Department of Software and Information Systems Engineering and Cyber Security Research Center.
In practice, depthless objects projected on a road are treated as real even though depth sensors can differentiate between 2D and 3D objects. The BGU researchers believe this is the result of a “better safe than sorry” policy that causes the car to consider a visible 2D object real.
The researchers are developing a neural network model that analyzes a detected object’s context, surface and reflected light, which is capable of detecting phantoms with high accuracy.
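The idea behind such a model can be caricatured as a weighted vote over plausibility cues. Everything below – the features, weights and threshold – is invented for illustration; the researchers' actual countermeasure is a trained neural network operating on image data, not hand-set weights.

```python
import math

# Caricature of phantom rejection: combine weak cues (context, surface
# texture, reflected light) into one plausibility score and reject
# detections that score too low. All numbers are invented assumptions.

WEIGHTS = {"context": 1.5, "surface": 2.0, "light": 1.8}
BIAS = -3.0
THRESHOLD = 0.5

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def is_real(features):
    """Features are scores in [0, 1]; higher means more physically
    plausible (sensible context, matte surface, consistent lighting)."""
    score = BIAS + sum(WEIGHTS[k] * features[k] for k in WEIGHTS)
    return sigmoid(score) > THRESHOLD

pedestrian = {"context": 0.9, "surface": 0.8, "light": 0.85}  # plausible cues
phantom    = {"context": 0.2, "surface": 0.1, "light": 0.15}  # projection-like cues

print(is_real(pedestrian))  # True
print(is_real(phantom))     # False
```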