
Success for ATHENE researchers

01/07/2020

Ten scientific contributions at cybersecurity conference

ATHENE researchers from the contributing research institutions Fraunhofer SIT and TU Darmstadt succeeded in placing no fewer than ten papers at the “International Conference on Availability, Reliability and Security”, ARES for short. ARES highlights different aspects of security, focusing mainly on the crucial connection between availability, reliability and security.

The following papers were accepted at the main conference:

HIP: HSM-based Identities for Plug-and-Charge
Authors: Andreas Fuchs, Dustin Kern, Christoph Krauß, Maria Zhdanova
Abstract
Plug-and-Charge (PnC) standards such as ISO 15118 enable Electric Vehicle (EV) authentication against Charge Points (CPs) without driver intervention. Credentials are stored in the vehicle itself, making methods using RFID cards obsolete. However, credentials are generated in service provider backend systems and provisioned via the Internet, not in a secure Hardware Security Module (HSM) within the vehicle. In this paper, we propose HIP, a backwards-compatible protocol extension for ISO 15118 in which keys are generated and stored in a Trusted Platform Module (TPM) within the vehicle. Our implementation and evaluation show that our solution is feasible and a viable option for future editions of ISO 15118.
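To illustrate the core idea, here is a minimal Python sketch in which the vehicle's contract key is created inside (simulated) secure hardware and never exported; only the public key and signatures leave the module. A software key from the cryptography library stands in for a real TPM, and all names are illustrative rather than taken from the paper:

    # Sketch of the PnC idea in HIP: the EV's contract key is created inside
    # secure hardware and never exported; only the public key leaves the
    # vehicle for certification. A software key stands in for a real TPM here.
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import ec

    # In HIP, this generation step would happen inside the vehicle's TPM.
    vehicle_key = ec.generate_private_key(ec.SECP256R1())

    # Only the public part is provisioned to the service provider backend.
    public_pem = vehicle_key.public_key().public_bytes(
        serialization.Encoding.PEM,
        serialization.PublicFormat.SubjectPublicKeyInfo,
    )

    # During charge authorization, the vehicle proves possession of the key
    # by signing a challenge from the Charge Point; the private key never
    # leaves the (simulated) secure module.
    challenge = b"charge-point-nonce"
    signature = vehicle_key.sign(challenge, ec.ECDSA(hashes.SHA256()))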

MP2ML: A mixed-protocol machine learning framework for private inference
Collaboration between TU Darmstadt with Intel AI, USA and Intel Labs, USA
Authors: Fabian Boemer, Rosario Cammarota, Daniel Demmler, Thomas Schneider, Hossein Yalame
Abstract
Privacy-preserving machine learning (PPML) has many applications, from medical image evaluation and anomaly detection to financial analysis. nGraph-HE (Boemer et al., Computing Frontiers'19) enables data scientists to perform private inference of deep learning (DL) models trained using popular frameworks such as TensorFlow. nGraph-HE computes linear layers using the CKKS homomorphic encryption (HE) scheme (Cheon et al., ASIACRYPT'17), and relies on a client-aided model to compute non-polynomial activation functions, such as MaxPool and ReLU, where intermediate feature maps are sent to the data owner to compute activation functions in the clear. This approach leaks the feature maps, from which it may be possible to deduce the DL model weights. As a result, the client-aided model may not be suitable for deployment when the DL model is intellectual property.
In this work, we present MP2ML, a machine learning framework which integrates nGraph-HE and the secure two-party computation framework ABY (Demmler et al., NDSS'15) to overcome the limitations of the client-aided model. We introduce a novel scheme for the conversion between CKKS and secure multi-party computation (MPC) to execute DL inference while maintaining the privacy of both the input data and the model weights. MP2ML is compatible with popular DL frameworks such as TensorFlow and can infer pre-trained neural networks with native ReLU activations. We benchmark MP2ML on the CryptoNets network with ReLU activations, on which it achieves a throughput of 33.3 images/s and an accuracy of 98.6%. This throughput matches the previous state of the art for hybrid HE-MPC networks, GAZELLE (Juvekar et al., USENIX'18), while our protocol is more accurate and scalable.
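The conversion between HE and MPC that such hybrid frameworks rely on can be illustrated with a plaintext simulation: the server adds a random mask to the (conceptually encrypted) intermediate value, the client decrypts the masked value, and the two parties end up holding additive shares. This sketch only mimics the arithmetic; MP2ML performs the masking on CKKS ciphertexts, and all names here are illustrative:

    # Plaintext simulation of the HE-to-MPC conversion idea used by hybrid
    # frameworks such as MP2ML. The real scheme operates on ciphertexts;
    # this toy version only mimics the arithmetic.
    import secrets

    MOD = 2**32  # ring in which the additive shares live

    def he_to_shares(encrypted_value: int) -> tuple[int, int]:
        """Server masks the (conceptually encrypted) value with randomness r.
        The client decrypts x + r; (x + r, -r) are additive shares of x."""
        r = secrets.randbelow(MOD)
        client_share = (encrypted_value + r) % MOD   # client decrypts this
        server_share = (-r) % MOD                    # server keeps this
        return client_share, server_share

    x = 1234  # intermediate feature value, normally hidden inside a ciphertext
    c, s = he_to_shares(x)
    assert (c + s) % MOD == x  # shares reconstruct x; neither side sees it alone
    # The shares can now enter an MPC protocol (e.g. ABY) to evaluate ReLU.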

Subverting Linux’ Integrity Measurement Architecture
Authors: Felix Bohling, Michael Eckel, Jens Lindemann, Tobias Mueller
Abstract
Integrity is a key protection objective in the context of system security. This holds for both hardware and software. Since hardware cannot be changed after its manufacturing process, the manufacturer must be trusted to build it properly. With software, however, it is completely different: users of a computer system are free to run arbitrary software on it and even to modify the BIOS/UEFI, bootloader, or Operating System (OS).
Ensuring that only authentic software is loaded on a machine requires additional measures to be in place. Trusted Computing technology can be employed to protect the integrity of system software by leveraging a Trusted Platform Module (TPM). Measured Boot uses the TPM to record measurements of all boot software in a tamper-resistant manner. Remote attestation then allows a third party to investigate these TPM-protected measurements at a later point and verify whether only authentic software was loaded.
Measured Boot ends with loading and running the OS kernel. The Linux Integrity Measurement Architecture (IMA) extends the principle of Measured Boot into the OS, recording all software executions and files read into the TPM. Hence, IMA constitutes an essential part of the Trusted Computing Base (TCB).
In this paper, we demonstrate that the security guarantees of IMA can be undermined by means of a malicious block device. We validate the viability of the attack with an implementation of a specially crafted malicious block device in QEMU, which delivers different data depending on whether the block has already been accessed. We analyse how the attack affects certain use cases of IMA and discuss potential mitigations.
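The gist of the attack is a time-of-measurement versus time-of-use discrepancy: the device serves authentic data when IMA measures a block and attacker-controlled data afterwards. A conceptual Python sketch (the paper implements this inside QEMU; this toy version is ours):

    # Conceptual sketch of the attack: a malicious block device returns benign
    # content the first time a block is read (when IMA measures the file) and
    # malicious content on later reads (when the kernel actually uses it).
    class MaliciousBlockDevice:
        def __init__(self, benign: bytes, malicious: bytes):
            self.benign = benign
            self.malicious = malicious
            self.reads = {}  # block number -> read count

        def read_block(self, block_no: int) -> bytes:
            count = self.reads.get(block_no, 0)
            self.reads[block_no] = count + 1
            # First access: serve the authentic data that IMA will measure.
            # Subsequent accesses: serve the attacker's payload.
            return self.benign if count == 0 else self.malicious

    dev = MaliciousBlockDevice(b"#!/bin/sh\necho ok\n", b"#!/bin/sh\nevil\n")
    measured = dev.read_block(0)   # what IMA hashes into the TPM
    executed = dev.read_block(0)   # what actually runs
    assert measured != executed    # the TPM log no longer reflects reality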

The following papers were accepted for ARES 2020 workshops:

Mixed code text analysis for the detection of online hidden propaganda
Authors: Andrea Tundis, Gaurav Mukherjee, Max Mühlhäuser
4th International Workshop on Criminal Use of Information Hiding (CUING 2020)
Abstract
Internet-based communication systems have increasingly become a tool for spreading misinformation and propaganda. Though mechanisms adept at tracking unwarranted information and messages exist, users have devised different methods to avoid scrutiny and detection. One such method is the use of mixed code language. Mixed code is text written in an unconventional form combining different languages, symbols, scripts and shapes, with the aim of making it difficult to detect due to its custom approach and its ever-changing aspects.
Special characters are used to substitute for letters, which keeps the text readable to humans but renders it nonsensical to machines. The intuition is that a substituted character should resemble the shape of the intended letter. In this context, the paper explores the possibility of identifying such mixed code texts with special characters by proposing an approach to normalize them and determine whether they contain propaganda elements. To this end, a tailored algorithm in combination with a deep learning model for character selection is defined and presented. The results gathered from the experiments are discussed and the achieved performance is compared with related work.
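As a rough illustration of the normalization step, the following Python sketch maps visually similar special characters back to plain letters before any further analysis. The substitution table is a toy example; in the paper, character selection is driven by a deep learning model:

    # Toy homoglyph normalization in the spirit of the paper's approach:
    # map visually similar special characters back to plain Latin letters
    # before running a propaganda classifier.
    HOMOGLYPHS = {
        "@": "a", "4": "a", "3": "e", "€": "e", "1": "i",
        "0": "o", "$": "s", "5": "s", "7": "t", "¢": "c",
    }

    def normalize(text: str) -> str:
        return "".join(HOMOGLYPHS.get(ch, ch) for ch in text.lower())

    print(normalize("fr33 y0ur m1nd"))  # -> "free your mind"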

DICE harder - A Hardware Implementation of the Device Identifier Composition Engine
Authors: Lukas Jäger, Richard Petri
3rd International Workshop on Security Engineering for Cloud Computing (IWSECC 2020)
Abstract
The specification of the Device Identifier Composition Engine (DICE) has been established as a minimal solution for Trusted Computing on microcontrollers. It allows for a wide range of possible implementations. Currently, most implementations use hardware that was not specifically designed for this purpose. These implementations rely on black-box MPUs, and the implementation process has certain pitfalls due to the use of hardware that was not originally designed for DICE.
We propose a DICE architecture that is based on a microcontroller equipped with hardware tailored to DICE's requirements.
Since DICE is intended to be a minimal solution for Trusted Computing, the architecture is designed to add as little overhead to a microcontroller as possible. It consists of minor modifications to the CPU's processor pipeline, dedicated blocks of memory, and modified interrupt and debug modules, which makes it easy to implement. A prototype built on the VexRiscV platform, an open implementation of the RISC-V instruction set architecture, is created. It is synthesized for an FPGA, and the increase in chip size and the impact on runtime due to the DICE extensions are evaluated. The goal is to demonstrate that, with minimal changes to a microcontroller's design, a DICE can be implemented and used as a secure Root of Trust in environments such as IoT, industrial and automotive.
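The derivation at the heart of DICE can be stated in a few lines: a Compound Device Identifier (CDI) is derived from a Unique Device Secret (UDS) and a measurement of the first mutable code, so any firmware change yields a different device identity. A Python sketch of this standard DICE derivation (the paper's contribution is implementing it in dedicated hardware):

    # The essential DICE derivation, sketched in Python: the Compound Device
    # Identifier (CDI) is derived from the Unique Device Secret (UDS) and a
    # measurement (hash) of the first mutable code, e.g. via HMAC.
    import hashlib, hmac

    def derive_cdi(uds: bytes, firmware: bytes) -> bytes:
        measurement = hashlib.sha256(firmware).digest()
        return hmac.new(uds, measurement, hashlib.sha256).digest()

    uds = b"\x01" * 32                      # device-unique secret, fused in hardware
    cdi_ok = derive_cdi(uds, b"firmware v1")
    cdi_tampered = derive_cdi(uds, b"firmware v1 + implant")
    assert cdi_ok != cdi_tampered           # tampering changes the identity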

Critical Traffic Analysis on the Tor Network
Authors: Florian Platzer, Marcel Schäfer, Martin Steinebach
4th International Workshop on Criminal Use of Information Hiding (CUING 2020)
Abstract
Tor is a widely used anonymity network with more than two million daily users. A special feature of Tor is its hidden service architecture.
Hidden services are a popular method for communicating or sharing web content anonymously. A specialty of Tor is that, for security reasons, all data packets that are sent are structured completely identically: they are encrypted using the TLS protocol and have a fixed size of exactly 512 bytes. However, Tor is an example of a network without generated traffic noise to make traffic analysis more difficult. We identify a method to deanonymize any hidden service on Tor based on traffic analysis, which is a threat to anonymity online. Using our technique, an attacker with modest resources could deanonymize any hidden service in less than 12.5 days.

Detecting Double Compression and Splicing using Benford's First Digit Law
Authors: Raphael Antonius Frick, Huajian Liu, Martin Steinebach
13th International Workshop on Digital Forensics (WSDF 2020)
Abstract
Detecting image forgeries in JPEG-encoded images has been a research topic in the field of media forensics for a long time. It still holds high importance today, as tools to create convincing manipulations of images have become more and more accessible to the public, which in turn might be used to, for example, generate fake news. In this paper, a passive forensic detection framework to detect image manipulations is proposed, based on compression artefacts and Benford's First Digit Law. It incorporates a supervised approach to reconstruct the compression history as well as an unsupervised approach to detect double compression for unknown quantization tables. The implemented algorithms were able to achieve high AUC values when classifying high-quality images, exceeding those of similar state-of-the-art methods.
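Benford's First Digit Law predicts that the leading digit d of naturally distributed values (such as JPEG DCT coefficients) occurs with probability log10(1 + 1/d). A small Python sketch of the underlying test, measuring how far an observed first-digit histogram deviates from the law (illustrative only; the paper's detectors are more elaborate):

    # Compare the observed first-digit histogram to Benford's law,
    # P(d) = log10(1 + 1/d); double compression tends to disturb this
    # distribution for JPEG DCT coefficients.
    import math
    from collections import Counter

    def first_digit(x: float) -> int:
        x = abs(x)
        while x >= 10:
            x /= 10
        while 0 < x < 1:
            x *= 10
        return int(x)

    def benford_deviation(values) -> float:
        digits = [first_digit(v) for v in values if v != 0]
        counts = Counter(digits)
        n = len(digits) or 1
        return sum(abs(counts.get(d, 0) / n - math.log10(1 + 1 / d))
                   for d in range(1, 10))

    # A large deviation suggests the coefficients no longer follow Benford's
    # law, a hint that the image may have been compressed more than once.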

Privacy-Enhanced Robust Image Hashing with Bloom Filters
Authors: Uwe Breidenbach, Huajian Liu, Martin Steinebach
9th International Workshop on Cyber Crime (IWCC 2020)
Abstract
Robust image hashes are used to detect known illegal images, even after image processing. This is, for example, of interest in a forensic investigation, or for a company seeking to protect its employees and customers by filtering content. The disadvantage of robust hashes is that they leak structural information about the pictures, which can lead to privacy issues. Our scientific contribution is to extend a robust image hash with privacy protection. We thus introduce and discuss such a privacy-preserving concept. The approach uses a probabilistic data structure, known as a Bloom filter, to store robust image hashes. A Bloom filter stores elements by mapping hashes of each element to an internal data structure. We choose a cryptographic hash function so that elements are one-way encrypted before being stored.
The privacy of the inserted elements is thus protected. We evaluate our implementation and compare it to its underlying robust image hashing algorithm. Thereby, we show the cost, with respect to error rates, of introducing privacy protection into robust hashing. Finally, we discuss our approach's results and usability, and suggest possible future improvements.
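A minimal Python sketch of the idea: robust image hashes are inserted into a Bloom filter via bit positions derived from a cryptographic hash, so membership can be tested but the stored hashes cannot be read back. Parameters and names are illustrative, not the paper's actual choices:

    # Minimal Bloom filter over robust image hashes: each item is inserted
    # via k positions derived from a cryptographic hash, so the filter
    # reveals membership but not the stored hashes themselves.
    import hashlib

    class BloomFilter:
        def __init__(self, m: int = 8192, k: int = 4):
            self.m, self.k = m, k
            self.bits = bytearray(m // 8)

        def _positions(self, item: bytes):
            for i in range(self.k):
                h = hashlib.sha256(i.to_bytes(4, "big") + item).digest()
                yield int.from_bytes(h[:8], "big") % self.m

        def add(self, item: bytes):
            for p in self._positions(item):
                self.bits[p // 8] |= 1 << (p % 8)

        def __contains__(self, item: bytes) -> bool:
            return all(self.bits[p // 8] & (1 << (p % 8))
                       for p in self._positions(item))

    bf = BloomFilter()
    bf.add(b"robust-hash-of-known-image")
    assert b"robust-hash-of-known-image" in bf
    assert b"robust-hash-of-other-image" not in bf  # may rarely false-positive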

Non-Blind Steganalysis
Authors: Niklas Bunzel, Huajian Liu, Martin Steinebach
4th International Workshop on Criminal Use of Information Hiding (CUING 2020)
Abstract
The increasing digitization offers new ways, possibilities and needs for the secure transmission of information. Steganography and its analysis constitute an essential part of IT security. In this work, we show how methods of blind steganalysis can be improved to work in non-blind scenarios. The main objective was to examine how to take advantage of the knowledge of reference images to maximize the accuracy rate of the analysis. To this end, we evaluated common stego tools and their embedding algorithms and established a dataset of 353,110 images. These images were used to test the potency of the improved non-blind steganalysis methods. The results show that the accuracy can be significantly improved by using cover images to produce reference images. The aggregation of the outcomes has also been shown to have a positive impact on the accuracy. Particularly noteworthy is the correlation between the qualities of the stego and cover images. Only by considering both could the accuracy be strongly improved. Interestingly, the difference between the two qualities also has a strong impact on the results.
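The advantage of the non-blind setting can be shown in a few lines: with a cover-derived reference image available, embedding changes appear directly in the pixel residual between the stego image and the reference. A toy NumPy sketch (real steganalysis uses far richer features; this only illustrates the benefit of a reference):

    # The non-blind setting in one line of arithmetic: with a reference
    # (cover) image available, embedding changes show up directly in the
    # pixel residual between stego and cover.
    import numpy as np

    cover = np.random.randint(0, 256, (8, 8), dtype=np.uint8)
    stego = cover.copy()
    stego[3, 5] ^= 1  # flip one LSB, as simple embedding schemes do

    residual = stego.astype(int) - cover.astype(int)
    changed = np.argwhere(residual != 0)
    print(changed)  # -> [[3 5]]: even a single-bit change is visible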

TAVeer - An Interpretable Topic-Agnostic Authorship Verification Method
Authors: Oren Halvani, Lukas Graner, Roey Regev
13th International Workshop on Digital Forensics (WSDF 2020)
Abstract
A central problem that has been researched for many years in the field of digital text forensics is the question of whether two documents were written by the same author. Authorship verification (AV) is a research branch in this field that deals with this question. Over the years, research activity in the context of AV has steadily increased, which has led to a variety of approaches trying to solve this problem. Many of these approaches, however, make use of features that are related to or influenced by the topic of the documents. Therefore, it may accidentally happen that their verification results are based not on the writing style (the actual focus of AV), but on the topic of the documents. To address this problem, we propose an alternative AV approach that considers only topic-agnostic features in its classification decision. In addition, we present a post-hoc interpretation method that makes it possible to understand which particular features have contributed to the prediction of the proposed AV method. To evaluate the performance of our AV method, we compared it with eight competing baselines (including the current state of the art) on four challenging data sets. The results show that our approach outperforms all baselines in two cases (with a maximum accuracy of 84%), while in the other two cases it performs as well as the strongest baseline.
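To make "topic-agnostic features" concrete, the following Python sketch extracts frequencies of function words and punctuation, which reflect writing style rather than topic. The feature list is a toy example, not the authors' actual feature set:

    # Illustrative topic-agnostic feature extraction in the spirit of TAVeer:
    # function-word and punctuation frequencies depend on writing style
    # rather than on what the document is about.
    import re
    from collections import Counter

    FUNCTION_WORDS = ["the", "of", "and", "to", "in", "that", "it", "for", "with"]
    PUNCTUATION = list(".,;:!?")

    def style_features(text: str) -> list[float]:
        tokens = re.findall(r"[a-z']+", text.lower())
        n = max(len(tokens), 1)
        counts = Counter(tokens)
        word_feats = [counts[w] / n for w in FUNCTION_WORDS]
        punct_feats = [text.count(p) / max(len(text), 1) for p in PUNCTUATION]
        return word_feats + punct_feats

    # Verification then compares the feature vectors of the two documents,
    # e.g. via a distance threshold or a trained classifier.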

ARES 2020 will take place from 25 to 28 August 2020 as a digital event.
