Poster

Adversarial Inputs for Linear Algebra Backends

Jonas Möller · Lukas Pirch · Felix Weissberg · Sebastian Baunsgaard · Thorsten Eisenhofer · Konrad Rieck

Thu 17 Jul 4:30 p.m. PDT — 7 p.m. PDT

Abstract:

Linear algebra is a cornerstone of neural network inference. The efficiency of popular frameworks, such as TensorFlow and PyTorch, critically depends on backend libraries providing highly optimized matrix multiplications and convolutions. A diverse range of these backends exists across platforms, including Intel MKL, Nvidia CUDA, and Apple Accelerate. Although these backends provide equivalent functionality, subtle variations in their implementations can lead to seemingly negligible differences during inference. In this paper, we investigate these minor discrepancies and demonstrate how they can be selectively amplified by adversaries. Specifically, we introduce Chimera examples, inputs to models that elicit conflicting predictions depending on the employed backend library. These inputs can even be constructed with integer values, creating a vulnerability exploitable from real-world input domains. We analyze the prevalence and extent of the underlying attack surface and propose corresponding defenses to mitigate this threat.
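
To make the mechanism concrete, here is a minimal sketch, not code from the paper: floating-point addition is not associative, so two kernels that accumulate a dot product in different orders (standing in for two backend libraries) can disagree in the low-order bits, and a logit placed inside that gap flips the predicted class. The bias construction and all names below are hypothetical illustrations.

```python
import numpy as np

# Floating-point addition is not associative: summing the same terms
# in a different order can change the low-order bits of the result.
rng = np.random.default_rng(0)
w = rng.standard_normal(4096).astype(np.float32)
x = rng.standard_normal(4096).astype(np.float32)

# Two accumulation orders, standing in for two backend kernels.
seq = np.float32(0.0)
for wi, xi in zip(w, x):
    seq += wi * xi          # naive left-to-right accumulation
vec = np.dot(w, x)          # library dot product (blocked/SIMD order)

print(f"sequential: {seq!r}")
print(f"library:    {vec!r}")
print(f"difference: {seq - vec!r}")  # typically nonzero in float32

# A classifier whose logit falls inside this gap predicts differently
# depending on which kernel computed it. Here a hypothetical bias term
# places the decision boundary between the two results (assuming
# seq != vec), mimicking the conflicting-prediction condition the
# abstract calls a Chimera example.
bias = np.float32(-(np.float64(seq) + np.float64(vec)) / 2.0)
print("backend A class:", int(seq + bias > 0))
print("backend B class:", int(vec + bias > 0))
```

In the attack described by the abstract, the adversary does not control the bias; instead, the input itself is optimized so that the logits computed by the actual backends (e.g., Intel MKL versus Nvidia CUDA) land on opposite sides of the decision boundary.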
