[LINUX] The Kernel Address Sanitizer (KASAN) (2/2)

The quoted text below is part of the Linux kernel source tree, so it is treated as GPLv2 (as it should be).

https://www.kernel.org/doc/html/latest/index.html

Licensing documentation

The following describes the license of the Linux kernel source code (GPLv2), how to properly mark the license of individual files in the source tree, as well as links to the full license text.

https://www.kernel.org/doc/html/latest/process/license-rules.html#kernel-licensing

https://www.kernel.org/doc/html/latest/dev-tools/kasan.html


The Kernel Address Sanitizer (KASAN)

Implementation details

Generic KASAN

From a high level, our approach to memory error detection is similar to that of kmemcheck: use shadow memory to record whether each byte of memory is safe to access, and use compile-time instrumentation to insert checks of shadow memory on each memory access.

Generic KASAN dedicates 1/8th of kernel memory to its shadow memory (e.g. 16TB to cover 128TB on x86_64) and uses direct mapping with a scale and offset to translate a memory address to its corresponding shadow address.

Here is the function which translates an address to its corresponding shadow address:

static inline void *kasan_mem_to_shadow(const void *addr)
{
    return (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
            + KASAN_SHADOW_OFFSET;
}

where KASAN_SHADOW_SCALE_SHIFT = 3.
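
To make the arithmetic concrete, here is a stand-alone user-space sketch of the same translation. The offset below is the x86_64 default at the time of writing, but KASAN_SHADOW_OFFSET is architecture- and configuration-dependent, so treat the numbers as an example only.

/* User-space sketch of the address-to-shadow arithmetic.
 * The offset and the sample address are examples, not universal values. */
#include <stdio.h>

#define KASAN_SHADOW_SCALE_SHIFT 3                     /* 1 shadow byte per 8 bytes */
#define KASAN_SHADOW_OFFSET      0xdffffc0000000000UL  /* example: x86_64 default */

static inline void *kasan_mem_to_shadow(const void *addr)
{
    return (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
            + KASAN_SHADOW_OFFSET;
}

int main(void)
{
    /* example kernel linear-mapping address on x86_64 */
    const void *addr = (const void *)0xffff888000000000UL;

    printf("shadow(%p) = %p\n", addr, kasan_mem_to_shadow(addr));
    return 0;
}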

Compile-time instrumentation is used to insert memory access checks. Compiler inserts function calls (__asan_load*(addr), __asan_store*(addr)) before each memory access of size 1, 2, 4, 8 or 16. These functions check whether memory access is valid or not by checking corresponding shadow memory.
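
Conceptually, outline instrumentation rewrites an ordinary memory access as if a call to the corresponding check had been written just before it. The following is only a sketch of that transformation, not actual compiler output; the __asan_load4() prototype is a simplified view of the callbacks implemented in mm/kasan.

/* Before instrumentation: */
int read_word(int *p)
{
    return *p;                          /* 4-byte load */
}

/* Roughly what the outline-instrumented kernel does (sketch only): */
void __asan_load4(unsigned long addr);  /* provided by mm/kasan; reports bad accesses */

int read_word_instrumented(int *p)
{
    __asan_load4((unsigned long)p);     /* check the shadow bytes for p before loading */
    return *p;
}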

GCC 5.0 has possibility to perform inline instrumentation. Instead of making function calls GCC directly inserts the code to check the shadow memory. This option significantly enlarges kernel but it gives x1.1-x2 performance boost over outline instrumented kernel.
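
With inline instrumentation the shadow check is expanded at the access site instead of calling a helper. Roughly, and again only as a sketch rather than the code GCC really emits, an 8-byte load looks like this:

/* Sketch of an inline-instrumented 8-byte load. For an 8-byte access the
 * whole granule must be accessible, so any non-zero shadow value is a bug;
 * shorter accesses additionally compare against the count of accessible
 * bytes that a non-zero shadow value encodes. */
void *kasan_mem_to_shadow(const void *addr);            /* shown above */
void __asan_report_load8_noabort(unsigned long addr);   /* provided by mm/kasan */

unsigned long read_u64_instrumented(const unsigned long *p)
{
    signed char shadow = *(signed char *)kasan_mem_to_shadow(p);

    if (shadow != 0)
        __asan_report_load8_noabort((unsigned long)p);  /* print the KASAN report */

    return *p;                                          /* the original load */
}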

Software tag-based KASAN

Tag-based KASAN uses the Top Byte Ignore (TBI) feature of modern arm64 CPUs to store a pointer tag in the top byte of kernel pointers. Like generic KASAN it uses shadow memory to store memory tags associated with each 16-byte memory cell (therefore it dedicates 1/16th of the kernel memory for shadow memory).
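
As an illustration of the pointer-tagging idea, the helpers below show how a tag can be stored in and read back from the top byte of a pointer. The names are made up for this example and are not the kernel's internal API.

/* Illustrative helpers; hypothetical names, not the kernel's functions. */
#define TAG_SHIFT 56  /* the tag lives in the top byte, which TBI makes the CPU ignore */

static inline void *example_set_tag(void *ptr, unsigned char tag)
{
    return (void *)(((unsigned long)ptr & ~(0xffUL << TAG_SHIFT))
                    | ((unsigned long)tag << TAG_SHIFT));
}

static inline unsigned char example_get_tag(const void *ptr)
{
    return (unsigned long)ptr >> TAG_SHIFT;
}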

On each memory allocation tag-based KASAN generates a random tag, tags the allocated memory with this tag, and embeds this tag into the returned pointer. Software tag-based KASAN uses compile-time instrumentation to insert checks before each memory access. These checks make sure that tag of the memory that is being accessed is equal to tag of the pointer that is used to access this memory. In case of a tag mismatch tag-based KASAN prints a bug report.
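
Building on the helpers above, the per-access check can be pictured roughly as follows; example_memory_tag() is a hypothetical stand-in for reading the tag byte that shadow memory stores for the accessed 16-byte granule.

/* Sketch of the check inserted before each access (hypothetical names). */
unsigned char example_memory_tag(const void *tagged_ptr);  /* tag byte from shadow memory */

static inline int example_access_ok(const void *tagged_ptr)
{
    unsigned char ptr_tag = example_get_tag(tagged_ptr);
    unsigned char mem_tag = example_memory_tag(tagged_ptr);

    return ptr_tag == mem_tag;   /* a mismatch makes KASAN print a bug report */
}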

Software tag-based KASAN also has two instrumentation modes (outline, that emits callbacks to check memory accesses; and inline, that performs the shadow memory checks inline). With outline instrumentation mode, a bug report is simply printed from the function that performs the access check. With inline instrumentation a brk instruction is emitted by the compiler, and a dedicated brk handler is used to print bug reports.

A potential expansion of this mode is a hardware tag-based mode, which would use hardware memory tagging support instead of compiler instrumentation and manual shadow memory manipulation.

What memory accesses are sanitised by KASAN?

The kernel maps memory in a number of different parts of the address space. This poses something of a problem for KASAN, which requires that all addresses accessed by instrumented code have a valid shadow region.

The range of kernel virtual addresses is large: there is not enough real memory to support a real shadow region for every address that could be accessed by the kernel.

By default

By default, architectures only map real memory over the shadow region for the linear mapping (and potentially other small areas). For all other areas - such as vmalloc and vmemmap space - a single read-only page is mapped over the shadow area. This read-only shadow page declares all memory accesses as permitted.

This presents a problem for modules: they do not live in the linear mapping, but in a dedicated module space. By hooking in to the module allocator, KASAN can temporarily map real shadow memory to cover them. This allows detection of invalid accesses to module globals, for example.

This also creates an incompatibility with VMAP_STACK: if the stack lives in vmalloc space, it will be shadowed by the read-only page, and the kernel will fault when trying to set up the shadow data for stack variables.

CONFIG_KASAN_VMALLOC

With CONFIG_KASAN_VMALLOC, KASAN can cover vmalloc space at the cost of greater memory usage. Currently this is only supported on x86.
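
For reference, enabling this on x86_64 comes down to a kernel configuration along these lines (exact option availability depends on the kernel version and toolchain):

CONFIG_KASAN=y
CONFIG_KASAN_GENERIC=y
CONFIG_KASAN_VMALLOC=y
CONFIG_VMAP_STACK=y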

This works by hooking into vmalloc and vmap, and dynamically allocating real shadow memory to back the mappings.

Most mappings in vmalloc space are small, requiring less than a full page of shadow space. Allocating a full shadow page per mapping would therefore be wasteful. Furthermore, to ensure that different mappings use different shadow pages, mappings would have to be aligned to KASAN_SHADOW_SCALE_SIZE * PAGE_SIZE.

Instead, we share backing space across multiple mappings. We allocate a backing page when a mapping in vmalloc space uses a particular page of the shadow region. This page can be shared by other vmalloc mappings later on.

We hook in to the vmap infrastructure to lazily clean up unused shadow memory.

To avoid the difficulties around swapping mappings around, we expect that the part of the shadow region that covers the vmalloc space will not be covered by the early shadow page, but will be left unmapped. This will require changes in arch-specific code.

This allows VMAP_STACK support on x86, and can simplify support of architectures that do not have a fixed module region.
