kmem_cache, SLAB & SLUB
The Linux kmem_cache is part of the kernel's infrastructure for handling memory allocations and de-allocations that cannot tolerate delays (e.g. process creation, network packet handling). To accomplish this, memory pools for particular structure types and sizes are pre-allocated at system startup. When one of these structures is requested, the allocation can be serviced immediately without carving out new memory regions. The allocator also tracks all free objects in each pool, so chunks can be reused immediately after they are freed.
From a memory forensics perspective, we can leverage these features in a few ways. First, we can use active (allocated) structures to find instances of a particular structure type. This is best seen in the kmem_cache column of linux_psxview, which is populated by enumerating all of the task_struct structures active in the task_struct kmem_cache at the time of the memory capture.
The second way we can leverage kmem_cache is to walk the free lists and recover as much information as possible from de-allocated entries. Members stored inline within the structure, such as simple integers or character arrays, will be left intact until the host structure is reused and overwritten. Depending on how much time elapsed between de-allocation and the memory capture, we may even be able to follow pointer members, as they can still reference valid data.
The internals of these allocations and de-allocations depend entirely on the allocator chosen. There are currently two main allocators in use, SLAB and SLUB, and the choice has a large effect on which objects are recoverable. SLAB, the original allocator, keeps direct references to all active objects and leaves many freed entries discoverable. SLUB, on the other hand, makes life much more difficult on both counts.
Luckily for us, all of the Android ROMs we have tested chose the SLAB allocator. This means that we can not only find active structures at will, but also recover a wealth of previous information "forgotten" by the operating system.
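To make the free-list idea concrete, here is a rough sketch of how freed objects could be enumerated from a SLAB cache using Volatility 2.x primitives. This is not the code the plugins below use: the member names (nodelists, slabs_free, s_mem, buffer_size, num) follow 2.6-era SLAB and must exist in the profile, the kmem_cache argument is assumed to be an already-instantiated kmem_cache object (e.g. located via linux_slabinfo), and partial slabs, which also contain freed slots, are skipped for simplicity.

# Conceptual sketch only: enumerate freed objects from a 2.6-era SLAB cache.
import volatility.obj as obj

def freed_objects(kmem_cache, kernel_space, struct_name="task_struct"):
    objs_per_slab = int(kmem_cache.num)          # objects packed into each slab
    obj_size      = int(kmem_cache.buffer_size)  # size of one object slot

    for node in kmem_cache.nodelists:            # one kmem_list3 per NUMA node
        if not node.is_valid():
            continue
        # Slabs on slabs_free hold only freed objects. Partial slabs also
        # contain freed slots, but separating those from live objects
        # requires the slab's bufctl array, which is omitted here.
        for slab in node.slabs_free.list_of_type("slab", "list"):
            for i in range(objs_per_slab):
                addr = slab.s_mem.v() + i * obj_size
                yield obj.Object(struct_name, offset=addr, vm=kernel_space)

Each yielded object is a candidate "forgotten" structure whose inline members can still be read, exactly as described above.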
Recovering Data from kmem_cache
linux_vma_cache
Each memory mapped file in Linux is represented by a vm_area_struct structure. The vma kmem_cache is responsible for allocating and de-allocating them.
$ python vol.py --profile=LinuxEvo4GARM -f Evo4GRodeo.lime linux_vma_cache
Process PID Start End Path
---------------- ------ ---------- ---------- ----
0x48a4b000 0x48a4c000
0x46dfa000 0x46e12000
0x45496000 0x454a0000 app/htccalendarwidgets.apk
0x45136000 0x45137000 app/htccalendarwidgets.apk
0x443da000 0x443db000 app/htccalendarwidgets.apk
0x4513e000 0x451d6000 app/HtcDialer.odex
In this output we can see information on each mapping, including the path. Note that in this particular sample the Process and PID columns are empty because the kernel was not built with the 'owner' member of mm_struct (the parent structure of a process' memory mappings). For kernels compiled with this option, the columns would be populated.
If we pass the -u flag to any of the kmem_cache plugins, the unallocated entries will be recovered as well. For memory mappings this can be extremely useful, as it shows files and paths that were mapped into a process' address space even if the process has since exited or the file has been deleted from disk.
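For example, re-running the plugin with -u recovers the freed vm_area_struct entries alongside the active ones:

$ python vol.py --profile=LinuxEvo4GARM -f Evo4GRodeo.lime linux_vma_cache -u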
linux_pslist_cache
This plugin recovers processes and their associated information, including the process ID, user ID and group ID, and the start time of the process. By recovering unallocated entries, not only can we determine which processes previously executed, but we can also place their start times within a timeline of in-memory data.
$ python vol.py --profile=LinuxEvo4GARM -f Evo4GRodeo.lime linux_pslist_cache
0xcafb6000 Binder Thread # 1856 10034 10034 0x23d80000 2012-08-05 02:32:21 UTC+0000
0xcafb6400 oid.voicedialer 1841 10087 10087 0x27d78000 2012-08-05 02:32:21 UTC+0000
0xcafb6c00 HeapWorker 1842 10087 10087 0x27d78000 2012-08-05 02:32:21 UTC+0000
0xc3d3a400 Signal Catcher 884 10092 10092 0x28d7c000 2012-08-05 02:21:44 UTC+0000
0xc3d3a800 RefQueueWorker@ 1647 10009 10009 0x2cc04000 2012-08-05 02:30:49 UTC+0000
0xc3d3ac00 com.htc.bg 1157 10009 10009 0x2cc04000 2012-08-05 02:22:16 UTC+0000
0xc8ecc000 Binder Thread # 888 10092 10092 0x28d7c000 2012-08-05 02:21:44 UTC+0000
linux_dentry_cache
Each opened and/or mapped file is represented by a file structure that contains a pointer to a dentry structure, which holds the name of the file as well as other metadata. Enumerating this cache gathers all opened files across the system, which can be very useful for locating malware-specific files or processes interacting with files they should not. Unallocated entries can also be used to find files that were previously opened and have since been deleted. The plugin reports all of the files in the cache in body file format, so you can immediately add the MAC times of recovered files to your timeline.
$ python vol.py --profile=LinuxEvo4GARM -f Evo4GRodeo.lime linux_dentry_cache
0|data/com.htc.socialnetwork.flickr/databases/flickr.db|1140|0|10009|10009|7168|3550353848|3550353856|0|3550353864
0|data/com.htc.socialnetwork.provider|724|0|10009|10009|2048|3550256496|3550256504|0|3550256512
0|data/com.htc.socialnetwork.provider/databases|777|0|10009|10009|2048|3550941704|3550941712|0|3550941720
0|data/com.htc.socialnetwork.provider/databases/SocialNetwork.db|797|0|10009|10009|30720|3550846648|3550846656|0|3550846664
0|data/com.htc.sync.provider.weather|607|0|10009|10009|2048|3550345296|3550345304|0|3550345312
0|data/com.htc.sync.provider.weather/databases|1173|0|10009|10009|2048|3550522768|3550522776|0|3550522784
0|data/com.htc.sync.provider.weather/databases/weathersync.db|928|0|10009|10009|5120|3550520144|3550520152|0|3550520160
0|data/com.htc.wdm|627|0|1000|1000|2048|3550397888|3550397896|0|3550397904
0|data/com.htc.wdm/databases|871|0|1000|1000|2048|3550958056|3550958064|0|3550958072
0|data/com.htc.wdm/databases/wdm.db|881|0|1000|1000|7168|3550960024|3550960032|0|3550960040
0|data/com.l33t.seccncviewer|1270|0|10093|10093|2048|3550269112|3550269120|0|3550269128
0|data/com.l33t.seccncviewer/lib|1037|0|1000|1000|2048|3549299864|3549299872|0|3549299880
0|data/com.l33t.seccncviewer/lib/libl33tcrypto.so|1054|0|1000|1000|37840|3549301176|3549301184|0|3549301192
0|data/com.rosedata.android.rss|614|0|10071|10071|2048|3550213600|3550213608|0|3550213616
0|data/com.rosedata.android.rss/databases|926|0|10071|10071|2048|3548935288|3548935296|0|3548935304
0|data/com.rosedata.android.rss/databases/Data|1191|0|10071|10071|31744|3548934304|3548934312|0|3548934320
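Because the plugin already emits body file format, its output can be fed directly to The Sleuth Kit's mactime to build a timeline. The file names below are just examples:

$ python vol.py --profile=LinuxEvo4GARM -f Evo4GRodeo.lime linux_dentry_cache > dentry_body.txt
$ mactime -b dentry_body.txt -d > dentry_timeline.csv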
linux_mount_cache
Each mount point in the system is tracked through a kmem_cache. By recovering allocated entries, we can see all mount points on the system:
$ python vol.py --profile=LinuxEvo4GARM -f Evo4GRodeo.lime linux_mount_cache
none /acct cgroup rw,relatime
/sys/kernel/debug /sys/kernel/debug debugfs rw,relatime
sysfs /sys sysfs rw,relatime
proc /proc proc rw,relatime
devpts /dev/pts devpts rw,relatime
tmpfs /dev tmpfs rw,relatime
/dev/block/vold/179:1 /mnt/sdcard vfat rw,relatime,nosuid,nodev,noexec
/dev/block/vold/179:1 /mnt/secure/asec/.android_secure vfat rw,relatime,nosuid,nodev,noexec
[snip]
If we pass the -u option, we can find mount points that previously existed. This can be useful when external media was present on the system, but later removed.
Building New kmem_cache Plugins
Volatility currently ships plugins for only a few caches, the ones we have found useful during investigations. If you find another cache that looks interesting (note: linux_slabinfo can be used to list them all), you can leverage our simple API to gather all structures of the same type.
If you read the linux_dentry_cache plugin, you will see that a single line of code, which takes the cache name and the type of each cache member as parameters, is all that is needed to enumerate a kmem_cache. The API determines whether the system is using SLAB or SLUB, walks the appropriate entries, and instantiates each one as the chosen type.
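As a rough illustration of what a new cache plugin could look like (a sketch, not code shipped with Volatility), the example below targets the kernel's cred_jar cache, which holds cred structures. The helper name get_kmem_cache() and the UNALLOCATED option wiring are assumptions modeled on the description above; copy the exact API call from the linux_dentry_cache source.

# Sketch of a new kmem_cache plugin modeled on linux_dentry_cache.
# NOTE: get_kmem_cache() is an assumed helper name -- use the exact
# call found in the shipped dentry_cache plugin source.
import volatility.plugins.linux.common as linux_common

class linux_cred_cache(linux_common.AbstractLinuxCommand):
    """Enumerate cred structures from the cred_jar kmem_cache (illustrative)."""

    def __init__(self, config, *args, **kwargs):
        linux_common.AbstractLinuxCommand.__init__(self, config, *args, **kwargs)
        self._config.add_option('UNALLOCATED', short_option='u',
                                default=False, action='store_true',
                                help='show unallocated (freed) entries')

    def calculate(self):
        linux_common.set_plugin_members(self)
        # The one line that does the work: hand the API the cache name and
        # the structure type, and it walks SLAB or SLUB accordingly.
        for cred in self.get_kmem_cache("cred_jar", "cred",
                                        unalloc=self._config.UNALLOCATED):
            yield cred

    def render_text(self, outfd, data):
        for cred in data:
            outfd.write("uid: {0} gid: {1}\n".format(cred.uid, cred.gid))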
Conclusion
This post has highlighted several plugins that help recover both the current and historical context of user and application (including malware) activity on an Android system. Incorporating kmem_cache analysis into an investigation is a huge advantage and can recover details not accessible with any other memory forensics tool.