January 22, 2016
Chris Rohlf
Update 2/9/2019 - You can find HardenedPartitionAlloc here
In a previous article I ported PDFium, an existing third party library in Chrome, to use PartitionAlloc for the majority of its memory allocations. PartitionAlloc is the heap allocator in Chrome with built-in exploit mitigations. It substantially raises the bar for exploitation of use-after-free vulnerabilities in Chrome, primarily through heap isolation. In this second article I will explore some of the implementation details of PartitionAlloc and introduce a patch I wrote to harden it further.
To understand the PartitionAlloc design choices it is helpful to understand one of its predecessors. There used to be a little known piece of code in WebKit named RenderArena. RenderArena was a slab based allocator that managed all memory allocation for any C++ object that inherited from the RenderObject class, via overloaded new and delete operators. Like other parts of WebKit, the Render code at the time was notoriously buggy and contained many use-after-free vulnerabilities. Unlike similar vulnerable code patterns in WebCore, exploiting RenderObject use-after-free vulnerabilities proved rather difficult due to the nature of this allocator: by design, you could only allocate objects in that heap if they inherited from RenderObject. That made the typical approach to exploiting use-after-free vulnerabilities in Chrome, replacing freed WebCore objects with JavaScript objects, a non-starter. Despite the WebKit project's efforts to remove RenderArena, some Chrome engineers recognized this security property and wanted to preserve it. PartitionAlloc was created with the same security property as RenderArena: object and/or size specific heaps. It eventually replaced RenderArena for RenderObjects and became the default allocator for other types of objects in Chrome, including DOM nodes in WebCore.
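PartitionAlloc makes that same property available to any C++ class. As a minimal, hypothetical sketch using the generic allocation API described below (the IsolatedNode class, the partition name, and the Blink-era include path are mine, purely for illustration), routing a type into its own partition looks something like this:

#include "wtf/PartitionAlloc.h"

// One dedicated partition for a single type, in the spirit of RenderArena.
static PartitionAllocatorGeneric isolatedNodePartition;

class IsolatedNode {
public:
    static void initHeap() { isolatedNodePartition.init(); }

    // Every IsolatedNode now comes from isolatedNodePartition. A freed
    // IsolatedNode slot can only be reclaimed by another allocation from
    // this same partition, never by an attacker-sprayed JavaScript object.
    void* operator new(size_t size)
    {
        return partitionAllocGeneric(isolatedNodePartition.root(), size);
    }

    void operator delete(void* ptr)
    {
        partitionFreeGeneric(isolatedNodePartition.root(), ptr);
    }

private:
    int m_data[4];
};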
Using the PartitionAlloc API is easy, and the authors added comments that describe it in detail. The first thing we need to do is declare an allocator and initialize its root. Our options are the PartitionAllocatorGeneric and SizeSpecificPartitionAllocator classes. The former tells PartitionAlloc to choose the right storage bucket for our allocation; the latter is a template you must pass a size to in the declaration. That size sets the upper bound on allocation sizes within the partition: if we try to allocate a larger object, the allocation will fail. This is helpful in limiting the set of objects that can be used in a use-after-free exploit. We create these class instances anywhere we want to allocate memory or create objects, call init on them, and then use the allocation functions like we would malloc. The only difference is that we pass our partition root along with the size we are requesting. Here is an example that shows a simple use case of both classes:
// SizeSpecificPartitionAllocator example usage
SizeSpecificPartitionAllocator<1024> mySzSpecificAllocator;
mySzSpecificAllocator.init();
void* p = partitionAlloc(mySzSpecificAllocator.root(), 128);
partitionFree(p);
mySzSpecificAllocator.shutdown();

// PartitionAllocatorGeneric example usage
PartitionAllocatorGeneric myGenericAllocator;
myGenericAllocator.init();
void* q = partitionAllocGeneric(myGenericAllocator.root(), 128);
partitionFreeGeneric(myGenericAllocator.root(), q);
myGenericAllocator.shutdown();

Using PartitionAlloc to allocate and free memory is as easy as with any other heap allocator. It's the implementation details, and how they make exploitation harder, that set it apart.
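The size specific variant is worth a second look. Because the template argument caps what the partition will serve, an attacker can't use it to allocate arbitrarily sized replacement objects. A short sketch (the allocator name is mine; the failure behavior in the comment paraphrases the description above):

SizeSpecificPartitionAllocator<64> smallAllocator;
smallAllocator.init();

void* ok = partitionAlloc(smallAllocator.root(), 64);  // within the bound
partitionFree(ok);

// A request such as partitionAlloc(smallAllocator.root(), 128) exceeds
// the 64 byte bound declared above; as described earlier, the allocation
// fails instead of being served from this partition.
smallAllocator.shutdown();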
When a call to partitionAlloc is made, the partitionBucketAlloc function is invoked. This is where the decision to take the hot path or the slow path is made.

ALWAYS_INLINE void* partitionBucketAlloc(PartitionRootBase* root, int flags, size_t size, PartitionBucket* bucket)
{
    PartitionPage* page = bucket->activePagesHead;
    // Check that this page is neither full nor freed.
    ASSERT(page->numAllocatedSlots >= 0);
    void* ret = page->freelistHead;
    if (LIKELY(ret != 0)) {
        // ... hot path ...
    } else {
        // ... slow path ...
        ret = partitionAllocSlowPath(root, flags, size, bucket);
    }

The Hot Path
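The hot path is nothing more than popping the head of a singly linked freelist of free slots within the page. Stripped of the PartitionAlloc types, a simplified model of it (my restatement, not the Chrome code) is:

struct FreelistEntry {
    FreelistEntry* next;
};

void* freelistPop(FreelistEntry** head)
{
    FreelistEntry* entry = *head;  // always the head: fast, but predictable
    if (entry)
        *head = entry->next;       // unlink it; the real code stores this
                                   // pointer masked via partitionFreelistMask
    return entry;
}

Because pages are carved into slots in address order and allocations always pop the head, the addresses a bucket hands out are highly predictable. The PartitionAlloc source is candid about this; a comment in the code lists security properties left as future work: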
// The following security properties could be investigated in the future:
// - Per-object bucketing (instead of per-size) is mostly available at the API,
//   but not used yet.
// - No randomness of freelist entries or bucket position.
// - Better checking for wild pointers in free().
// - Better freelist masking function to guarantee fault on 32-bit.
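The lack of freelist randomness is what my patch targets. Consider the classic setup: an attacker grooms the heap so a buffer they can overflow sits directly in front of a target object. With a strictly sequential freelist, two back-to-back allocations from the same bucket land at predictable, adjacent addresses. A hypothetical sketch (the FuncTable type, the sizes, and the reuse of myGenericAllocator from the earlier example are mine):

// With a sequential freelist these two allocations are adjacent.
struct FuncTable {
    void (*handlers[16])();  // the "function pointer table" below
};

char* vulnBuffer = static_cast<char*>(
    partitionAllocGeneric(myGenericAllocator.root(), 128));   // (1)
FuncTable* table = static_cast<FuncTable*>(
    partitionAllocGeneric(myGenericAllocator.root(), 128));   // (2)

// A linear overflow out of vulnBuffer, e.g.
//   memcpy(vulnBuffer, attackerData, attackerLength);  // attackerLength > 128
// corrupts table->handlers at a fixed, known offset: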
0xabcd1234 [Vulnerable char array (1)]
     |
    ...
     v
0xabcd1300 [Function pointer table (2)]

By randomizing the order of the freelist we remove the predictability the attacker relies on when the second buffer is allocated directly adjacent to the first. The allocator may return a pointer to a freelist entry behind the vulnerable buffer, or one an unpredictable number of bytes in front of it. This isn't a foolproof protection; there are still ways an exploit can bypass the randomization, but it's one more hurdle the exploit developer must clear. The second protection, a stronger double free check, may help detect and stop exploits that attempt to place non-heap buffers into a PartitionAlloc freelist.
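My patch implements both protections. The first hunk simply seeds the C library's random number generator when a partition root is initialized: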
@@ -142,6 +143,10 @@ void partitionAllocInit(PartitionRoot* root, size_t numBuckets, size_t maxAlloca
 {
     partitionAllocBaseInit(root);
+    // Reseed the RNG using time() with some bits
+    // from the current random pool
+    srand(time(NULL) + (rand() % 100000));
+
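srand and rand are not cryptographically strong sources of randomness, but the bar here is only to deny the attacker a deterministic allocation order. With the generator seeded, the next hunk randomizes the freelist itself: as a new page is carved into slots, the entries are collected into a vector, shuffled, and then chained back together in their new order: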
@@ -564,16 +569,31 @@ static ALWAYS_INLINE char* partitionPageAllocAndFillFreelist(PartitionPage* page
     page->numUnprovisionedSlots = numSlots;
     page->numAllocatedSlots++;
 
+    std::vector<PartitionFreelistEntry*> vfreelist;
+
     if (LIKELY(numNewFreelistEntries)) {
         char* freelistPointer = firstFreelistPointer;
         PartitionFreelistEntry* entry = reinterpret_cast<PartitionFreelistEntry*>(freelistPointer);
         page->freelistHead = entry;
+        vfreelist.push_back(entry);
+
         while (--numNewFreelistEntries) {
             freelistPointer += size;
-            PartitionFreelistEntry* nextEntry = reinterpret_cast<PartitionFreelistEntry*>(freelistPointer);
+            vfreelist.push_back(reinterpret_cast<PartitionFreelistEntry*>(freelistPointer));
+        }
+
+        std::random_shuffle(vfreelist.begin(), vfreelist.end());
+
+        entry = vfreelist[vfreelist.size() - 1];
+        vfreelist.pop_back();
+        page->freelistHead = entry;
+
+        for (int i = 0; i < vfreelist.size(); i++) {
+            PartitionFreelistEntry* nextEntry = vfreelist[i];
             entry->next = partitionFreelistMask(nextEntry);
             entry = nextEntry;
         }
+
         entry->next = partitionFreelistMask(0);
     } else {
         page->freelistHead = 0;
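Here is the same idea as a minimal, self-contained sketch that is easier to read outside the diff. This is my restatement, not the patch: it uses simplified types, and std::shuffle with a seeded engine in place of std::random_shuffle (which C++17 later removed):

#include <algorithm>
#include <random>
#include <vector>

struct FreelistEntry {
    FreelistEntry* next;
};

// Carve a page into fixed-size slots and chain them into a freelist
// whose hand-out order is randomized instead of sequential.
FreelistEntry* buildShuffledFreelist(char* page, size_t slotSize, size_t numSlots)
{
    std::vector<FreelistEntry*> entries;
    for (size_t i = 0; i < numSlots; i++)
        entries.push_back(reinterpret_cast<FreelistEntry*>(page + i * slotSize));

    if (entries.empty())
        return nullptr;

    // Randomize the order in which the slots will be handed out.
    std::mt19937 rng{std::random_device{}()};
    std::shuffle(entries.begin(), entries.end(), rng);

    // Chain the shuffled entries into a singly linked list.
    for (size_t i = 0; i + 1 < entries.size(); i++)
        entries[i]->next = entries[i + 1];
    entries.back()->next = nullptr;

    return entries.front();
}

Shuffling at page-fill time is only half of the change. The hot path still always unlinks whatever sits at the head of the list, so the next hunk randomizes the selection too: partitionBucketAlloc now walks a random number of entries down the freelist and unlinks whichever entry the walk stops at: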
@@ -579,15 +582,37 @@ ALWAYS_INLINE void* partitionBucketAlloc(PartitionRootBase* root, int flags, siz
     ASSERT(page->numAllocatedSlots >= 0);
     void* ret = page->freelistHead;
     if (LIKELY(ret != 0)) {
+        PartitionFreelistEntry* t = page->freelistHead;
+        PartitionFreelistEntry* z = t;
+
+        while (t) {
+            if ((rand() % 10) == 1) {
+                break;
+            }
+
+            z = t;
+            t = partitionFreelistMask(t->next);
+        }
+
+        if (t)
+            ret = t;
+
         // If these asserts fire, you probably corrupted memory.
         ASSERT(partitionPointerIsValid(ret));
         // All large allocations must go through the slow path to correctly
         // update the size metadata.
         ASSERT(partitionPageGetRawSize(page) == 0);
-        PartitionFreelistEntry* newHead = partitionFreelistMask(static_cast<PartitionFreelistEntry*>(ret)->next);
-        page->freelistHead = newHead;
+
+        if (ret == page->freelistHead) {
+            PartitionFreelistEntry* newHead = partitionFreelistMask(static_cast<PartitionFreelistEntry*>(ret)->next);
+            page->freelistHead = newHead;
+        } else {
+            z->next = t->next;
+        }
+
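Note the bookkeeping this requires: if the walk stops at the head (or falls off the end of the list and falls back to it), the code pops the head as before; otherwise it must unlink the chosen entry from the middle of the list via the trailing pointer z. A simplified model of the patched hot path (my restatement, with the same 1-in-10 stopping probability and without the pointer masking):

#include <cstdlib>

struct FreelistEntry {
    FreelistEntry* next;
};

void* freelistPopRandom(FreelistEntry** head)
{
    if (*head == nullptr)
        return nullptr;

    FreelistEntry* cur = *head;
    FreelistEntry* prev = nullptr;

    // Walk the list; each entry has a 1 in 10 chance of being chosen.
    while (cur && (rand() % 10) != 1) {
        prev = cur;
        cur = cur->next;
    }

    if (cur == nullptr) {  // walked off the end of the list,
        cur = *head;       // fall back to the original head
        prev = nullptr;
    }

    if (prev == nullptr)
        *head = cur->next;        // the chosen entry was the head
    else
        prev->next = cur->next;   // unlink from the middle of the list

    return cur;
}

The slow path needs the same treatment. When a bucket has no usable freelist head, partitionAllocSlowPath provisions a page and takes the first entry from its freelist, which would otherwise reintroduce a predictable allocation; the next hunk applies the identical random walk there: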
@@ -842,9 +862,30 @@ void* partitionAllocSlowPath(PartitionRootBase* root, int flags, size_t size, Pa
     // usable freelist head.
     if (LIKELY(newPage->freelistHead != nullptr)) {
         PartitionFreelistEntry* entry = newPage->freelistHead;
-        PartitionFreelistEntry* newHead = partitionFreelistMask(entry->next);
-        newPage->freelistHead = newHead;
+        PartitionFreelistEntry* z = entry;
+
+        while (entry) {
+            if ((rand() % 10) == 1) {
+                break;
+            }
+
+            z = entry;
+            entry = partitionFreelistMask(entry->next);
+        }
+
+        if (entry == nullptr) {
+            entry = newPage->freelistHead;
+        }
+
+        if (entry == newPage->freelistHead) {
+            PartitionFreelistEntry* newHead = partitionFreelistMask(entry->next);
+            newPage->freelistHead = newHead;
+        } else {
+            z->next = entry->next;
+        }
+
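Finally, the second protection. The stock partitionFreeWithPage already catches a double free of the current freelist head (and, in debug builds, one entry deeper). The patch extends this by walking the entire freelist on every free and raising a release-build security assertion if the pointer being freed is already on it: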
@@ -647,6 +672,14 @@ ALWAYS_INLINE void partitionFreeWithPage(void* ptr, PartitionPage* page)
     ASSERT(!freelistHead || partitionPointerIsValid(freelistHead));
     RELEASE_ASSERT_WITH_SECURITY_IMPLICATION(ptr != freelistHead); // Catches an immediate double free.
     ASSERT_WITH_SECURITY_IMPLICATION(!freelistHead || ptr != partitionFreelistMask(freelistHead->next)); // Look for double free one level deeper in debug.
+
+    PartitionFreelistEntry* f = freelistHead;
+
+    while (f != NULL) {
+        RELEASE_ASSERT_WITH_SECURITY_IMPLICATION(f != ptr);
+        f = partitionFreelistMask(f->next);
+    }
+
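The cost of that check is a linear walk of the freelist on every free, where the original code did a constant number of comparisons, so it trades performance for detection. Combined with the randomized freelist above, these small changes take away much of the allocator determinism that use-after-free and double free exploits rely on.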