
Data Driven Network Control with Reinforcement Learning


Title: Data Driven Network Control with Reinforcement Learning

Time: 10:00 AM, Monday, December 17, 2018

Venue: Conference Room 1702, New Science and Technology Building

Speaker: Prof. Shuguang Cui, The Chinese University of Hong Kong, Shenzhen

Abstract:

We start with a brief introduction to Reinforcement Learning (RL) and then discuss its applications in self-organizing networks. The first application is handover (HO) control: we propose a two-layer framework to learn optimal HO controllers in possibly large-scale wireless systems serving mobile users with heterogeneous mobility patterns. The framework first partitions the User Equipments (UEs) into clusters so that UEs in the same cluster share similar mobility patterns. Within each cluster, an asynchronous multi-user deep RL scheme then controls the HO processes across the UEs, with the goal of lowering the HO rate while guaranteeing a certain system throughput. At each UE, a deep RL agent built on an LSTM recurrent neural network is used. We show that the adopted global-parameter-based asynchronous framework trains faster as the number of UEs grows, which addresses the scalability issue in large systems. The second application is joint energy and access control in energy-harvesting wireless systems, where we show that a double-deep-RL solution can deliver significant system gains.
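The abstract names two building blocks that recur across both applications: an LSTM-based recurrent Q-network at each UE and a double-deep-RL (double-DQN style) target. The following is a minimal, hypothetical PyTorch sketch of those two ingredients only; it is not the speaker's implementation, and all class/function names (LSTMQNetwork, double_dqn_target), dimensions, and hyperparameters are illustrative assumptions.

    # Hypothetical sketch: LSTM Q-network over a short history of per-UE
    # measurements, plus a double-DQN target (online net selects the action,
    # target net evaluates it). Shapes and sizes are illustrative only.
    import torch
    import torch.nn as nn

    class LSTMQNetwork(nn.Module):
        def __init__(self, obs_dim: int, n_actions: int, hidden: int = 64):
            super().__init__()
            self.lstm = nn.LSTM(obs_dim, hidden, batch_first=True)
            self.head = nn.Linear(hidden, n_actions)

        def forward(self, obs_seq):            # obs_seq: (batch, time, obs_dim)
            out, _ = self.lstm(obs_seq)        # out: (batch, time, hidden)
            return self.head(out[:, -1])       # Q-values from the last time step

    def double_dqn_target(online, target, next_seq, reward, done, gamma=0.99):
        """Double-DQN target: action chosen by online net, valued by target net."""
        with torch.no_grad():
            best = online(next_seq).argmax(dim=1, keepdim=True)
            q_next = target(next_seq).gather(1, best).squeeze(1)
            return reward + gamma * (1.0 - done) * q_next

    # Toy usage: 8 UEs, 10-step measurement history, 5-dim observation, 3 actions.
    online, tgt = LSTMQNetwork(5, 3), LSTMQNetwork(5, 3)
    tgt.load_state_dict(online.state_dict())
    seq = torch.randn(8, 10, 5)
    y = double_dqn_target(online, tgt, seq, torch.zeros(8), torch.zeros(8))
    print(y.shape)  # torch.Size([8])

In the asynchronous multi-user setting described in the talk, each UE's agent would presumably compute such targets locally and push gradients to a shared set of global parameters; that coordination logic is beyond this sketch.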

Speaker Biography:

Prof. Shuguang Cui received his Ph.D. in Electrical Engineering from Stanford University in 2005 and holds a professorship in the Department of Electrical and Computer Engineering at Texas A&M University. He is a leading authority on networked information processing, in particular sensor networks and the Internet of Things, and has been included in Thomson Reuters' global list of Highly Cited Researchers. He also serves as Vice President of the Shenzhen Research Institute of Big Data and as a Presidential Chair Professor at The Chinese University of Hong Kong, Shenzhen.

His research papers are widely cited: he was named a Thomson Reuters Highly Cited Researcher in 2014 and listed by ScienceWatch among the world's most influential scientific minds. He received the IEEE Signal Processing Society 2012 Best Paper Award and is a two-time best conference paper award recipient. He has served as chair, area editor, or associate editor for numerous professional conferences, journals, and committees. He was elevated to IEEE Fellow in 2013 and named an IEEE Communications Society Distinguished Lecturer in 2014.


National Key Laboratory of Radar Signal Processing · Division of Information and Communication Engineering

"111 Project" Base for Radar Cognitive Detection, Imaging and Recognition · Office of International Cooperation and Exchange