Dynamic Systems and Learning in the Model Space
The tutorial covers the following topics:
Traditional machine learning for temporal data, such as time series, relies on representations in the data space. In the recent big-data era, "learning in the model space" has been proposed to provide a more robust and compact representation than the data space, and to offer greater potential for explaining the approach. The core idea of "learning in the model space" is to use dynamic models fitted on parts of the data as more stable and parsimonious representations of the data. Learning is then performed directly in the model space instead of the original data space.
By transforming the data space into a model space, the complete data set is represented by a relatively small number of models. The dynamic model space is a functional space with these local models as points in that space. Novel theory and algorithms in the model space can improve the generalization ability of machine learning algorithms. In addition, learning in the model space can open the "black box" of machine learning algorithms and bring more explanation to both the data and the learned models.
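As a toy illustration of this idea (our own minimal sketch, not material from the tutorial: the autoregressive-model representation and the nearest-centroid classifier are illustrative choices), each time series can be summarized by the coefficients of a small AR model fitted by least squares, and learning then proceeds on those coefficient vectors rather than on the raw series:

```python
import numpy as np

def fit_ar(series, p=3):
    """Fit an AR(p) model x_t = a_1 x_{t-p} + ... + a_p x_{t-1} by least
    squares. The coefficient vector a is the series' point in model space."""
    X = np.column_stack([series[i:len(series) - p + i] for i in range(p)])
    y = series[p:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    return a

rng = np.random.default_rng(0)
t = np.arange(200)
# Two classes of noisy signals: slow vs. fast oscillations.
slow = [np.sin(0.1 * t) + 0.1 * rng.standard_normal(t.size) for _ in range(10)]
fast = [np.sin(0.5 * t) + 0.1 * rng.standard_normal(t.size) for _ in range(10)]

# Each 200-point series becomes a 3-dimensional coefficient vector:
# the model-space representation of the data.
models = np.array([fit_ar(s) for s in slow + fast])
labels = np.array([0] * 10 + [1] * 10)

# Learning happens in the model space, e.g. nearest-centroid classification.
centroids = np.array([models[labels == c].mean(axis=0) for c in (0, 1)])
pred = np.argmin(
    np.linalg.norm(models[:, None, :] - centroids[None, :, :], axis=2), axis=1)
print("training accuracy in model space:", (pred == labels).mean())
```

Note how the whole data set is compressed into twenty 3-dimensional points; any standard learner can then be applied to these points directly.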
In this tutorial we will present a unified view of dynamic systems as non-autonomous input-driven systems. In addition, we will focus on three core questions in the model space for temporal data: the generation of the model space, the metric on the model space, and the learning algorithms in the dynamic model space. The tutorial introduces the theory and algorithms for generating the model space and for its representational and classification abilities, the metric based on functional analysis in the model space, and online learning algorithms in the model space. We will also demonstrate how to use dynamic systems to represent nonlinear multi-input multi-output (MIMO) systems.
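The second core question, the metric on the model space, can be sketched functionally: rather than comparing raw parameter vectors, compare the models as functions, e.g. by a sampled L2 distance between their next-step maps over a probe distribution. This is a sketch under our own assumptions, not the tutorial's actual metric; the coefficient vectors below are hypothetical AR(3) models chosen for illustration:

```python
import numpy as np

def ar_predictor(a):
    """Next-step map f(x) = a . x induced by AR coefficients a."""
    return lambda X: X @ a

def model_distance(a1, a2, probe):
    """Sampled L2 distance between two models viewed as functions,
    || f1 - f2 ||_{L2(mu)}, estimated on probe states drawn from mu."""
    d = ar_predictor(a1)(probe) - ar_predictor(a2)(probe)
    return np.sqrt(np.mean(d ** 2))

rng = np.random.default_rng(1)
probe = rng.standard_normal((1000, 3))   # sampled lag vectors
a1 = np.array([0.0, -1.0, 1.990])        # dynamics of a slow oscillation
a2 = np.array([0.0, -1.0, 1.755])        # dynamics of a faster oscillation
a3 = a1 + 0.01 * rng.standard_normal(3)  # small perturbation of a1
print(model_distance(a1, a2, probe))     # clearly separated models
print(model_distance(a1, a3, probe))     # nearly identical models
```

A functional metric of this kind stays meaningful even when two parameterizations differ but induce nearly the same dynamics, which is why the tutorial grounds the metric in functional analysis rather than in raw parameter distances.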
In the neural network community there has been much interest in recurrent neural networks, dynamic systems and their applications to learning. There has also been a recent special issue on Learning in Non-(geo)metric Spaces in the IEEE Transactions on Neural Networks and Learning Systems. Since temporal data modelling and learning have been extensively investigated in the neural network community, an in-depth understanding of dynamic systems and learning in the model space will lead to wider applications.
The participants are expected to learn both practical algorithms and fundamental theory for learning in the dynamic model space. Insight into specific state-of-the-art methods will be presented. Little prior knowledge is assumed (other than basic data mining and machine learning techniques).
We expect the tutorial to be of interest to a wide variety of IJCNN attendees.
NOTE: No prior knowledge is required. Key concepts will be illustrated in an accessible way.
Huanhuan Chen received the B.Sc. degree from the University of Science and Technology of China, Hefei, China, in 2004, and the Ph.D. degree in computer science, sponsored by a Dorothy Hodgkin Postgraduate Award (DHPA), from the University of Birmingham, Birmingham, UK, in 2008. He is a professor in the School of Computer Science, University of Science and Technology of China. His research interests include machine learning, data mining and evolutionary computation. His PhD thesis on ensemble learning, "Diversity and Regularization in Neural Network Ensembles", received the 2011 IEEE Computational Intelligence Society Outstanding PhD Dissertation Award (sole winner) and the 2009 CPHC/British Computer Society Distinguished Dissertations Award (runner-up). His publication "Probabilistic Classification Vector Machines" received the IEEE Transactions on Neural Networks Outstanding 2009 Paper Award (bestowed in 2011; the only winner that year). In 2015, Dr. Chen received the International Neural Network Society (INNS) Young Investigator Award for his significant contributions to the field of neural networks.
University of Science and Technology of China. Web: http://staff.ustc.edu.cn/~hchen/
Peter Tino has direct and extensive research experience with theoretical and practical issues in (probabilistic and non-probabilistic) modelling and learning of temporal data, in the context of both supervised and unsupervised learning. He has published in Neural Computation, IEEE Transactions on Pattern Analysis and Machine Intelligence, IEEE Transactions on Neural Networks, IEEE Transactions on Evolutionary Computation and Machine Learning. Peter is an associate editor of Scientific Reports (Nature Publishing), IEEE Transactions on Neural Networks and Learning Systems (IEEE), Pattern Analysis and Applications (Springer) and Neural Processing Letters (Springer). He has co-chaired the program committees of several international conferences, served as area chair for recurrent networks at ICANN 2011, and has been Vice Chair of the Neural Networks Technical Committee of the IEEE Computational Intelligence Society as well as a member of several other IEEE Computational Intelligence Society Technical Committees. Peter has received several research awards, including the IEEE Transactions on Neural Networks Outstanding Paper Award (1998, 2011), the IEEE Transactions on Evolutionary Computation Outstanding Paper Award (2010) and a UK-Hong Kong Fellowship for Excellence.