Dynamic Systems and Learning in the Model Space

Tutorial Topic


         The topics covered by the tutorial include:

  1. Finite state machines – We will describe how they are fitted to data, smoothing regularization in the case of insufficient data, and efficient implementation in terms of fractal prediction machines.
  2. Hidden Markov models – simple examples of probabilistic models with an unobservable finite state space. We will explain how the unobservable state space manifests itself in parameter estimation – a latent variable model calling for, e.g., EM-style maximum likelihood estimation, with an exponential explosion of possible state paths – and show how this can be handled efficiently (see the forward-recursion sketch after this list).
  3. Recurrent neural networks – non-probabilistic parameterized state space models with a continuous unobservable state space. We will explain how the continuous unobservable state space manifests itself in parameter estimation – the information latching problem.
  4. Reservoir models – an important subclass of recurrent networks with a non-trainable dynamic part (the reservoir). We will explain the motivation (dealing with the information latching problem of recurrent networks), theoretical advances, and successful applications of this (still very active) area of research (a minimal echo state network sketch follows this list).
  5. Generation of the model space – We will explain approaches that use dynamic systems to represent temporal signals, and discuss the representation and discrimination ability of the resulting model space.
  6. The metric in the model space based on functional analysis – We will introduce fundamental theory on defining a metric in the model space and on measuring the distance between dynamic models (see the distance sketch after this list).
  7. Learning algorithms in the model space – Given a model distance, we will explain how existing learning algorithms can be carried over to operate directly in the model space.
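
        To make the state-path explosion of topic 2 concrete: naive evaluation of the likelihood of a length-T observation sequence under a K-state HMM sums over all K^T hidden state paths, whereas the forward recursion shares common path prefixes and runs in O(K^2 T). Below is a minimal sketch in Python/NumPy; the particular initial, transition and emission distributions are illustrative toy values, not material from the tutorial itself.

```python
import numpy as np

def hmm_log_likelihood(obs, pi, A, B):
    """Forward algorithm: log p(obs) for a discrete-emission HMM.

    obs : sequence of observation symbol indices, length T
    pi  : (K,)   initial state distribution
    A   : (K, K) transition matrix, A[i, j] = p(s_t = j | s_{t-1} = i)
    B   : (K, M) emission matrix,   B[i, m] = p(o_t = m | s_t = i)
    """
    alpha = pi * B[:, obs[0]]              # joint p(o_1, s_1)
    log_lik = 0.0
    for o in obs[1:]:
        # rescale to avoid underflow; accumulate the log of the scale
        c = alpha.sum()
        log_lik += np.log(c)
        alpha = (alpha / c) @ A * B[:, o]  # one O(K^2) recursion step
    return log_lik + np.log(alpha.sum())

# toy example: 2 hidden states, 3 observation symbols
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.2, 0.8]])
B = np.array([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]])
print(hmm_log_likelihood([0, 1, 2, 2, 1], pi, A, B))
```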
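        The design choice at the heart of topic 4, training only a linear readout on top of a fixed randomly generated reservoir, fits in a few lines. The following is a rough sketch under assumed hyperparameters (reservoir size 100, spectral radius 0.9, ridge coefficient 1e-6); see [4] and [5] for principled, deterministically constructed reservoirs.

```python
import numpy as np

rng = np.random.default_rng(0)

# one-step-ahead prediction of a toy scalar series
T, n_res, washout = 1000, 100, 50
u = np.sin(0.1 * np.arange(T + 1))

# fixed (non-trainable) input and reservoir weights
W_in = rng.uniform(-0.5, 0.5, size=n_res)
W = rng.uniform(-0.5, 0.5, size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # scale spectral radius to 0.9

# drive the reservoir and collect its states
x = np.zeros(n_res)
X = np.empty((T, n_res))
for t in range(T):
    x = np.tanh(W @ x + W_in * u[t])
    X[t] = x

# train only the linear readout, by ridge regression on next-step targets
X_tr, y_tr = X[washout:], u[washout + 1 : T + 1]
ridge = 1e-6
W_out = np.linalg.solve(X_tr.T @ X_tr + ridge * np.eye(n_res), X_tr.T @ y_tr)
print("train MSE:", np.mean((X_tr @ W_out - y_tr) ** 2))
```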
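        For topics 6 and 7, the distance between two fitted models f1 and f2 is defined in function space, for instance d(f1, f2)^2 = ∫ ||f1(x) - f2(x)||^2 dμ(x), rather than by naively comparing parameter vectors (see [1], [2]). The sketch below estimates such a distance between two linear readouts by Monte Carlo sampling; the uniform choice of μ is an illustrative assumption, not the tutorial's prescription.

```python
import numpy as np

def model_distance(W1, W2, n_samples=10_000, rng=None):
    """Monte Carlo estimate of the function-space distance
    d(f1, f2)^2 = E_x ||f1(x) - f2(x)||^2 between two linear
    readouts f_i(x) = W_i @ x, with x ~ Uniform[-1, 1]^n (assumed mu).
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    n = W1.shape[1]
    xs = rng.uniform(-1.0, 1.0, size=(n_samples, n))
    diff = xs @ (W1 - W2).T              # f1(x) - f2(x) for every sample
    return np.sqrt(np.mean(np.sum(diff ** 2, axis=1)))

# two toy single-output readouts over a 3-dimensional state space
W1 = np.array([[0.2, -0.5, 0.1]])
W2 = np.array([[0.3, -0.4, 0.0]])
print(model_distance(W1, W2))
```

        For linear readouts under this μ the integral even has a closed form (since E[xxᵀ] = I/3), but the sampling estimator carries over unchanged to nonlinear readouts, and distance-based learners such as k-NN or kernel machines can then operate directly on the fitted models.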

Rationale


        Traditional machine learning for temporal data, such as time series, relies on representations in the data space. In the current big data era, "learning in the model space" has been proposed to provide a more robust and compact representation than the data space, and to offer greater potential for explaining the learned models. The core idea of "learning in the model space" is to use dynamic models fitted on parts of the data as more stable and parsimonious representations of the data. Learning is then performed directly in the model space instead of the original data space.

        By transferring the data into the model space, the complete data set is represented by a relatively small number of models. The dynamic model space is a functional space with these local models as points in that space. Novel theory and algorithms in the model space can improve the generalization ability of machine learning algorithms. In addition, learning in the model space can open the "black box" of machine learning algorithms and bring more explanatory power to both the data and the learned models.
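        A minimal sketch of this pipeline, with deliberately simplified ingredients: each sequence is replaced by the coefficient vector of a small linear autoregressive (AR) model fitted to it, standing in for the reservoir-based models used in the tutorial, and a plain nearest-neighbour rule then classifies in the model space. The AR order, the toy two-regime data and the Euclidean parameter distance are all illustrative assumptions.

```python
import numpy as np

def fit_ar(series, p=3):
    """Fit an order-p linear autoregressive model by least squares and
    return its coefficient vector -- one point in the model space."""
    X = np.column_stack([series[i : len(series) - p + i] for i in range(p)])
    y = series[p:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

rng = np.random.default_rng(0)
# two regimes of toy sequences, distinguished only by their dynamics
class_a = [np.sin(0.10 * np.arange(200)) + 0.05 * rng.standard_normal(200)
           for _ in range(20)]
class_b = [np.sin(0.25 * np.arange(200)) + 0.05 * rng.standard_normal(200)
           for _ in range(20)]

# learning in the model space: 1-nearest-neighbour on fitted AR coefficients
models = np.array([fit_ar(s) for s in class_a + class_b])
labels = np.array([0] * 20 + [1] * 20)
test = fit_ar(np.sin(0.25 * np.arange(200)) + 0.05 * rng.standard_normal(200))
dists = np.linalg.norm(models - test, axis=1)
print("predicted class:", labels[np.argmin(dists)])   # expected: 1
```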

        In this tutorial we will present a unified view of dynamic systems as non-autonomous input-driven systems. In addition, we will focus on three core questions about the model space for temporal data: the generation of the model space, the metric on the model space, and the learning algorithms in the dynamic model space. The tutorial introduces theory and algorithms for generating the model space, the representation and classification ability of the model space, the metric based on functional analysis, and online learning algorithms in the model space. We will also demonstrate how to use dynamic systems to represent nonlinear multi-input multi-output (MIMO) systems.

References


  • [1] H. Chen, P. Tino, A. Rodan, X. Yao: Learning in the Model Space for Cognitive Fault Diagnosis. IEEE Transactions on Neural Networks and Learning Systems, 25(1), pp. 124-136, 2014.
  • [2] H. Chen, F. Tang, P. Tino, X. Yao: Model-Based Kernel for Efficient Time Series Analysis. In: Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD'13), Chicago, USA, August 11-14, 2013.
  • [3] H. Chen, P. Tino, X. Yao: Cognitive Fault Diagnosis in Tennessee Eastman Process Using Learning in the Model Space. Computers & Chemical Engineering, 67, pp. 33-42, 2014.
  • [4] A. Rodan, P. Tino: Simple Deterministically Constructed Cycle Reservoirs with Regular Jumps. Neural Computation, 24(7), pp. 1822-1852, 2012.
  • [5] A. Rodan, P. Tino: Minimum Complexity Echo State Network. IEEE Transactions on Neural Networks, 22(1), pp. 131-144, 2011.
  • [6] P. Tino, I. Farkas, J. van Mourik: Dynamics and Topographic Organization of Recursive Self-Organizing Maps. Neural Computation, 18(10), pp. 2529-2567, 2006.

Relevance for IJCNN


        In the neural network community there has been much interest in investigating recurrent neural networks, dynamic systems and their applications to learning. Recently, there has also been a special issue on Learning in Non-(geo)metric Spaces in the IEEE Transactions on Neural Networks and Learning Systems. Since temporal data modelling and learning have been extensively investigated in the neural network community, an in-depth understanding of dynamic systems and learning in the model space will lead to wider applications.

        The participants are expected to learn both practical algorithms and fundamental theory for learning in the dynamic model space. Insight into specific state-of-the-art methods will be presented. Little prior knowledge is assumed (other than basic data mining and machine learning techniques).

Anticipated Enrolment


        We expect that the tutorial will be of interest to a wide variety of IJCNN attendees, in particular:

  • theoreticians and practitioners working with data exhibiting temporal dependencies
  • researchers interested in dynamical systems and their applications as modelling tools
  • young researchers (PhD students, fresh post-docs) possibly perplexed by the variety of modelling frameworks for temporally dependent data, who would benefit from the unifying viewpoint on which this tutorial is based

        NOTE: No specialized prior knowledge is required. Key concepts will be illustrated in an accessible way.

Short bio of organizers


Huanhuan Chen
hchen@ustc.edu.cn

        Huanhuan Chen received the B.Sc. degree from the University of Science and Technology of China, Hefei, China, in 2004, and the Ph.D. degree in computer science, sponsored by the Dorothy Hodgkin Postgraduate Award (DHPA), from the University of Birmingham, Birmingham, UK, in 2008. He is a professor in the School of Computer Science, University of Science and Technology of China. His research interests include machine learning, data mining and evolutionary computation. His PhD thesis on ensemble learning, "Diversity and Regularization in Neural Network Ensembles", received the 2011 IEEE Computational Intelligence Society Outstanding PhD Dissertation Award (the sole winner) and the 2009 CPHC/British Computer Society Distinguished Dissertations Award (the runner-up). His publication "Probabilistic Classification Vector Machines" received the IEEE Transactions on Neural Networks Outstanding 2009 Paper Award (bestowed in 2011; the only winner in that year). Dr. Chen received the International Neural Network Society (INNS) Young Investigator Award in 2015 for his significant contributions to the field of neural networks.

University of Science and Technology of China         web: http://staff.ustc.edu.cn/~hchen/


Peter Tino
P.Tino@cs.bham.ac.uk

        Peter Tino has direct and extensive research experience with theoretical and practical issues related to (probabilistic and non-probabilistic) modelling and learning of temporal data, in the context of both supervised and unsupervised learning. He has published in Neural Computation, IEEE Transactions on Pattern Analysis and Machine Intelligence, IEEE Transactions on Neural Networks, IEEE Transactions on Evolutionary Computation and Machine Learning. Peter is an associate editor of Scientific Reports (Nature Publishing), IEEE Transactions on Neural Networks and Learning Systems (IEEE), Pattern Analysis and Applications (Springer) and Neural Processing Letters (Springer). He has co-chaired several program committees of international conferences, served as area chair for recurrent networks at ICANN 2011, as Vice Chair of the Neural Networks Technical Committee of the IEEE Computational Intelligence Society, and as a member of several other IEEE Computational Intelligence Society Technical Committees. Peter has received several research awards, including the IEEE Transactions on Neural Networks Outstanding Paper Award (1998, 2011), the IEEE Transactions on Evolutionary Computation Outstanding Paper Award (2010) and a UK-Hong Kong Fellowship for Excellence.

University of Birmingham         web: http://www.cs.bham.ac.uk/~pxt/