Tutorial Sessions
A total of 14 tutorials are planned for IEEE SSCI'15. Most of the tutorials will be scheduled on the first day of the conference, i.e. 8th December.
Please note that attendance of the tutorial sessions will be free of charge.
Accepted Tutorial Sessions
The tutorials are now available. Please click on the provided links to download.
Please click on the tutorial title to see the abstract and related information.

Tutorial Title: Towards Intelligent Energy-Efficient Hyper-Dense Wireless Networks with Trillions of Devices
Presenters: Abolfazl Mehbodniya and Fumiyuki Adachi, Tohoku University, Japan Related symposium: CICOMMS'15 Download Tutorial Here Abstract: Information and communication technology (ICT) data traffic is expected to increase 1,000-fold by 2020. This increasing demand is quickly draining the scarce radio resources and will eventually affect our nations' economies. This strongly motivates the need for intensive research on the next generation of wireless networks. Beyond conventional cellular data, machine-to-machine (M2M) and device-to-device (D2D) communication will be responsible for a large portion of wireless traffic in the next few years. This will, in turn, further strain existing wireless infrastructure and require novel designs. According to recent forecasts, there will be 12.5 billion interconnected machine-type devices worldwide by the year 2020, up from 1.3 billion in 2012. For coping with such traffic growth, it is well known that the major technique for meeting the much-needed 1000x capacity improvement will be a byproduct of massive network densification. The idea is to introduce heterogeneous networks (HetNets) with new, additional nodes, such as small-cell base stations, deployed within local-area range, bringing the network closer to the end-users. The integration of macro/micro/pico/small-cell base stations (SBSs) with disparate cell sizes and capabilities has already been approved as a working item in LTE-Advanced and 5G. Such hyper-dense and heterogeneous networks (HDHNs) can significantly improve spatial frequency reuse and coverage, thus meeting the wireless capacity crunch. For example, a viral and hyper-dense deployment of low-cost small cells is envisioned in the near future, with 200-300 small cells per typical macro-cell coverage area, approaching a one-to-one ratio with the number of UEs. Such HDHNs are characterized by two unique features: a) a massive number of devices and b) a highly dynamic environment.
How to manage, operate, and optimize such hyper-dense, dynamic networks in an energy-efficient and sustainable manner is an important research challenge that has recently received significant interest from both academia and industry. The main goal of this tutorial is to introduce different aspects of designing HDHNs with advanced capabilities, focusing on spectral efficiency (SE) and energy efficiency (EE). In particular, we will introduce a range of techniques, including stochastic geometry, fuzzy logic, and game theory, that are necessary for deploying and operating large-scale, self-organizing HDHNs that can support various communication systems with seamless mobility.
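Stochastic geometry, one of the analysis tools named above, models base-station locations as a spatial point process. The sketch below is illustrative only (all function names and parameter values are our own, not from the tutorial): it simulates a homogeneous Poisson point process of small cells and shows how densification shrinks the typical user-to-base-station distance.

```python
import math
import random

def simulate_ppp(density, width, height, rng):
    """One realization of a homogeneous Poisson point process:
    the point count is Poisson(density * area), positions are uniform."""
    mean = density * width * height
    # Poisson sampling by CDF inversion (adequate for moderate means).
    n, p = 0, math.exp(-mean)
    cum, target = p, rng.random()
    while cum < target:
        n += 1
        p *= mean / n
        cum += p
    return [(rng.uniform(0, width), rng.uniform(0, height)) for _ in range(n)]

def mean_nearest_bs_distance(density, trials=200, size=10.0, seed=1):
    """Average distance from a user at the centre to the nearest base station."""
    rng = random.Random(seed)
    cx = cy = size / 2
    dists = []
    for _ in range(trials):
        stations = simulate_ppp(density, size, size, rng)
        if stations:
            dists.append(min(math.hypot(x - cx, y - cy) for x, y in stations))
    return sum(dists) / len(dists)

sparse = mean_nearest_bs_distance(0.1)  # ~10 small cells in the region
dense = mean_nearest_bs_distance(1.0)   # ~100 small cells: densification
```

For this model, classical stochastic-geometry results predict a mean nearest-neighbour distance of roughly 1/(2*sqrt(density)), which the simulation approximates.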

Tutorial Title: Cultural Algorithms: Putting Social Networks to Work
Presenters: Robert G. Reynolds, Wayne State University, USA Related symposium: SIS'15 Download Tutorial Here Abstract: Cultural Algorithms are evolutionary algorithms designed to support learning in data-intensive environments across a number of disparate domains. Is there a general framework in which the problems addressed by CAs can be expressed?
In order to answer this question, Cultural Algorithms are viewed as a framework within which to develop and test principles of Social Physics (Pentland). First, Cultural Algorithms are described within social physics as an engine that does work within a physical system and uses Big Data to direct its actions. Then, a set of social metrics based upon the principles of social physics is described. We demonstrate that similar metric patterns are repeated in successful Cultural Algorithm runs for problems that range from simple to chaotic in entropic terms. We then show how these patterns can be explained within the Cultural Algorithm Social Physics Framework as a means of maximizing the work done by a Cultural Algorithm. Finally, when the patterns begin to deviate from those expected of the system working at its maximum, human intervention can be used to increase the work done by the system. Examples from real-world applications will be presented.
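For readers unfamiliar with the mechanics, a Cultural Algorithm couples a population with a belief space that accumulates knowledge from accepted individuals and steers future search. The minimal sketch below is a generic, textbook-style CA with normative knowledge only; all names and constants are illustrative and not from the tutorial.

```python
import random

def cultural_algorithm(f, dim=2, pop_size=20, gens=60, n_accepted=5, seed=0):
    """Minimal Cultural Algorithm minimizing f on [-5, 5]^dim.
    The belief space stores normative knowledge (a promising interval per
    dimension), updated by accepted individuals and used to steer offspring."""
    rng = random.Random(seed)
    lo = [-5.0] * dim
    hi = [5.0] * dim
    population = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        population.sort(key=f)
        accepted = population[:n_accepted]
        # Acceptance function: the best individuals update the belief space.
        for d in range(dim):
            vals = [ind[d] for ind in accepted]
            lo[d], hi[d] = min(vals), max(vals)
        # Influence function: offspring sampled in the (slightly widened) beliefs.
        offspring = [[rng.uniform(lo[d] - 0.1, hi[d] + 0.1) for d in range(dim)]
                     for _ in range(pop_size - n_accepted)]
        population = accepted + offspring
    return min(population, key=f)

sphere = lambda x: sum(v * v for v in x)
best = cultural_algorithm(sphere)
```

The belief intervals contract around the accepted individuals, so the influence function concentrates sampling in increasingly promising regions.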

Tutorial Title: The Theory of Neuronal Cognition
Presenter: Prof Claude Touzet, Aix-Marseille University, France Related Symposium: General Download Tutorial Here
Abstract: The Theory of neuronal Cognition (TnC) was formalized in 2010 [6]. It explains how a hierarchy of self-organizing maps is able to display cognitive abilities including consciousness, intelligence, emotions, planning, reading [2], creativity, motivation, joy [5], etc. The TnC's explanatory power encompasses modified states of consciousness (placebo effect, hypnosis, sleep [3]) as well as the symptoms and therapies of many mental diseases (such as Alzheimer's disease, autism or schizophrenia). Of even more interest to the IEEE SSCI participants is the fact that the computing requirements of the TnC are limited [1] – because the processing unit is not the neuron [4], but the cortical column (and there are only 160,000 cortical columns in the human brain).
References:
- Touzet C (2015) "The Theory of Neural Cognition applied to Robotics", Int J Adv Robot Syst 12, doi: 10.5772/60693.
- Touzet C, Kermorvant C, Glotin H (2014) "A Biologically Plausible SOM Representation of the Orthographic Form of 50,000 French Words", in: Villmann, Th., Schleif, F.-M., Kaden, M., Lange, M. (Eds.) Advances in Self-Organizing Maps and Learning Vector Quantization, Springer AISC 295:303-312.
- Touzet C (2014) Hypnosis, Sleep, Placebo: the Answers from the Theory of neuronal Cognition – Tome 2, Ed. la Machotte, 166 pages. (in French) ISBN: 9782919411023.
- Touzet C (2013) "Why Neurons are Not the Right Level of Abstraction for Implementing Cognition", in A. Chella, R. Pirrone, K. Johannsdottir, R. Sorbello (Eds.) Biologically Inspired Cognitive Architectures 2012, AISC 196:317-318.
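Since the TnC is built on hierarchies of self-organizing maps, a minimal SOM may help readers unfamiliar with the model. The sketch below is a standard 1-D Kohonen map trained on synthetic 2-D data (parameters are illustrative, not from the tutorial) and shows the basic best-matching-unit update.

```python
import math
import random

def train_som(data, n_units=10, epochs=30, lr0=0.5, radius0=3.0, seed=0):
    """Train a 1-D self-organizing map on 2-D data points: the best-matching
    unit and its grid neighbours move toward each input, yielding a
    topology-preserving map."""
    rng = random.Random(seed)
    units = [[rng.random(), rng.random()] for _ in range(n_units)]
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)               # decaying learning rate
        radius = max(1.0, radius0 * (1 - epoch / epochs))
        for x in data:
            # Best-matching unit: nearest weight vector in input space.
            bmu = min(range(n_units),
                      key=lambda i: (units[i][0] - x[0]) ** 2 + (units[i][1] - x[1]) ** 2)
            for i in range(n_units):
                # Gaussian neighbourhood on the 1-D map grid.
                h = math.exp(-((i - bmu) ** 2) / (2 * radius ** 2))
                units[i][0] += lr * h * (x[0] - units[i][0])
                units[i][1] += lr * h * (x[1] - units[i][1])
    return units

rng = random.Random(1)
data = [[rng.random(), rng.random()] for _ in range(200)]
som = train_som(data)
```

After training, neighbouring units on the map represent nearby regions of the input space, which is the topology-preservation property the TnC builds on.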

Tutorial Title: Equation Discovery for Financial Modelling
Presenters: Dimitar Kazakov and Zhivko Georgiev, University of York, UK Related Symposium: CIFEr Download Tutorial Here Abstract: This hands-on half-day tutorial will introduce the participants to some of the main ideas of equation discovery as a way of empirically modelling numerical data, using macro- and microeconomic modelling as case studies. The open-source Lagramge tool will be used in the practical part to demonstrate its ability to simultaneously evaluate thousands of nonlinear regression models defined through a custom-made grammar of the possible equations to be considered. The exercises will include learning both ordinary and differential equation models. An in-house evaluation tool for the differential equation models will also be made available. At the end of the tutorial, the participants should be capable of replicating existing experiments and modifying them to serve their own research needs.
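The core idea of equation discovery can be sketched in a few lines: enumerate the candidate model structures a grammar generates, fit each structure's parameters to the data, and keep the best-fitting equation. The toy example below is illustrative only, with a four-production grammar and one coefficient per model; it is not Lagramge's actual grammar or API.

```python
import math

# Toy grammar of candidate structures E -> c * B, B -> x | x^2 | sin(x) | exp(x):
# each candidate is a basis function with a single coefficient fitted by least squares.
BASIS = {
    "c*x":      lambda x: x,
    "c*x^2":    lambda x: x * x,
    "c*sin(x)": math.sin,
    "c*exp(x)": math.exp,
}

def fit_coefficient(g, xs, ys):
    """Closed-form least-squares coefficient for the model y ~ c * g(x)."""
    num = sum(g(x) * y for x, y in zip(xs, ys))
    den = sum(g(x) ** 2 for x in xs)
    return num / den if den else 0.0

def discover(xs, ys):
    """Enumerate the grammar's candidate equations and keep the best fit."""
    best = None
    for label, g in BASIS.items():
        c = fit_coefficient(g, xs, ys)
        sse = sum((y - c * g(x)) ** 2 for x, y in zip(xs, ys))
        if best is None or sse < best[2]:
            best = (label, c, sse)
    return best

xs = [0.1 * i for i in range(1, 30)]
ys = [2.5 * x * x for x in xs]          # hidden law: y = 2.5 x^2
name, coeff, sse = discover(xs, ys)
```

Lagramge extends this idea to deep, recursive grammars and to differential equation models, but the enumerate-fit-rank loop is the same.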

Tutorial Title: A Computational Intelligence Library
Presenters: Andries Engelbrecht and Gary Pampara, University of Pretoria, South Africa Related Symposium: General Download Tutorial Here Abstract: This tutorial will provide a short introduction to a software library designed specifically to aid CI research. The talk will first show that the current mechanisms and general workflow of CI research have many pitfalls and misunderstandings that are not normally catered for. It will then discuss how these concerns are addressed and handled in a principled manner by the software library. The talk will also briefly introduce how functional programming can benefit the CI community by exploiting common abstractions and derived operations. The software library will be discussed with a focus on the core design decisions made for the sole purpose of aiding CI research. From these designs, several CI-specific abstractions that model CI algorithmic components have been identified and formalized; they facilitate the research process by allowing individual pieces to be composed together, creating larger computational structures that ultimately represent an algorithm definition. Examples of algorithmic formulation will be shown, together with how such formulations may be combined to create new formulations, ranging from swarm intelligence algorithms (PSO, DE, GA, etc.) to other algorithmic formulations such as cooperative evolution, hyper-heuristics and multiobjective computation. Experimentation will be demonstrated, whereby algorithmic components are composed together in an experimental environment, removing the need to explicitly write programs to test ideas. As a result of this tutorial, researchers and practitioners will gain a better understanding of these very important aspects of principled experimental design, addressing the issues highlighted and receiving guidelines on how an open-source software library can aid algorithmic verification and experimentation.
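The component-composition idea can be illustrated independently of the library itself. In the hedged sketch below (plain Python with invented names; this is not the library's actual API), an algorithm definition is literally a composition of three state-transforming components.

```python
import random

def compose(*steps):
    """Chain CI components: each step maps an algorithm state to a new state."""
    def pipeline(state):
        for step in steps:
            state = step(state)
        return state
    return pipeline

def mutate(state):
    """Gaussian perturbation of every individual."""
    rng = state["rng"]
    state["offspring"] = [x + rng.gauss(0, 0.3) for x in state["pop"]]
    return state

def evaluate(state):
    """Rank parents and offspring together by fitness."""
    state["ranked"] = sorted(state["pop"] + state["offspring"], key=state["f"])
    return state

def select(state):
    """(mu + lambda) truncation selection."""
    state["pop"] = state["ranked"][: len(state["pop"])]
    return state

# An algorithm definition is a composition of components.
generation = compose(mutate, evaluate, select)

rng = random.Random(1)
state = {"rng": rng, "f": abs, "pop": [rng.uniform(-10, 10) for _ in range(10)]}
for _ in range(60):
    state = generation(state)
best = min(state["pop"], key=abs)
```

Swapping `mutate` for a crossover component, or `select` for a Pareto-based one, yields a new algorithm definition without rewriting the driving loop, which is the composability benefit the tutorial describes.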

Tutorial Title: Learning in Nonstationary Environments: Perspectives and Applications
Presenters: Giacomo Boracchi and Gregory Ditzler, Politecnico di Milano, Italy & University of Arizona, USA Related Symposium: CIDUE'15 Download Part 1 of the Tutorial Here and Part 2 Here Abstract: Many machine learning techniques assume that training and testing data are sampled from the same probability distribution. Unfortunately, in an increasing number of real-world learning scenarios data arrive in a stream, and the probabilistic properties of the data-generating process might change with time, violating this assumption. Any algorithm or model that does not account for such change is almost certainly going to fail when data are sampled from a drifting or changing distribution, i.e., a non-stationary environment (NSE). The problem of learning in NSEs has drawn much attention in the last few years, particularly in the classification literature, where the problem is typically referred to as learning under concept drift. Learning in NSEs is a challenging problem because concept drift occurs unpredictably and may change the data-generating process into an unforeseen state. The literature boasts algorithms for learning in NSEs, which can be broadly divided into two main learning strategies: (a) undergoing continuous adaptation to match the most recent concept (passive approaches), or (b) steadily monitoring the data stream to detect concept drift and react when needed (active approaches).
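A minimal instance of the active strategy is a change detector that monitors a statistic of the stream and raises an alarm when it drifts from a reference value. The sketch below (a simple sliding-window mean test; thresholds and names are illustrative and not from the tutorial) flags an abrupt drift.

```python
import random

def detect_drift(stream, window=50, threshold=0.25):
    """Active approach: compare the mean of a sliding window against a
    reference mean frozen at the start; flag drift when they diverge."""
    reference = None
    buf = []
    for t, value in enumerate(stream):
        buf.append(value)
        if len(buf) > window:
            buf.pop(0)
        if len(buf) == window:
            mean = sum(buf) / window
            if reference is None:
                reference = mean          # estimate of the initial concept
            elif abs(mean - reference) > threshold:
                return t                  # drift detected at this time step
    return None

rng = random.Random(0)
# A stream whose generating distribution shifts abruptly at t = 300.
stream = ([rng.gauss(0.0, 0.1) for _ in range(300)]
          + [rng.gauss(1.0, 0.1) for _ in range(300)])
alarm = detect_drift(stream)
```

On detection, an active learner would typically discard or reweight the outdated model and retrain on post-change data; passive methods instead adapt continuously without an explicit alarm.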

Tutorial Title: Evolutionary Algorithms and Reinforcement Learning: trends and perspectives
Presenter: Madalina M. Drugan, Vrije Universiteit Brussel, Belgium Related Symposium: General Download Tutorial Here Abstract: A recent trend in evolutionary algorithms (EAs) transfers expertise from and to other areas of machine learning. An interesting novel symbiosis considers: i) reinforcement learning (RL), which learns difficult, dynamic, elaborate tasks online and offline, requiring lots of computational resources, and ii) EAs, whose main strengths are their elegance and computational efficiency. These two techniques address the same problem of maximizing the agent's reward in difficult stochastic environments that can include partial observations. Sometimes they exchange techniques in order to improve their theoretical and empirical efficiency, such as computational speed for online learning, and robust behaviour for offline optimisation algorithms. For example, multiobjective RL uses tuples of rewards instead of a single reward value, and techniques from multiobjective EAs should be integrated to improve the exploration/exploitation trade-off in complex and large multiobjective environments. The problem of selecting the best genetic operator is similar to the problem that an agent faces when choosing between alternatives in achieving its goal of maximising its cumulative expected reward. Practical approaches select the RL method that best solves the online operator selection problem.
The scope of this tutorial is to discuss technical resemblances and differences between learning with RL and with EAs, in order to generate new synergies between the two methods. Although both paradigms optimize some quantity of interest, the methodology, terminology and basic assumptions about the environment are quite different. For example, the exploitation/exploration trade-off has a different meaning in EAs and in RL.
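The operator-selection analogy can be made concrete with a bandit-style sketch: treat each genetic operator as an arm, track its average reward, and balance exploration against exploitation. The example below (epsilon-greedy selection over three hypothetical operators; all success rates are invented) is one simple instance of this idea.

```python
import random

def select_operator(rewards, counts, rng, eps=0.2):
    """Epsilon-greedy operator selection: mostly pick the operator with the
    best average reward so far, sometimes explore a random one."""
    if rng.random() < eps or not any(counts):
        return rng.randrange(len(rewards))
    return max(range(len(rewards)),
               key=lambda i: rewards[i] / counts[i] if counts[i] else 0.0)

def run(trials=2000, seed=0):
    rng = random.Random(seed)
    # Hypothetical operators with unknown success probabilities; the learner
    # must discover online that operator 2 yields the best expected reward.
    true_rates = [0.2, 0.5, 0.8]
    rewards = [0.0, 0.0, 0.0]
    counts = [0, 0, 0]
    for _ in range(trials):
        op = select_operator(rewards, counts, rng)
        counts[op] += 1
        rewards[op] += 1.0 if rng.random() < true_rates[op] else 0.0
    return counts

counts = run()
```

In an EA, "reward" would be the fitness improvement produced by applying the operator, so the selection mechanism adapts to whichever operator currently helps most.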

Tutorial Title: Introduction to Dynamic Multiobjective Optimization and its Challenges
Presenter: Marde Helbig, University of Pretoria, South Africa Related Symposium: MCDM'15 & CIDUE'15 Download Tutorial Here Abstract: Most optimization problems in real life have more than one objective, with at least two objectives in conflict with one another and at least one objective that changes over time. These kinds of optimization problems are referred to as dynamic multiobjective optimization (DMOO) problems. Instead of restarting the optimization process after a change in the environment has occurred, previous knowledge is used; if the changes are small enough, this may lead to new solutions being found much more quickly. Most research in multiobjective optimization has been conducted on static problems, and most research on dynamic problems has been conducted on single-objective optimization. The goal of a DMOO algorithm (DMOA) is to find an optimal set of solutions that is as close as possible to the true set of solutions (as in static MOO) and that contains a diverse set of solutions. However, in addition to these goals, a DMOA also has to track the changing set of optimal solutions over time. Therefore, the DMOA also has to deal with the problems of a lack of diversity and outdated memory (as in dynamic single-objective optimization). This tutorial will introduce the participants to the field of DMOO by discussing: benchmark functions and performance measures that have been proposed, and the issues related to each of these; algorithms that have been proposed to solve DMOO problems; issues with comparing DMOAs' performance and ensuring a fair comparison; analysing the results, and why traditional approaches used for static MOO are not necessarily adequate; and challenges in the field that provide interesting research opportunities.
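The benefit of reusing previous knowledge after a small environment change can be sketched with a single-objective simplification (the multiobjective case adds Pareto-set tracking, which does not fit in a few lines). In the illustrative code below, with all names and constants invented, a population that converged before the change gives a much better warm start than a random restart.

```python
import random

def optimize(pop, f, steps, rng):
    """Tiny elitist evolutionary loop, standing in for a full DMOA."""
    for _ in range(steps):
        cand = [x + rng.gauss(0, 0.3) for x in pop]
        pop = sorted(pop + cand, key=f)[: len(pop)]
    return pop

def warm_start_error(reuse, seed):
    """Converge on f(x) = x^2, then shift the optimum slightly (to x = 0.5)
    and measure the best error BEFORE any re-optimization takes place."""
    rng = random.Random(seed)
    pop = [rng.uniform(-10, 10) for _ in range(10)]
    pop = optimize(pop, lambda x: x * x, steps=40, rng=rng)
    shifted = lambda x: (x - 0.5) ** 2        # small environment change
    if not reuse:
        pop = [rng.uniform(-10, 10) for _ in range(10)]  # restart from scratch
    return min(shifted(x) for x in pop)

SEEDS = range(50)
avg_reuse = sum(warm_start_error(True, s) for s in SEEDS) / 50
avg_restart = sum(warm_start_error(False, s) for s in SEEDS) / 50
```

A real DMOA must additionally detect the change (e.g. by re-evaluating sentinel solutions) and restore diversity, since the reused population may have converged too tightly to follow the moving optima.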

Tutorial Title: Multimodal recognition of mental states (emotions, dispositions, pain)
Presenters: Dr Holger Hoffmann, Prof Harald C. Traue, Dr Steffen Walter (Medical Psychology) & Dr Friedhelm Schwenker, Sascha Meudt (Institute of Neural Information Processing), Ulm University, Germany Related Symposium: CIDM'15, CICARE'15 & CIR2AT'15 Download Tutorial Here Topics:
- Theoretical framework of mental states
- Categories of mental states in HCI, health care and other fields
- The problem of reliability and validity of elicitation and observation of mental states
- Experimental elicitation of mental states for the training of classifiers
- Measurement of psychobiological and behavioral parameters
- Labeling tools, feature analysis, data mining and accuracy of classification processes
- Fusion of multimodal data for classifiers of mental states
- Comparison of various classification methods
- Applications of mental states in HCI, companion technologies and other fields
- Case studies for emotion, disposition and pain recognition
- Ethical considerations of automated recognition of mental states
Abstract: The tutorial covers a span of topics from theoretical concepts of mental states to application domains and case studies. First, the theoretical background of mental states will be presented and discussed. Possible and relevant candidate mental states will be discussed with respect to applied fields, and the reliability and validity of training material that can be produced in experimental settings. Labeling tools for structuring naturally occurring mental states in HCI will be compared with systematically produced experimental data. The application of data mining techniques and fusion results will be discussed on empirical data from case studies of HCI and the monitoring of health data.
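Decision-level fusion of multimodal classifiers can be sketched very simply: combine the per-modality class posteriors into one fused distribution. The example below uses weighted averaging over three hypothetical modalities; all probabilities and weights are invented for illustration and are not from the tutorial.

```python
def fuse_probabilities(modal_outputs, weights=None):
    """Late (decision-level) fusion: weighted average of per-modality
    class-probability vectors, e.g. from video, audio and biosignal channels."""
    n_classes = len(modal_outputs[0])
    if weights is None:
        weights = [1.0 / len(modal_outputs)] * len(modal_outputs)
    fused = [0.0] * n_classes
    for w, probs in zip(weights, modal_outputs):
        for k in range(n_classes):
            fused[k] += w * probs[k]
    total = sum(fused)
    return [p / total for p in fused]   # renormalize

# Hypothetical posteriors for the classes (neutral, pain) from three modalities:
video = [0.6, 0.4]
audio = [0.3, 0.7]
biosig = [0.2, 0.8]
# Weight modalities by their (assumed) per-channel reliability.
fused = fuse_probabilities([video, audio, biosig], weights=[0.2, 0.3, 0.5])
label = max(range(2), key=lambda k: fused[k])
```

Early (feature-level) fusion, which concatenates features before classification, is the usual alternative; the tutorial's comparison of classification methods covers both families.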

Tutorial Title: Hyper-Heuristics and Computational Intelligence
Presenter: Nelishia Pillay, University of KwaZulu-Natal, South Africa Related Symposium: General Download Tutorial Here Abstract: Hyper-heuristics is a rapidly developing domain which has proven to be effective at providing generalized solutions to problems within and across problem domains. Initially the focus of this research was combinatorial optimization; since its inception, however, the application of hyper-heuristics has expanded and contributed to various facets of computational intelligence. Hyper-heuristics have successfully been applied to various domains including timetabling, vehicle routing, decision tree induction, packing problems, text classification, multiobjective optimization and dynamic environments, amongst others. Hyper-heuristics have also been making inroads into automated intelligent system design. This has ranged from parameter tuning and the generation of components or operators, e.g. a crossover operator in a genetic algorithm or an annealing schedule in simulated annealing, to automating the flow of control in computational intelligence techniques and the hybridization of computational intelligence techniques. The tutorial will provide an introduction to hyper-heuristics, highlighting the role played by computational intelligence techniques in hyper-heuristics and the contribution of hyper-heuristics to computational intelligence. The tutorial will end with a discussion session on contributions, challenges and future directions of hyper-heuristics in computational intelligence.
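A selection hyper-heuristic can be sketched as a high-level controller that repeatedly picks one of several low-level heuristics based on their past performance. The toy example below (invented heuristics and parameters, not from the tutorial) minimizes a simple function this way.

```python
import random

def hyper_heuristic(solution, low_level, score, iters=500, seed=0):
    """Selection hyper-heuristic: at each step pick a low-level heuristic
    (greedily by average past improvement, with some random exploration),
    apply it, and keep the move only if it does not worsen the solution."""
    rng = random.Random(seed)
    gains = [0.0] * len(low_level)
    uses = [1] * len(low_level)           # start at 1 to avoid division by zero
    best = score(solution)
    for _ in range(iters):
        if rng.random() < 0.2:
            h = rng.randrange(len(low_level))          # explore
        else:
            h = max(range(len(low_level)), key=lambda i: gains[i] / uses[i])
        cand = low_level[h](solution, rng)
        s = score(cand)
        uses[h] += 1
        if s <= best:                      # accept non-worsening moves
            gains[h] += best - s
            solution, best = cand, s
    return solution, best

# Toy problem: minimize a sum of squares, with two hypothetical low-level
# heuristics of different step sizes.
small_step = lambda x, rng: [v + rng.gauss(0, 0.05) for v in x]
big_step = lambda x, rng: [v + rng.gauss(0, 1.0) for v in x]
sol, val = hyper_heuristic([5.0, -3.0], [small_step, big_step],
                           lambda x: sum(v * v for v in x))
```

The controller operates on the heuristic space rather than the solution space directly, which is what gives hyper-heuristics their cross-domain generality.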

Tutorial Title: A Gentle Introduction to the Time Complexity Analysis of Evolutionary Algorithms
Presenter: Pietro S. Oliveto, The University of Sheffield, UK Related Symposium: FOCI'15 Download Tutorial Here
Abstract: Great advances have been made in recent years towards the runtime complexity analysis of evolutionary algorithms for combinatorial optimisation problems. Much of this progress has been due to the application of techniques from the study of randomised algorithms. The first pieces of work, started in the 90s, were directed towards analysing simple toy problems with significant structures. This work had two main goals:
- to understand on which kind of landscapes EAs are efficient, and when they are not;
- to develop the first basis of general mathematical techniques needed to perform the analysis.
Thanks to this preliminary work, it is nowadays possible to analyse the runtime of evolutionary algorithms on different combinatorial optimisation problems. In this beginners' tutorial, we give a basic introduction to the most commonly used techniques, assuming no prior knowledge about time complexity analysis.
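A canonical object of such analyses is the (1+1) EA on the OneMax problem (maximize the number of one-bits), for which the expected optimisation time is known to be O(n log n). The sketch below implements the algorithm as usually analysed, with the standard 1/n mutation rate; the simulation itself is merely illustrative.

```python
import random

def one_plus_one_ea(n=50, seed=0, max_iters=100000):
    """(1+1) EA on OneMax: flip each bit independently with probability 1/n
    and accept the offspring if it is at least as good as the parent.
    Returns the number of iterations needed to reach the all-ones string."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    fitness = sum(x)
    for t in range(1, max_iters + 1):
        # Standard bit mutation with rate 1/n.
        y = [b ^ 1 if rng.random() < 1.0 / n else b for b in x]
        fy = sum(y)
        if fy >= fitness:                  # elitist acceptance
            x, fitness = y, fy
        if fitness == n:
            return t
    return None

steps = one_plus_one_ea()
```

The classic fitness-level ("artificial fitness layers") argument taught in such tutorials bounds the expected time by e*n*(ln n + 1), i.e. roughly 530 iterations for n = 50, which single runs like this one typically reflect.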

Tutorial Title: Human Meta-Cognition Inspired Algorithms in Neural Networks and Particle Swarm Optimisation
Presenter: Prof Suresh Sundaram, Nanyang Technological University, Singapore Related Symposium: General Download Tutorial Here
Abstract: This tutorial focuses on recently developed neural network learning algorithms and particle swarm optimization algorithms inspired by models of human metacognition. Metacognition is defined as knowledge about knowledge. The model of human metacognition developed by Nelson and Narens [1] is the simplest model of metacognition available in the literature. This model is characterized by two components: a cognitive component that is the representation of knowledge, and a metacognitive component that enables measured acquisition of knowledge. To enable this measured acquisition, the metacognitive component has a dynamic model of the cognitive component, and continuously monitors and controls the learning ability of the cognitive component. In effect, metacognition involves continuous monitoring of the knowledge represented by the cognition, and using this understanding of knowledge representation to actively control the learning process. The tutorial presents a) the Metacognitive Radial Basis Function (McRBF) neural network and its sequential learning algorithm and b) the Self-regulating Particle Swarm Optimization (SRPSO) algorithm. McRBF has a cognitive and a metacognitive component. An RBF neural network that is able to represent knowledge is the cognitive component, and a self-regulatory learning mechanism is its metacognitive component. For every sample instance in the training set, the self-regulatory learning mechanism compares the knowledge represented by the cognitive component with that of the sample instance. Based on its judgment, it chooses suitable learning strategies, and decides what-to-learn, when-to-learn and how-to-learn in a metacognitive environment. We present the architecture and learning algorithm of McRBF [2]. As McRBF self-regulates its own learning process, it has better generalization abilities. The decision-making abilities of McRBF are demonstrated using standard benchmark classification problems and also with some real applications in the areas of recognizing human actions and medical informatics problems.
SRPSO: Studies in human cognitive psychology have indicated that the best planners regulate their strategies with respect to the current state and their perception of the best experiences of others. Using these principles, SRPSO proposes two new learning strategies for the PSO algorithm. The first uses a self-regulating inertia weight and the second uses the self-perception of the global search direction. The self-regulating inertia weight is employed by the best particle for better exploration, and the self-perception of the global search direction is employed by the rest of the particles for intelligent exploitation of the solution space. The SRPSO algorithm has been evaluated using the 25 benchmark functions from CEC 2005 and a real-world problem in radar system design. The two proposed learning strategies help SRPSO achieve faster convergence and provide better solutions to most of the problems.
Background knowledge expected: a basic knowledge of artificial neural networks and search-based optimization.
References:
- T. O. Nelson and L. Narens, "Metamemory: A theoretical framework and new findings", in Metacognition: Core Readings, T. O. Nelson (Ed.), pp. 9-24. Allyn and Bacon: Boston, 1980.
- G. Sateesh Babu, S. Suresh, "Sequential projection-based metacognitive learning in a radial basis function network for classification problems", IEEE Trans. on Neural Networks and Learning Systems, 24(2), pp. 194-206, 2013.
- M. R. Tanweer, S. Suresh, N. Sundararajan, "Self regulating particle swarm optimization algorithm", Information Sciences 294, pp. 182-202, 2015.
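As a rough illustration of the self-regulating inertia idea, the sketch below gives a simplified PSO variant in which the best particle keeps a high inertia weight for exploration while the other particles follow a decreasing inertia schedule with standard cognitive/social attraction. This is our own loose interpretation for illustration, not the published SRPSO update rules.

```python
import random

def srpso_like(f, dim=2, particles=15, iters=150, seed=0):
    """Simplified self-regulating PSO sketch: the current best particle uses a
    high inertia weight (exploration); the rest use a linearly decreasing one
    plus cognitive and social attraction (exploitation)."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(particles)]
    vel = [[0.0] * dim for _ in range(particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pos, key=f)[:]
    for t in range(iters):
        w_low = 1.05 - (1.05 - 0.5) * t / iters   # decreasing inertia schedule
        best_i = min(range(particles), key=lambda i: f(pbest[i]))
        for i in range(particles):
            w = 1.05 if i == best_i else w_low    # self-regulation: leader explores
            for d in range(dim):
                cog = 1.49 * rng.random() * (pbest[i][d] - pos[i][d])
                soc = 0.0 if i == best_i else \
                    1.49 * rng.random() * (gbest[d] - pos[i][d])
                vel[i][d] = w * vel[i][d] + cog + soc
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pos[i]) < f(gbest):
                    gbest = pos[i][:]
    return gbest

sphere = lambda x: sum(v * v for v in x)
best = srpso_like(sphere)
```

The published algorithm additionally lets non-best particles selectively perceive the global search direction; for those details see the Tanweer et al. reference above.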

Tutorial Title: Dealing with Uncertainties in Computing: from Probabilistic and Interval Uncertainty to Combination of Different Approaches, with Application to Geoinformatics, Bioinformatics, and Engineering
Presenter: Vladik Kreinovich University of Texas at El Paso, USA Related Symposium: CIES'15 Download Tutorial Here Abstract: Most data processing techniques traditionally used in scientific and engineering practice are statistical. These techniques are based on the assumption that we know the probability distributions of measurement errors etc.
In practice, we often do not know the distributions; we only know a bound D on the measurement accuracy. Hence, after we get the measurement result X, the only information that we have about the actual (unknown) value x of the measured quantity is that x belongs to the interval [X-D, X+D]. Techniques for data processing under such interval uncertainty are called interval computations; these techniques have been developed since the 1950s.
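Interval computations propagate such enclosures through arithmetic. The minimal sketch below (two interval operations implemented for illustration; it is not from the tutorial materials and ignores outward rounding, which rigorous implementations require) shows how guaranteed bounds combine.

```python
def iadd(a, b):
    """[a1, a2] + [b1, b2] = [a1 + b1, a2 + b2]."""
    return (a[0] + b[0], a[1] + b[1])

def imul(a, b):
    """Product interval: min and max over the four endpoint products."""
    products = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(products), max(products))

# A measurement X = 3.0 with accuracy bound D = 0.1 gives x in [X-D, X+D]:
x = (2.9, 3.1)
y = (1.0, 2.0)    # a second quantity known only up to an interval
s = iadd(x, y)    # guaranteed enclosure of x + y: (3.9, 5.1)
p = imul(x, y)    # guaranteed enclosure of x * y: (2.9, 6.2)
```

Whatever the true values of x and y are within their intervals, the true sum and product are guaranteed to lie inside the computed enclosures, which is the defining property of interval computations.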
In many practical problems, we have a combination of different types of uncertainty, where we know the probability distribution for some quantities, intervals for other quantities, and expert information for yet other quantities.
There exists a lot of theoretical research and there are many practical applications dealing with these types of uncertainty: interval, fuzzy, and combined. However, even for the simplest basic data processing techniques, a lot of research is often still necessary to transition from probabilistic to interval and fuzzy uncertainty.
The purpose of this talk is to describe the theoretical background for interval and combined techniques, to describe the existing practical applications, and ideally, to come up with a roadmap for such techniques.
We start with the problem of chip design in computer engineering. In this problem, traditional interval methods lead to estimates with excess width. The reason for this width is that often, in addition to the intervals of possible values of inputs, we also have partial information about the probabilities of different values within these intervals  and standard interval techniques ignore this information.
It is therefore desirable to extend interval techniques to situations when, in addition to intervals, we also have partial probabilistic information. In this talk, we give a brief overview of these techniques, and we emphasize the following three application areas: computer engineering, bioinformatics, and geoinformatics.

Tutorial Title: Computational Intelligence in Bioinformatics
Presenters: Ed Keedwell (University of Exeter, UK) & Jonathan Mwaura (University of Pretoria, South Africa) Related Symposium: CICARE, General Abstract: For many years, biology has been dealing with an avalanche of data created by new, faster and more capable measuring devices, particularly in areas such as sequencing, where the price of sequencing a genome has now dropped below $1000. Simply storing and processing these datasets, which can extend to tera- or even petabytes, creates its own technological challenges. However, discovering useful, usable information in this data is an even greater challenge, and many computational intelligence techniques have been applied to this task, including CI methods for classification, clustering and time series analysis. This tutorial will introduce the common problems tackled in bioinformatics, including genome-wide association studies, gene expression analysis, gene regulatory network construction and protein folding, and will describe some of the computational intelligence techniques used to discover new information and to help guide experimentation in the biological sciences.
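As one tiny example of the clustering task mentioned above, the sketch below applies plain k-means to synthetic expression-like profiles. The data and parameters are invented for illustration; real CI pipelines for gene-expression analysis involve normalization, feature selection and far richer models.

```python
import random

def kmeans(points, k=2, iters=15, seed=0):
    """Plain k-means clustering, as commonly applied to gene-expression
    profiles (one row per gene, one column per experimental condition)."""
    rng = random.Random(seed)
    dim = len(points[0])
    centers = [p[:] for p in rng.sample(points, k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Assign each profile to its nearest cluster centre.
            j = min(range(k),
                    key=lambda c: sum((p[d] - centers[c][d]) ** 2 for d in range(dim)))
            clusters[j].append(p)
        for c in range(k):
            if clusters[c]:
                centers[c] = [sum(p[d] for p in clusters[c]) / len(clusters[c])
                              for d in range(dim)]
    return centers, clusters

# Synthetic "profiles": up- vs down-regulated genes across 4 conditions.
rng = random.Random(1)
up = [[2.0 + rng.gauss(0, 0.2) for _ in range(4)] for _ in range(15)]
down = [[-2.0 + rng.gauss(0, 0.2) for _ in range(4)] for _ in range(15)]
centers, clusters = kmeans(up + down)
```

Grouping genes with similar profiles in this way is a common first step toward hypothesizing co-regulation, which the tutorial's gene regulatory network topic builds upon.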

Tutorial Title: Sensitivity Analysis and Robust Design in Engineering Applications - An Interval Analysis Perspective
Presenter: Paolo Rocca, University of Trento, Italy Related Symposium: General Download Tutorial Here
Abstract: Interval Analysis (IA) consists of a set of rules and tools for the analysis and optimization of functions whose variables are intervals of numbers rather than single values, as in classical arithmetical/optimization problems. For example, an interval of real values (a real interval) can be defined as a one-dimensional compact set (a segment) between two extreme points, namely the minimum and maximum interval values. Complex intervals also exist, and ad-hoc rules are defined within IA for the arithmetical operations between them.
Currently, the use of IA has been limited to some pioneering works in engineering, even though it has several attractive features that can overcome some limitations of current state-of-the-art approaches and theories. Let us consider the following issues:
- IA has an intrinsic capability to deal with uncertainties, which are always present when real devices or systems and experimental measurements are at hand;
- analytical equations and relationships can be easily reformulated and addressed by including intervals of numbers once the fundamentals of IA are known;
- the bounds of a function evaluated over an interval are determined in a straightforward manner, without the need to evaluate the function at all the (infinitely many) points of the interval;
- IA enables the use of iterative optimization strategies for robust design and the solution of inversion problems, also offering ad-hoc global optimization techniques able to identify the global optimum with the desired level of accuracy.
The tutorial aims to provide an introduction to the fundamentals of Interval Analysis, starting from intuitive explanations and proceeding to rigorous mathematical and methodological insights. A review of recent applications of IA in the field of sensitivity analysis, robust design and inversion will be illustrated, with particular emphasis on antenna arrays and inverse scattering problems.
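One of the features listed above, computing the bounds of a function over an interval, is easy to sketch, and the sketch also shows why naive (so-called natural) interval extensions can overestimate the true range. In the illustrative code below (our own minimal implementation, not from the tutorial), f(x) = x*(1 - x) over [0, 1] has true range [0, 0.25], but the interval evaluation returns [0, 1] because the two occurrences of x are treated as independent.

```python
def isub(a, b):
    """[a1, a2] - [b1, b2] = [a1 - b2, a2 - b1]."""
    return (a[0] - b[1], a[1] - b[0])

def imul(a, b):
    """Product interval: min and max over the four endpoint products."""
    products = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(products), max(products))

# Natural interval extension of f(x) = x * (1 - x) over x in [0, 1]:
x = (0.0, 1.0)
one = (1.0, 1.0)
enclosure = imul(x, isub(one, x))   # guaranteed, but wider than the true range

# True range of f on [0, 1] (maximum at x = 0.5):
true_max = 0.25
```

The enclosure is always guaranteed to contain the true range; the extra width (the "dependency problem") is exactly what the ad-hoc IA-based global optimization techniques mentioned above work to reduce, e.g. by subdividing the interval.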