Internships in Machine Learning and Deep Learning at the “Machine Learning and Optimization Group” at RIST


The following list includes possible topics for master's thesis projects to be developed by internship students in collaboration with researchers from the Machine Learning and Optimization Group, led by Dr. Luigi Malagò, at the Romanian Institute of Science and Technology (RIST). RIST is a private not-for-profit research institute located in Cluj-Napoca, currently funded by European structural funds. The Machine Learning and Optimization Group conducts research with a specific focus on Machine Learning and Deep Learning methodologies, and all internship projects will focus on research activities connected to the projects developed at the institute. Depending on their outcome, the internships could lead to research papers submitted to scientific workshops and conferences.

All interested students are encouraged to apply by sending an email to deepriemann.internships@rist.ro, including their CV and a motivation letter. Please specify "Internship Application" in the subject. The deadline for applications is 20 December 2018.

Informal inquiries can be sent directly to Dr. Luigi Malagò at malago@rist.ro.

Geometric Methods for Deep Reinforcement Learning

Reinforcement Learning refers to a learning paradigm in Machine Learning in which an agent learns to accomplish a complex task by interacting with the environment and collecting feedback as a consequence of its actions. The focus of the project will be first on the implementation of state-of-the-art algorithms for Deep Reinforcement Learning, such as the popular Trust Region Policy Optimization (TRPO, https://arxiv.org/pdf/1502.05477.pdf), and next on the design of novel algorithms based on geometric frameworks. These algorithms will be used to solve common benchmark tasks, such as those proposed in https://gym.openai.com.
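The agent-environment interaction at the core of this paradigm can be sketched in a few lines. The toy environment and policies below are purely illustrative (the `ToyCorridor` class and the policy names are ours, standing in for the gym benchmarks):

```python
import random

class ToyCorridor:
    """Toy environment: the agent starts at position 0, the goal is at 5.
    Actions: 0 = step left, 1 = step right. Reward 1 on reaching the goal."""
    def __init__(self, goal=5):
        self.goal = goal
        self.pos = 0

    def reset(self):
        self.pos = 0
        return self.pos

    def step(self, action):
        self.pos = max(self.pos + (1 if action == 1 else -1), 0)
        done = self.pos == self.goal
        reward = 1.0 if done else 0.0
        return self.pos, reward, done

def run_episode(env, policy, max_steps=100):
    """Collect one episode following `policy` (a function state -> action)."""
    state = env.reset()
    total_reward = 0.0
    for _ in range(max_steps):
        action = policy(state)
        state, reward, done = env.step(action)
        total_reward += reward
        if done:
            break
    return total_reward

always_right = lambda s: 1  # a deterministic policy that reaches the goal
print(run_episode(ToyCorridor(), always_right))  # reward 1.0
```

A policy-gradient method such as TRPO replaces the hand-written policy with a neural network whose parameters are updated to increase the expected episode return.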

How does a Neural Network work?

The purpose of this project is to investigate the internal flow of information in Deep Neural Networks from an information-theoretic perspective, along the lines of research on the information bottleneck theory (https://arxiv.org/pdf/1503.02406.pdf). The project aims at shedding light on the internal mechanisms which take place during the training of a Neural Network, which underlie the high generalization power and success of Deep Learning models, and at characterizing them using a framework which combines information theory and the geometry of probability distributions.
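The central quantity in this line of work is the mutual information between layers and inputs/labels. For discrete variables it can be computed directly from a joint probability table, as in the sketch below (the `mutual_information` helper is ours, for illustration only):

```python
import numpy as np

def mutual_information(p_xy):
    """I(X;Y) in nats, computed from a joint probability table p_xy."""
    p_x = p_xy.sum(axis=1, keepdims=True)   # marginal of X
    p_y = p_xy.sum(axis=0, keepdims=True)   # marginal of Y
    mask = p_xy > 0                          # avoid log(0) on zero entries
    return float(np.sum(p_xy[mask] * np.log(p_xy[mask] / (p_x * p_y)[mask])))

# Independent binary variables: I(X;Y) = 0.
indep = np.outer([0.5, 0.5], [0.5, 0.5])
print(mutual_information(indep))   # 0.0

# Perfectly correlated binary variables: I(X;Y) = log 2.
corr = np.array([[0.5, 0.0], [0.0, 0.5]])
print(mutual_information(corr))    # ≈ 0.693
```

The information bottleneck analysis tracks quantities of this kind for the (continuous, high-dimensional) activations of each layer during training, which requires estimators rather than exact tables.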

Generative models based on Generative Adversarial Networks (GAN)

Generative Adversarial Networks (GANs, https://arxiv.org/abs/1406.2661) are very popular generative models in Deep Learning for computer vision applications. GANs are based on game-theoretic intuitions, in which a generator network and a classifying network compete with each other to learn how to generate new samples similar to those in a given dataset.

The project will focus on the study and extension of GANs based on the use of geometric dissimilarity functions, as in the Wasserstein GAN (https://arxiv.org/pdf/1701.07875.pdf).
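The difference between the two training criteria can be sketched numerically. The helpers below (hypothetical names, illustration only) compute the standard non-saturating GAN losses and the Wasserstein losses from raw discriminator/critic outputs:

```python
import numpy as np

def gan_losses(d_real, d_fake, eps=1e-8):
    """Standard (non-saturating) GAN losses.
    d_real: D(x) on real samples, d_fake: D(G(z)) on generated samples,
    both probabilities in (0, 1)."""
    d_loss = -np.mean(np.log(d_real + eps)) - np.mean(np.log(1.0 - d_fake + eps))
    g_loss = -np.mean(np.log(d_fake + eps))   # generator pushes D(G(z)) up
    return d_loss, g_loss

def wgan_losses(c_real, c_fake):
    """Wasserstein GAN losses from an unconstrained critic's outputs."""
    d_loss = -(np.mean(c_real) - np.mean(c_fake))  # critic maximizes the gap
    g_loss = -np.mean(c_fake)                      # generator raises critic scores
    return d_loss, g_loss

# A confident discriminator: low loss for D, high loss for G.
d_loss, g_loss = gan_losses(np.array([0.9, 0.8]), np.array([0.1, 0.2]))
print(round(d_loss, 3), round(g_loss, 3))
```

The Wasserstein formulation replaces the log-likelihood objective with an estimate of a geometric distance between the data and model distributions, which is the kind of dissimilarity function the project would study.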

Derivative-free Methods for Non-convex Optimization

Derivative-free methods and black-box optimization algorithms are of great interest in optimization in the presence of non-differentiable functions. Among them, those based on the optimization of the expected value of the target function, such as CMA-ES (https://arxiv.org/pdf/1604.00772.pdf), have become very popular in the last few years, since they provide state-of-the-art performance. The project aims at developing novel training algorithms for neural networks, able to exploit a geometric framework for probability distributions.
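The idea of optimizing the expected value of the target function under a search distribution can be illustrated with a stripped-down evolution strategy. Unlike full CMA-ES, the sketch below does not adapt the covariance matrix, and all names (`simple_es`, `sphere`) are ours:

```python
import numpy as np

def sphere(x):
    """Smooth test objective f(x) = ||x||^2, with its minimum at the origin."""
    return float(np.sum(x ** 2))

def simple_es(f, dim=5, iters=200, pop=20, sigma=0.3, seed=0):
    """Minimal evolution strategy in the spirit of CMA-ES: sample a
    population around the current mean of a Gaussian search distribution,
    then move the mean toward the best-ranked samples. Full CMA-ES also
    adapts the step size and the covariance matrix."""
    rng = np.random.default_rng(seed)
    mean = rng.standard_normal(dim)
    n_elite = pop // 4
    for _ in range(iters):
        samples = mean + sigma * rng.standard_normal((pop, dim))
        fitness = np.array([f(s) for s in samples])
        elite = samples[np.argsort(fitness)[:n_elite]]
        mean = elite.mean(axis=0)   # shift the search distribution
    return mean

best = simple_es(sphere)
print(sphere(best))   # close to the optimum f = 0, no gradients used
```

The geometric view enters because the update above is a step in the space of Gaussian search distributions; natural-gradient versions of this step are exactly what an information-geometric framework formalizes.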

Deep Learning for Mobile Devices

In recent years an increasing number of applications based on deep learning have become available on our smartphones. However, there exists a large number of potential applications for mobile devices which currently cannot be developed, due to the limited computational power of our devices, in the absence of dedicated cloud GPU computing services. The project aims at the development of novel methodologies for deep learning which could take advantage of portable dedicated chips, such as the Intel Neural Compute Stick (https://software.intel.com/en-us/neural-compute-stick), in connection with techniques such as network compression, quantized neural networks, and training and inference with low precision.
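One of the techniques mentioned above, quantization, can be sketched in a few lines. This is a simplified symmetric int8 post-training scheme, not the implementation of any specific toolchain:

```python
import numpy as np

def quantize_int8(w):
    """Symmetric post-training quantization of a weight tensor to int8.
    The scale maps the largest-magnitude weight to +/-127."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 tensor."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((4, 4)).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
print(np.abs(w - w_hat).max())   # round-off error, bounded by scale / 2
```

Storing `q` instead of `w` cuts memory by 4x versus float32 and enables integer arithmetic, which is what makes inference feasible on low-power dedicated chips.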

Geometry of Word Embedding for Text Analysis

Word embeddings have become one of the most popular representations for the vocabularies used in text documents. Word2vec (https://arxiv.org/pdf/1310.4546.pdf), one of the most famous word embedding techniques, is an algorithm based on neural networks which provides vector representations of words learned from their contexts in large text corpora. The project aims at characterizing the behavior of Word2vec from a geometric perspective and at proposing novel algorithms for word embedding based on a geometric framework for probability distributions.
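The kind of geometric structure studied here can be illustrated with the well-known word-analogy arithmetic. The vectors below are hand-made toy embeddings chosen to make the analogy work, not trained Word2vec outputs:

```python
import numpy as np

# Toy 3-dimensional embeddings (illustrative only; Word2vec would learn
# hundreds of dimensions from a large corpus).
emb = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.1, 0.8]),
    "man":   np.array([0.1, 0.9, 0.1]),
    "woman": np.array([0.1, 0.1, 0.9]),
    "apple": np.array([0.0, 0.5, 0.1]),
}

def cosine(u, v):
    """Cosine similarity: the angle-based geometry of the embedding space."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def analogy(a, b, c):
    """Word closest to vec(a) - vec(b) + vec(c), excluding the inputs."""
    target = emb[a] - emb[b] + emb[c]
    scores = {w: cosine(target, v) for w, v in emb.items() if w not in (a, b, c)}
    return max(scores, key=scores.get)

print(analogy("king", "man", "woman"))   # "queen" with these toy vectors
```

That linear regularities like this emerge in trained embeddings is an empirical observation; characterizing the geometry behind them is precisely the kind of question the project addresses.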

Quantum Deep Learning

Neural Quantum States (https://arxiv.org/abs/1606.02318) have recently been proposed in the literature on Quantum Deep Learning for the optimization of wave functions which minimize some energy operator. Similarly, Quantum Boltzmann Machines (https://link.aps.org/doi/10.1103/PhysRevX.8.021050) are used for the optimization of functions defined over the space of density operators, such as likelihoods in the quantum case. The project will focus on the design of novel training algorithms for Quantum Deep Learning models based on geometric intuitions coming from quantum information geometry.
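The variational principle behind Neural Quantum States can be sketched on a toy example: minimizing the energy ⟨ψ(θ)|H|ψ(θ)⟩ of a single qubit over a one-parameter ansatz standing in for a neural network. The operator H and the grid search below are purely illustrative:

```python
import numpy as np

# A toy Hermitian "energy" operator on a single qubit.
H = np.array([[1.0, 0.5],
              [0.5, -1.0]])

def energy(theta):
    """Variational energy <psi|H|psi> for the real ansatz
    psi(theta) = (cos theta, sin theta), a unit vector for every theta."""
    psi = np.array([np.cos(theta), np.sin(theta)])
    return float(psi @ H @ psi)

# Crude grid search over the single variational parameter; Neural Quantum
# States instead parameterize psi with a network and use stochastic updates.
thetas = np.linspace(0.0, np.pi, 1000)
best = min(thetas, key=energy)
ground = np.linalg.eigvalsh(H)[0]   # exact ground-state energy for comparison
print(energy(best), ground)         # variational minimum ≈ smallest eigenvalue
```

Because the ansatz here spans all real unit vectors, the variational minimum recovers the exact ground-state energy; with a restricted neural ansatz one obtains an upper bound, and the quality of the optimization procedure becomes the central question.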
