I'm a postdoctoral fellow in machine learning at the University of Toronto. I'm particularly interested in deep reinforcement learning; robotics, dialogue systems, autonomous driving, and video games are my favorite applications. I was previously a member of the SequeL team at INRIA and the NADIA team at Orange Labs.
This website is generated with node.js, express.js, and server-side templating with handlebars.js, using
this data file as content. The CSS, HTML, and client-side JS come from this template.
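The wiring looks roughly like the sketch below. This is only an illustration, assuming the express-handlebars middleware and a data.json content file; the actual file names, routes, and template names on this site may differ.

```js
// Minimal sketch: serve pages rendered from a data file with handlebars templates.
// Assumes express and express-handlebars are installed; all names are illustrative.
const express = require('express');
const { engine } = require('express-handlebars'); // express-handlebars >= 6
const data = require('./data.json');              // hypothetical content file

const app = express();
app.engine('handlebars', engine());
app.set('view engine', 'handlebars');

// Each page is a handlebars template rendered with the data file as its context.
app.get('/', (req, res) => res.render('home', data));

app.listen(3000, () => console.log('listening on http://localhost:3000'));
```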
My BibTeX.
See my one-page resume.
Some of my internet fingerprints (please note that the GitHub repository is mostly from my post-MSc era,
while the Bitbucket one is from my pre-MSc era):
Deep Reinforcement Learning for Traffic Control.
Reinforcement learning for dialog systems optimization with user adaptation. http://ncarrara.fr/others/thesis-nicolas-carrara.pdf
Master's thesis: automated planning under uncertainty with multiple objectives.
Improvements of a new dynamic neural field model: Randomly Spiking Dynamic Neural Fields (RSDNF).
Predicting user behavior on the web.
Adding a model for dynamic neural fields; basic image processing.
Helping students during the lab sessions of RLSS 2019.
Teaching Python, regular expressions, Node.js, etc.
Lectures and lab sessions on reinforcement learning for the MOCAD computer science master's program.
Lab sessions on HTML/CSS/PHP/JavaScript for SIAD master's students.
Tutor for the computer science part of the first year of university.