I make neural networks do stuff.
I am a machine learning specialist at Gauss Algorithmic and a grad student at Masaryk University. I am interested in everything AI, from state-of-the-art techniques to the underlying math.
I work as a researcher and ML engineer at Gauss Algorithmic. My domains are computer vision and natural language processing. I mostly experiment with and apply state-of-the-art methods in deep learning, but working here has also taught me the importance of good software engineering practices such as reproducibility and containerization.
Teaching has helped me learn to communicate effectively and speak in front of an audience. My courses are:
All take place at Masaryk University in Brno.
During high school, I worked part-time for several years at Fenomen Multimedia as a mobile and web frontend developer (Kotlin/Java, React). I developed two publicly available apps, one of which currently has over 10,000 users. Because it was a small company, I gained valuable experience seeing development from a customer-centric perspective.
I volunteered as an active organizer in a local GDG group and co-organized dozens of talks and workshops, including the DevFest 2015 developer festival.
I study in the Machine Learning and Artificial Intelligence master's program at the Faculty of Informatics, Masaryk University.
I finished the AI & Natural Language Processing bachelor's program at the Faculty of Informatics, Masaryk University. My primary interests were machine learning, deep learning, and reinforcement learning. Currently, I focus on natural language processing and computer vision.
My thesis, Risk-Sensitive Reinforcement Learning, received the Dean's Award for Outstanding Final Thesis.
In many scenarios, we need guarantees that an RL agent will avoid certain risks, such as endangering people or causing a crash. In my thesis, I summarized possible ways of quantifying risk and created an extensive overview of RL methods that take risk into consideration. The full text can be found here.
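To give a flavor of what "quantifying risk" means: one popular risk measure in this literature is Conditional Value at Risk (CVaR), the average return over the worst outcomes. Below is a minimal, illustrative sketch (not code from the thesis); the two policies and their distributions are made up for the example:

```python
import numpy as np

def cvar(returns, alpha=0.05):
    """Mean return over the worst alpha-fraction of outcomes."""
    returns = np.sort(np.asarray(returns))          # ascending: worst outcomes first
    k = max(1, int(np.ceil(alpha * len(returns))))  # size of the worst-case tail
    return returns[:k].mean()

# Two hypothetical policies with the same mean return but different tails:
rng = np.random.default_rng(0)
safe = rng.normal(10.0, 1.0, 10_000)    # low variance
risky = rng.normal(10.0, 8.0, 10_000)   # same mean, occasional disasters
print(cvar(safe), cvar(risky))          # the risky policy has a much worse CVaR
```

A risk-sensitive agent maximizes a measure like CVaR instead of the plain expectation, trading a little average performance for protection against rare catastrophic outcomes.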
I participated in a student exchange program at Johannes Kepler University in Linz. During my stay, I focused on deep learning, reinforcement learning, natural language processing, and probabilistic models.
During high school, I realized the importance of being proficient in English for further studies in computer science, so I passed a Cambridge certification at the C1 level.
During high school, I attended programming courses by Johns Hopkins University to improve my coding skills. I was enthusiastic about the opportunity and finished both of them with top grades.
Here are the certificates: Introductory and Advanced.
While I am generally interested in various neural architectures such as GNNs or transformers, I believe that the majority of progress in the field will come from finding better forms of training.
We already have clever training techniques, such as reinforcement learning, self-training, adversarial training, self-play, and contrastive training, but I think there are undiscovered gems, and we should keep searching for new ways to exploit "self-supervision."
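To make "self-supervision" concrete, here is a minimal sketch of masked-token prediction (BERT-style), where the training labels come from the data itself. Everything here, the toy model, sizes, and masking rate, is made up purely for illustration:

```python
import torch
import torch.nn.functional as F

VOCAB, DIM, MASK_ID = 1000, 64, 0

embed = torch.nn.Embedding(VOCAB, DIM)
encoder = torch.nn.TransformerEncoder(
    torch.nn.TransformerEncoderLayer(d_model=DIM, nhead=4, batch_first=True),
    num_layers=2,
)
head = torch.nn.Linear(DIM, VOCAB)
params = [*embed.parameters(), *encoder.parameters(), *head.parameters()]
opt = torch.optim.Adam(params, lr=3e-4)

tokens = torch.randint(1, VOCAB, (8, 32))           # a batch of "sentences"
mask = torch.rand(tokens.shape) < 0.15              # hide 15% of the tokens
corrupted = tokens.masked_fill(mask, MASK_ID)

logits = head(encoder(embed(corrupted)))            # predict every position...
loss = F.cross_entropy(logits[mask], tokens[mask])  # ...but score only masked ones
loss.backward()
opt.step()
```

The same idea underlies contrastive training: corrupt or transform the data, then train the network to undo or recognize the transformation, with no human labels involved.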
I find RL an especially interesting part of ML. In fact, I wrote my bachelor's thesis in the field, for which I received the Dean's Award. It is a theoretical work, and I would like to focus on practical applications in the future.
In my opinion, there are two views on reinforcement learning. The first is that the goal is to solve a strategic problem, and the neural net is a means towards it. The second is that training the NN is the goal, and formulating the problem as a reinforcement learning problem is the means. I think the first view is the common one in the community, but the second will have a greater impact, at least in the foreseeable future. InstructGPT is a good example of this.
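Here is a sketch of the second view under toy assumptions (the model, vocabulary, and reward function are all hypothetical stand-ins): there is no environment to beat; a reward model simply scores the network's outputs, and a policy-gradient update (plain REINFORCE here, not the PPO setup InstructGPT actually uses) shapes the network:

```python
import torch

VOCAB, DIM, LEN = 100, 32, 10
embed = torch.nn.Embedding(VOCAB, DIM)
policy = torch.nn.GRU(DIM, DIM, batch_first=True)
head = torch.nn.Linear(DIM, VOCAB)
params = [*embed.parameters(), *policy.parameters(), *head.parameters()]
opt = torch.optim.Adam(params, lr=1e-3)

def reward(seq):
    # Stand-in for a learned preference model: reward even token ids.
    return (seq % 2 == 0).float().mean(dim=1)

tokens = torch.zeros(16, 1, dtype=torch.long)    # start symbols
log_probs = []
for _ in range(LEN):                             # sample sequences autoregressively
    hidden, _ = policy(embed(tokens))
    dist = torch.distributions.Categorical(logits=head(hidden[:, -1]))
    nxt = dist.sample()
    log_probs.append(dist.log_prob(nxt))
    tokens = torch.cat([tokens, nxt.unsqueeze(1)], dim=1)

# REINFORCE: push up the log-probability of sequences the reward model liked.
r = reward(tokens[:, 1:])
loss = -(torch.stack(log_probs, dim=1).sum(dim=1) * (r - r.mean())).mean()
loss.backward()
opt.step()
```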
Large neural models have shown promising results in understanding human text. I think we are still at the beginning, and we are going to see more progress in the future.
I believe there are several limiting factors at the moment that need to be addressed (and we are getting there). First, standard training objectives, such as supervised categorical cross-entropy on human text, are too naive; reinforcement learning and adversarial nets might help. Second, we need to find ways of scaling models without increasing the compute and memory requirements during inference; sparse models are a good direction. Third, we need to make language models grounded in the real world, not just in text from the internet; pretraining multimodal networks that can process text, images, audio (code, etc.) is probably a good starting point.
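On the second point, a toy top-1 mixture-of-experts layer shows the core trick behind sparse models. This is an illustrative minimum I made up for the example, with a hard argmax router that real systems replace with a learned, differentiable gate:

```python
import torch

# Many experts store lots of parameters, but each token is routed to only ONE
# of them, so inference compute stays roughly flat as experts are added.
DIM, N_EXPERTS = 64, 8
experts = torch.nn.ModuleList(torch.nn.Linear(DIM, DIM) for _ in range(N_EXPERTS))
router = torch.nn.Linear(DIM, N_EXPERTS)

def moe_layer(x):                       # x: (tokens, DIM)
    choice = router(x).argmax(dim=-1)   # top-1 routing decision per token
    out = torch.empty_like(x)
    for i, expert in enumerate(experts):
        picked = choice == i
        if picked.any():
            out[picked] = expert(x[picked])  # each token runs one expert
    return out

x = torch.randn(16, DIM)
y = moe_layer(x)                        # 8x the parameters, ~1x the FLOPs per token
```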
While there are many successful applications of deep learning in computer vision, I still think there is room for improvement.
For example, we still have limited control over generative models, and object detectors have trouble detecting large numbers of small, overlapping objects. But I think the most important direction in computer vision is expanding the input domain: we need good pretrained models for 3D data (point clouds), sound, infrared cameras, and other modalities.
I like running because I can listen to audiobooks or just clear my mind.
I find our collaboration with Marek very beneficial and pleasant, on both a professional and a human level. Marek is helpful, proactive, and honest. He always thinks in the context of the project he's currently involved in, and that's why he brings great value to our team.
Marek Kadlčík fitted into our team quickly. His attitude towards work is enterprising and creative, with a healthy sense of humor. Cooperation with him is both pleasant and productive.
You put the self-paced nature of the course to good use, running through the work smoothly, with virtually no hitches, and in an excellent time frame.
It was a real pleasure for me to guide you through this course, and I wish you a lot of success and happiness in the pursuit of your future professional goals. I hope you will keep your curiosity, playfulness, and professionalism.