Alexey Skrynnik
PhD, Research Scientist at AIRI
Moscow, Russia
As a Research Scientist with a PhD in Computer Science, my expertise centers on Artificial Intelligence and Machine Learning, particularly applied Reinforcement Learning (RL) and Multi-Agent Systems. My work includes developing advanced RL algorithms and exploring the synergy between Planning and Learning. Notably, I’ve developed several state-of-the-art methods for decentralized multi-agent pathfinding, including Follower, MATS-LP, and Switcher, as well as the POGEMA environment for evaluating such methods.
My contributions to hierarchical RL, particularly within embodied environments such as Minecraft, were highlighted by the ForgER approach that secured first place in the NeurIPS 2019 MineRL Diamond competition.
Furthermore, I’ve been leading efforts to combine Natural Language Processing (NLP) with RL to improve language-driven task solving, highlighted by my role in directing the RL track of the IGLU competition at NeurIPS 2021/2022.
News
Jan 18, 2024 | I’m thrilled to announce that two of my submissions to AAAI 2024 have been accepted. The paper “Learn to Follow: Decentralized Lifelong Multi-Agent Pathfinding via Planning and Learning” will be given as an oral presentation, and the paper “Decentralized Monte Carlo Tree Search for Partially Observable Multi-Agent Pathfinding” will be presented in a poster session.
Selected publications
- Learn to Follow: Decentralized Lifelong Multi-Agent Pathfinding via Planning and Learning. In Proceedings of the AAAI Conference on Artificial Intelligence, 2024
- Decentralized Monte Carlo Tree Search for Partially Observable Multi-Agent Pathfinding. In Proceedings of the AAAI Conference on Artificial Intelligence, 2024
- When to Switch: Planning and Learning for Partially Observable Multi-Agent Pathfinding. IEEE Transactions on Neural Networks and Learning Systems, 2023
- Interactive Grounded Language Understanding in a Collaborative Environment: Retrospective on IGLU 2022 Competition. In NeurIPS 2022 Competition Track, 2023
- Pathfinding in Stochastic Environments: Learning vs Planning. PeerJ Computer Science, 2022
- Interactive Grounded Language Understanding in a Collaborative Environment: IGLU 2021. In NeurIPS 2021 Competitions and Demonstrations Track, 2022
- Hybrid Policy Learning for Multi-Agent Pathfinding. IEEE Access, 2021
- Forgetful Experience Replay in Hierarchical Reinforcement Learning from Expert Demonstrations. Knowledge-Based Systems, 2021
- Hierarchical Deep Q-Network from Imperfect Demonstrations in Minecraft. Cognitive Systems Research, 2021