VISION & LEARNING FOR AUTONOMOUS AI LAB


Welcome to the Vision & Learning for Autonomous AI (VL4AI) Lab.

Our research addresses the theoretical foundations and practical applications of computer vision and machine learning that enable an embodied AI to perceive, predict and interact with the dynamic environment around it. We are interested in discovering and developing the fundamental principles, algorithms and practical implementations needed to solve high-level visual perception problems such as:

  • Object and scene understanding and reconstruction,
  • Prediction and reasoning about human motion, activity and behaviour in the presence of physical and social interactions with the environment and other people, respectively.

Our overarching aim is to develop an end-to-end perception system that allows an embodied agent to learn, perceive and act simultaneously through interaction with the dynamic world.

Some of our seminal works:

  • Generalized Intersection over Union: A Metric and A Loss for Bounding Box Regression, CVPR 2019 (a short computational sketch appears below)
  • SoPhie: An Attentive GAN for Predicting Paths Compliant to Social and Physical Constraints, CVPR 2019
  • Social-BiGAT: Multimodal Trajectory Forecasting using Bicycle-GAN and Graph Attention Networks, NeurIPS 2019
  • Online multi-target tracking using recurrent neural networks, AAAI 2017
  • Joint probabilistic data association revisited, ICCV 2015

Check our publications for a complete list.
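
For readers unfamiliar with the GIoU metric introduced in the first paper above, the minimal sketch below illustrates how GIoU can be computed for two axis-aligned boxes; the corresponding training loss in the paper is 1 - GIoU. The function name, box format (x1, y1, x2, y2) and example values are our own illustrative assumptions, not the paper's released code.

    # Minimal sketch of Generalized IoU (GIoU) for two axis-aligned boxes.
    # Boxes are assumed to be (x1, y1, x2, y2) with x1 < x2 and y1 < y2;
    # names and box format are illustrative, not taken from the paper's code.

    def giou(box_a, box_b):
        ax1, ay1, ax2, ay2 = box_a
        bx1, by1, bx2, by2 = box_b

        # Areas of the two boxes.
        area_a = (ax2 - ax1) * (ay2 - ay1)
        area_b = (bx2 - bx1) * (by2 - by1)

        # Intersection area (zero if the boxes do not overlap).
        iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
        ih = max(0.0, min(ay2, by2) - max(ay1, by1))
        inter = iw * ih
        union = area_a + area_b - inter
        iou = inter / union

        # Smallest enclosing box C around both boxes.
        cw = max(ax2, bx2) - min(ax1, bx1)
        ch = max(ay2, by2) - min(ay1, by1)
        area_c = cw * ch

        # GIoU = IoU - |C \ (A ∪ B)| / |C|; the training loss is 1 - GIoU.
        return iou - (area_c - union) / area_c

    # Example: two partially overlapping unit-area-4 boxes.
    print(giou((0, 0, 2, 2), (1, 1, 3, 3)))   # IoU = 1/7; GIoU = 1/7 - 2/9 ≈ -0.079

Unlike plain IoU, GIoU remains informative (and differentiable as a loss) even when the boxes do not overlap, which is what makes it usable as a bounding-box regression loss.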



Recent News


October, 2021


One paper is accepted to TPAMI 2022.

Our deep set prediction paper has been accepted to TPAMI 2022. Congratulations to Alan (Tianyu), Farbod and Roman. Read the paper here.



July, 2021


Our ODAM paper is accepted to ICCV 2021 as an oral presentation

Congratulations to Kejie. Read the paper here.



July, 2021


Our TRiPOD paper is accepted to ICCV 2021

Congratulations to Vida and Mahsa. Read the paper here.



July, 2021


One paper is accepted to IEEE TSP

Congratulations to Hoa. Read the paper here.



July, 2021


We have one paper accepted to RA-L (IEEE Robotics and Automation Letters) & IROS 2021

Congratulations to Ehsan and Ali. Read the paper here.