While shortlisting potential supervisors for my Ph.D., given my interest in autonomous driving, I thought it would be easiest to find people working in this domain through the datasets they use. Here are some of the datasets I found.
Released by Audi, the Audi Autonomous Driving Dataset (A2D2) supports startups and academic researchers working on autonomous driving. The dataset includes over 41,000 frames labeled with 38 features. Around 2.3 TB in total, A2D2 is split by annotation type (i.e. semantic segmentation, 3D bounding boxes). In addition to labelled data, A2D2 provides unlabelled sensor data (~390,000 frames) for sequences with several loops.
Part of the Apollo project for autonomous driving, ApolloScape is an evolving research project that aims to foster innovation across all aspects of autonomous driving, from perception to navigation and control. Via their website, users can explore a variety of simulation tools, over 100K street-view frames, 80K LiDAR point clouds, and 1,000 km of trajectories for urban traffic.
Argoverse is made up of two datasets designed to support autonomous vehicle machine learning tasks such as 3D tracking and motion forecasting. Collected by a fleet of autonomous vehicles in Pittsburgh and Miami, the dataset includes 3D tracking annotations for 113 scenes and over 324,000 unique vehicle trajectories for motion forecasting. Among modern open-source AV datasets, Argoverse is the only one that provides forward-facing stereo imagery.
Also known as BDD 100K, the DeepDrive dataset gives users access to 100,000 annotated videos and 10 tasks to evaluate image recognition algorithms for autonomous driving. The dataset represents more than 1000 hours of driving experience with more than 100 million frames, as well as information on geographic, environmental, and weather diversity.
CityScapes is a large-scale dataset focused on the semantic understanding of urban street scenes in 50 German cities. It features semantic, instance-wise, and dense pixel annotations for 30 classes grouped into 8 categories. The entire dataset includes 5,000 annotated images with fine annotations, and an additional 20,000 annotated images with coarse annotations.
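The class-to-category grouping can be sketched as a simple lookup. The mapping below follows the published Cityscapes label definitions, but it is a partial, illustrative dict (a few of the 30 classes), not the official devkit:

```python
# Partial sketch of the CityScapes class -> category grouping: 30 classes
# fall into 8 categories. Only a handful of classes are shown here.
CLASS_TO_CATEGORY = {
    "road": "flat",
    "sidewalk": "flat",
    "building": "construction",
    "wall": "construction",
    "traffic light": "object",
    "vegetation": "nature",
    "sky": "sky",
    "person": "human",
    "rider": "human",
    "car": "vehicle",
    "bicycle": "vehicle",
}

print(CLASS_TO_CATEGORY["sidewalk"])  # flat
```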
This dataset includes 33 hours of commute time recorded on Highway 280 in California. Each 1-minute scene was captured on a 20 km section of highway driving between San Jose and San Francisco. The data was collected using comma EONs, each of which features a road-facing camera, phone GPS, thermometers, and a 9-axis IMU.
Published by Google in 2018, the Landmarks dataset is divided into two sets of images to evaluate recognition and retrieval of human-made and natural landmarks. The original dataset contains over 2 million images depicting 30 thousand unique landmarks from across the world. In 2019, Google published Landmarks-v2, an even larger dataset with 5 million images and 200k landmarks.
First released in 2012 by Geiger et al., the KITTI dataset was created with the intent of advancing autonomous driving research through a novel set of real-world computer vision benchmarks. One of the first ever autonomous driving datasets, KITTI boasts over 4,000 academic citations and counting.
KITTI provides 2D, 3D, and bird’s eye view object detection datasets, 2D object and multi-object tracking and segmentation datasets, road/lane evaluation detection datasets, both pixel and instance-level semantic datasets, as well as raw datasets.
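The object detection labels in KITTI follow a fixed per-line layout documented in the official object development kit. As a hedged sketch, a single label line can be parsed like this (the sample line below is illustrative, not taken from the dataset):

```python
# Parse one line of a KITTI-style 3D object label file.
# Field layout follows the published KITTI object development kit.

def parse_kitti_label(line):
    """Parse one KITTI object label line into a dict."""
    f = line.split()
    return {
        "type": f[0],                              # e.g. Car, Pedestrian, Cyclist
        "truncated": float(f[1]),                  # 0 (visible) .. 1 (fully truncated)
        "occluded": int(f[2]),                     # 0..3 occlusion level
        "alpha": float(f[3]),                      # observation angle [-pi, pi]
        "bbox_2d": [float(v) for v in f[4:8]],     # left, top, right, bottom (px)
        "dimensions": [float(v) for v in f[8:11]], # height, width, length (m)
        "location": [float(v) for v in f[11:14]],  # x, y, z in camera coords (m)
        "rotation_y": float(f[14]),                # yaw around the camera Y axis
    }

sample = "Car 0.00 0 -1.58 587.01 173.33 614.12 200.12 1.65 1.67 3.64 -0.65 1.71 46.70 -1.59"
obj = parse_kitti_label(sample)
print(obj["type"], obj["location"])
```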
Launched in 2021, Leddar PixSet is a new, publicly available dataset for autonomous driving research and development that contains data from a full AV sensor suite (cameras, LiDARs, radar, IMU), and includes full-waveform data from the Leddar Pixell, a 3D solid-state flash LiDAR sensor. The dataset contains 29k frames in 97 sequences, with more than 1.3M 3D boxes annotated.
Published by popular rideshare app Lyft, the Level5 dataset is another great source of autonomous driving data. It includes over 55,000 human-labeled 3D annotated frames, a surface map, and an underlying HD spatial semantic map, captured by 7 cameras and up to 3 LiDAR sensors, which can be used to contextualize the data.
Developed by Motional, the nuScenes dataset is one of the largest open-source datasets for autonomous driving. Recorded in Boston and Singapore using a full sensor suite (a 32-beam LiDAR, six cameras providing 360° coverage, and radars), the dataset contains over 1.44 million camera images capturing a diverse range of traffic situations, driving maneuvers, and unexpected behaviors.
The Oxford RobotCar Dataset contains over 100 recordings of a consistent route through Oxford, UK, captured over a period of over a year. The dataset captures many different environmental conditions, including weather, traffic and pedestrians, along with longer term changes such as construction and roadworks.
PandaSet was the first open-source AV dataset available for both academic and commercial use. It contains 48,000 camera images, 16,000 LiDAR sweeps, 28 annotation classes, and 37 semantic segmentation labels taken from a full sensor suite.
Online education platform Udacity has open sourced access to a variety of projects for autonomous driving, including neural networks trained to predict steering angles of the car, camera mounts, and dozens of hours of real driving data.
The Waymo Open dataset is an open-source multimodal sensor dataset for autonomous driving. Extracted from Waymo self-driving vehicles, the data covers a wide variety of driving scenarios and environments. It contains 1,000 driving segments, each capturing 20 seconds of continuous driving, corresponding to 200,000 frames at 10 Hz per sensor.
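The frame count follows directly from the quoted segment count, clip length, and sensor rate; a quick sanity check:

```python
# Sanity check on the Waymo Open figures quoted above:
# 1,000 segments x 20 s each x 10 Hz = frames per sensor.
segments = 1000
seconds_per_segment = 20
frames_per_second = 10

total_frames = segments * seconds_per_segment * frames_per_second
print(total_frames)  # 200000
```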
The Astyx Dataset HiRes2019 is a popular automotive radar dataset for deep learning-based 3D object detection. It was open-sourced to provide high-resolution radar data to the research community, facilitating and stimulating research on algorithms using radar sensor data. Radar-centric, it also includes lidar and camera data for 3D object detection; it is more than 350 MB in size and consists of 546 frames.
Open Images V5 is a dataset consisting of more than nine million images annotated with labels spanning thousands of object categories. The Open Images V5 dataset features segmentation masks for 2.8 million object instances in 350 categories. The dataset includes 2.68M segmentation masks on the training set and 99k masks on the validation and test sets, as well as 36.5M image-level labels spanning over 20k categories.
The Boxy Dataset by Bosch is a large vehicle detection dataset with almost two million annotated vehicles for training and evaluating object detection methods for self-driving cars on freeways. It has 200,000 images and 1,990,000 annotated vehicles at 5-megapixel resolution. The data covers conditions ranging from sunshine to rain, dusk, and night, and from clear freeways to heavy traffic and traffic jams.
Ford presents a challenging multi-agent seasonal dataset collected by a fleet of its autonomous vehicles on different days and at different times during 2017–18. The vehicles were manually driven on a route in Michigan that included a mix of driving scenarios: the Detroit Airport, freeways, city centers, a university campus, and suburban neighborhoods.
The dataset captures the seasonal variation in weather, lighting, construction, and traffic conditions experienced in dynamic urban environments, and can help in designing robust algorithms for autonomous vehicles and multi-agent systems. Each log is time-stamped and contains raw data from all the sensors, calibration values, pose trajectory, ground-truth pose, and 3D maps. All data is available in Rosbag format and can be visualized, modified, and played back using the open-source Robot Operating System (ROS).
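Working with a time-stamped multi-sensor log like this typically means joining sensor readings to the pose trajectory by timestamp. The sketch below is illustrative only (not the Ford devkit API, and the timestamps and pose records are made up):

```python
# Illustrative nearest-timestamp join: align a sensor reading's timestamp
# to the closest entry in a time-stamped pose trajectory.
import bisect

def nearest_pose(pose_times, pose_values, t):
    """Return the pose whose timestamp is closest to t.

    pose_times must be sorted ascending; pose_values is parallel to it.
    """
    i = bisect.bisect_left(pose_times, t)
    candidates = [j for j in (i - 1, i) if 0 <= j < len(pose_times)]
    best = min(candidates, key=lambda j: abs(pose_times[j] - t))
    return pose_values[best]

pose_times = [0.0, 0.1, 0.2, 0.3]        # pose trajectory timestamps (s)
pose_values = ["p0", "p1", "p2", "p3"]   # placeholder pose records

print(nearest_pose(pose_times, pose_values, 0.26))  # p3
```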
The CADC dataset aims to promote research to improve self-driving in adverse weather conditions, and is the first public dataset to focus on real-world driving data in snowy weather. Routes were chosen with various levels of traffic and a variety of vehicles, and always with snowfall; sequences were selected from data collected within the Region of Waterloo, Canada.
This dataset was collected with a robocar (in human driving mode, of course), equipped with eleven heterogeneous sensors, in the downtown (for long-term data) and suburban (for roundabout data) areas of Montbéliard in France. The vehicle speed was limited to 50 km/h in accordance with French traffic rules. For the long-term data, the driving distance is about 5.0 km (containing a small and a big road loop for loop-closure purposes) and the length of recorded data is about 16 minutes for each collection round. For the roundabout data, the driving distance is about 4.2 km (containing 10 roundabouts of various sizes) and the length of recorded data is about 12 minutes for each collection round. In addition to enjoying the typical scenery of eastern France, users can also observe the daily and seasonal changes in the city.
In recent years, High Definition (HD) maps have become the core dependency for various state-of-the-art autonomous driving stacks; however, these often require manually defined centimeter-level annotations in order for an autonomous vehicle to identify feasible paths. To motivate research into lightweight and fully automated map representations, the authors present NominalScenes.
AVL leverages road furniture for automatic camera calibration. This dataset contains over 3,100 stop signs that were used to develop their automatic intrinsic calibration pipeline.
The dataset was recorded by placing an HD camera in a car driving around the Surrey countryside. The dataset contains about 30 minutes of driving. The video is 1920x1080 in colour, encoded with the H.264 codec. Steering is estimated by tracking markers on the steering wheel. The car's speed is estimated by applying OCR to the car's speedometer (though the accuracy of this method is not guaranteed).
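Recovering a steering angle from tracked wheel markers amounts to measuring the marker's rotation about the wheel hub. The snippet below is a hypothetical illustration of that geometry (the hub and marker coordinates are invented, and this is not the dataset authors' actual pipeline):

```python
# Illustrative geometry: the angle of a tracked steering-wheel marker
# around the wheel hub, measured in image coordinates.
import math

def marker_angle(center, marker):
    """Angle of the marker around the wheel center, in degrees."""
    dx = marker[0] - center[0]
    dy = marker[1] - center[1]
    return math.degrees(math.atan2(dy, dx))

center = (320.0, 240.0)      # hypothetical wheel hub position (px)
marker = (320.0, 140.0)      # marker directly above the hub (image y grows down)

print(marker_angle(center, marker))  # -90.0
```

Differencing this angle between frames (relative to a calibrated zero position) would give the steering deflection over time.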
It was recorded by Autoliv Inc. (www.autoliv.com). The video contains the driver's view through the windscreen for approximately 3 hours of driving in the vicinity of Stockholm. The video is 900x244 and greyscale. All frames are labelled according to driving context and the driver's actions.
It was recorded by Liam Ellis and Nicolas Pugeault at the University of Linköping, using a remote-controlled car with a rotating camera driving around two tracks: an O-shape and a P-shape. For each track, the data consists of a number of runs in one direction or the other around the track.
I am currently working on an experiment and waiting on a conference paper submission result, and I hope to write about it soon.
Some more datasets are available here:
Open Datasets - Scale
Best Open-Source Autonomous Driving Datasets - SiaSearch
Top 10 Popular Datasets For Autonomous Driving Projects
Top 12 Popular Autonomous Driving Datasets That Can Get You Started Immediately
Canadian Adverse Driving Conditions Dataset
EU Long-term Dataset with Multiple Sensors for Autonomous Driving
Datasets - Autonomous Vehicle Laboratory