CVGL Camera Calibration Dataset
A necessity for my thesis
I have been an MS student at FAST-NUCES for the last 3.5 years and an MS thesis student at the Computer Vision and Graphics Lab (CVGL), LUMS for 3 years, hence the title.
So, I had my final exam on the 20th, and I have my final thesis presentation on Monday. I have been working on Camera Calibration for approximately the last 2 years. It all started with a script that I had used for the Tsinghua-Daimler Dataset while working on Cyclist Detection. What the script did was as follows:
Transform image coordinates to camera coordinates and then to world coordinates.
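The script itself isn't shown here, but the underlying math is the standard pinhole back-projection. A minimal NumPy sketch, assuming known intrinsics K, extrinsics (R, t) mapping world to camera, and a known depth for the pixel (the function name and all values are illustrative, not taken from the original script):

```python
import numpy as np

def image_to_world(u, v, depth, K, R, t):
    """Back-project pixel (u, v) with a known depth to camera and world coordinates.

    K : 3x3 intrinsic matrix
    R, t : rotation (3x3) and translation (3,) mapping world -> camera
    depth : distance along the camera's z-axis (assumed known here)
    """
    # Pixel -> normalized ray, scaled by depth -> camera coordinates
    pixel = np.array([u, v, 1.0])
    cam = depth * (np.linalg.inv(K) @ pixel)
    # Camera -> world: invert X_cam = R @ X_world + t
    world = R.T @ (cam - t)
    return cam, world

# Hypothetical values, purely for illustration
K = np.array([[800.0, 0.0, 400.0],
              [0.0, 800.0, 300.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.array([0.0, 0.0, 1.5])
cam_xyz, world_xyz = image_to_world(512, 256, 10.0, K, R, t)
```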
I wasn’t able to propose a solution for the detection of small objects, so I decided to work on Camera Calibration instead. The idea was as follows:
Incorporate mathematical equations into a CNN to predict calibration parameters.
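The actual architecture is the subject of the thesis and paper; purely as an illustration of the regression setup, a CNN that maps an image to a vector of 13 calibration parameters could be sketched as below. The backbone and layer sizes here are placeholders, not the model actually used:

```python
import torch
import torch.nn as nn

class CalibNet(nn.Module):
    """Toy CNN that regresses 13 calibration parameters from an RGB image."""
    def __init__(self, num_params=13):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global pooling -> one 128-d vector per image
        )
        self.head = nn.Linear(128, num_params)

    def forward(self, x):
        x = self.features(x).flatten(1)
        return self.head(x)

model = CalibNet()
params = model(torch.randn(1, 3, 600, 800))  # -> tensor of shape (1, 13)
```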
I have previously written about my Thesis Idea here:
Now, in order to train a CNN, we needed a dataset containing all the required parameters, which were 13 in our case. I looked into the literature and found that no dataset fulfilling our requirements was available, so we decided to collect our own, with the main focus on having multiple camera configurations. We used the CARLA simulator to collect the dataset. Specifically, the following data collector:
We had 2 towns available and decided to have 25 configurations from each town for our experiments. I had collected 50 configurations, but one episode from a town was removed by mistake, so Town 1 has 25 configurations while Town 2 has 24.
For each episode, the following was used to generate the required values to be fed into the data generator:
An example of CARLA simulator settings for an episode is as follows:
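Roughly, such a per-episode configuration with the CARLA 0.8 Python client looks like the sketch below; the numbers, camera pose, and attribute values are illustrative placeholders, not the settings actually used for the dataset:

```python
from carla.settings import CarlaSettings
from carla.sensor import Camera

# Episode-level settings (values are illustrative only)
settings = CarlaSettings()
settings.set(
    SynchronousMode=True,
    SendNonPlayerAgentsInfo=True,
    NumberOfVehicles=20,
    NumberOfPedestrians=40,
    WeatherId=1)

# One camera configuration: image size, field of view, and mounting pose
camera = Camera('CameraRGB')
camera.set_image_size(800, 600)
camera.set(FOV=90.0)
camera.set_position(0.30, 0.0, 1.30)   # x, y, z in meters relative to the car
camera.set_rotation(0.0, 0.0, 0.0)     # rotation angles in degrees (zero here)
settings.add_sensor(camera)
```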
The dataset format used for experiments can be accessed here:
The code can be accessed here:
The paper can be accessed here.
I think that’s it for now. I believe I will soon be starting my Ph.D., as I have an interview on Tuesday. Let’s see what happens.