I have completed Term 1 of the autonomous driving engineer course offered by the online learning platform Udacity, so I would like to share my impressions.
Udacity is a MOOC platform like Coursera and edX, offering courses such as the autonomous driving engineer course, an AI course, and a full-stack engineer course. The difference from the other MOOCs is that Coursera and the rest are rather knowledge-based, while Udacity is project-based. Lectures in the autonomous driving course are provided in cooperation with Mercedes-Benz and other companies, so you can learn the latest technology.
This is a program Udacity started around November 2016; it lets you acquire the skills needed to become an autonomous driving engineer in nine months. It is divided into three terms, Term 1 through Term 3, each lasting three months. In the three months of Term 1 you learn computer vision and deep learning and complete five projects. The language used in Term 1 is Python; from Term 2 onward, C++ is mainly used. As of 2018, there seems to be no program other than this course that lets you study autonomous driving technologies this comprehensively. In this course you can study everything from image recognition to localization, control, path planning, and driving an actual vehicle.
(I recently checked, and the program now appears to be a two-term system instead of three. The removed content includes model predictive control in the control area, road-area segmentation with deep learning, the lane assist system, and functional safety.)
Below is a brief outline of the five projects; I plan to explain each in more detail separately.
1. Finding Lane Lines
This project is an introduction to computer vision: it detects lane lines using the Canny edge detection method. I had never used computer vision before, but I was able to proceed relatively smoothly.
Main libraries used: OpenCV
2. Traffic Sign Classifier
This project identifies road signs using TensorFlow. It is an introduction to deep learning, and you learn about CNNs (convolutional neural networks).
Main libraries used: TensorFlow
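To show what a convolutional layer actually computes, here is a NumPy sketch of a single 2D convolution (strictly, the cross-correlation that CNNs use) with a hand-made edge kernel. This only illustrates the operation; it is not the project's TensorFlow code:

```python
import numpy as np

def conv2d(image, kernel):
    """Single-channel 2D cross-correlation (valid padding, stride 1) --
    the core operation a CNN layer applies, but with learned kernels."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

# A vertical-edge kernel (Sobel-like): responds where intensity changes left -> right.
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)
```

In a CNN the kernel values are not hand-written like `sobel_x` but learned from the labeled sign images during training.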
3. Behavioral Cloning
The task is to drive a course on a simulator, using deep learning, without the car leaving the road. You save the images and steering angles recorded while a person drives the car on the simulator. A CNN (convolutional neural network) is then trained to output the steering angle from the saved image input, and the car drives autonomously on the simulator. Click here for the video (https://www.youtube.com/watch?v=HRvNs1jUHmI&feature=youtu.be).
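The training setup described above is plain supervised regression: camera image in, steering angle out. As a toy illustration of that setup (the real project trains a CNN, not a linear model, and on real driving logs, not synthetic data):

```python
import numpy as np

# Behavioral cloning as supervised regression: input a camera frame,
# target the human driver's recorded steering angle.
rng = np.random.default_rng(0)
n_samples, h, w = 200, 8, 8
images = rng.random((n_samples, h, w))            # stand-in camera frames
true_policy = rng.standard_normal(h * w)          # hidden "driving behavior"
angles = images.reshape(n_samples, -1) @ true_policy  # recorded steering angles

# "Training": fit a model mapping flattened image -> angle.
X = images.reshape(n_samples, -1)
weights, *_ = np.linalg.lstsq(X, angles, rcond=None)
pred = X @ weights
```

The point is only the data flow; the project replaces the least-squares fit with a CNN precisely because real images are far too complex for a linear map.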
4. Advanced Lane Finding
Whereas Project 1 used basic methods, this is the applied version. You learn how to recognize lanes robustly on curves and regardless of the brightness of the road. The video is here
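A core step of this applied project is fitting the detected lane pixels with a second-order polynomial so that curves can be followed; the fitted coefficients also give the curvature radius. A minimal sketch with synthetic pixel positions (the curvature formula is the standard one for a quadratic; the numbers are made up):

```python
import numpy as np

# Lane pixels are fit as x = a*y^2 + b*y + c (x as a function of y,
# since lanes are near-vertical in the image).
ys = np.linspace(0, 100, 50)
xs = 0.002 * ys**2 + 0.1 * ys + 30       # synthetic lane-pixel centers
coeffs = np.polyfit(ys, xs, 2)           # recovers a, b, c

# Radius of curvature at the bottom of the image (y = 100):
a, b, _ = coeffs
y_eval = 100.0
radius = (1 + (2 * a * y_eval + b) ** 2) ** 1.5 / abs(2 * a)
```

In the real project the `(ys, xs)` points come from thresholded, perspective-transformed camera images rather than a formula.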
5. Vehicle Detection
The task is vehicle detection using machine learning. A classifier that recognizes vehicles is trained from features such as Histograms of Oriented Gradients (HOG) and color-space features, and the classifier is used to determine whether a vehicle is present in the image; detected vehicles are then tracked. The video is here
Main libraries used: OpenCV, Scikit-Learn
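The idea above can be sketched as: compute an orientation histogram as a feature and train a linear classifier on it. This is heavily simplified (a real HOG descriptor uses many cells plus block normalization), and the "vehicle"/"non-vehicle" patches here are synthetic stand-ins of my own:

```python
import numpy as np
from sklearn.svm import LinearSVC

def hog_like_features(patch, n_bins=9):
    """Very simplified HOG: one gradient-orientation histogram over the
    whole patch, weighted by gradient magnitude."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)   # unsigned orientation
    hist, _ = np.histogram(ang, bins=n_bins, range=(0, np.pi), weights=mag)
    return hist / (hist.sum() + 1e-6)

# Synthetic data: "vehicle" patches get strong vertical structure,
# "non-vehicle" patches strong horizontal structure.
rng = np.random.default_rng(1)
def make_patch(vertical):
    patch = rng.random((16, 16)) * 0.1
    if vertical:
        patch[:, ::4] = 1.0
    else:
        patch[::4, :] = 1.0
    return patch

X = np.array([hog_like_features(make_patch(v)) for v in [True, False] * 50])
y = np.array([1, 0] * 50)
clf = LinearSVC(C=1.0).fit(X, y)
```

In the project, the trained classifier is then slid across the frame in windows, and overlapping detections are merged and tracked over time.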
This video is a combination of projects 4 and 5.
The above are the five projects in Term 1; the content ranges from computer vision to machine learning to deep learning. It was quite hard to complete all of this in three months, and I spent at least about 15 hours a week. Each term also costs $800, so the total cost is $2,400. Was it worth the time and money? I think it was. I had hardly dealt with image recognition before, but in three months I became able to perform vehicle detection and lane detection, and my knowledge of deep learning deepened. And if it lets you work as an autonomous driving engineer, $2,400 is a worthwhile investment.
Currently, I am in the middle of Term2, and I would like to write my impressions about that as well.
Free lectures on autonomous driving include:
Deep Learning for Self-Driving Cars: an MIT lecture series, five lectures in total, on methods using deep learning and deep reinforcement learning. It starts from the basics of machine learning and deep learning.
Introduction to Mobile Robotics - SS 2016: a lecture series at the University of Freiburg in Germany. The lectures themselves are in English, so German is not required, but subtitles are not available. Most of the content concerns the sensors installed in autonomous vehicles; for example, it covers the Kalman filter used when integrating sensor information (sensor fusion). There are also detailed explanations of SLAM (a method of simultaneously estimating the vehicle's position and building a map of the environment) and path planning.
The content is close to that of <a target="_blank" href="https://www.amazon.co.jp/gp/product/4839952981/ref=as_li_tl?ie=UTF8&camp=247&creative=1211&creativeASIN=4839952981&linkCode=as2&tag=ryo519-22&linkId=270304ba4e939b62254841b1b1">Probabilistic Robotics (Premium Books Version)</a>.
iLectureonline: a lecture series on the Kalman filter used for sensor fusion. Personally, I think it is the most detailed and easiest-to-understand lecture series on the Kalman filter.
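For reference, the simplest case such lectures build up to, a 1D Kalman filter tracking a roughly constant value from noisy measurements, fits in a few lines. The parameter values below are arbitrary choices of mine:

```python
import numpy as np

def kalman_1d(measurements, q=1e-4, r=0.01, x0=0.0, p0=1.0):
    """Minimal 1D Kalman filter with a constant-state model:
    q = process noise, r = measurement noise, (x0, p0) = initial belief."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p = p + q                  # predict: uncertainty grows over time
        k = p / (p + r)            # Kalman gain: trust in the measurement
        x = x + k * (z - x)        # update with the measurement residual
        p = (1 - k) * p            # uncertainty shrinks after the update
        estimates.append(x)
    return np.array(estimates)
```

With multi-dimensional states (position plus velocity, multiple sensors) the scalars become vectors and matrices, which is exactly the sensor-fusion case the course and these lectures cover.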