On-the-fly mapping got the driverless car through a rainy day
Engineering student Manuel Dangel of the Swiss Federal Institute of Technology (ETH) in Zurich and his teammates were walking the racecourse at Formula Student Driverless at the Hockenheimring, Germany, earlier this month when they realized that the computerized wheelbarrow they were using to map the course had gone haywire. [See “Students Race Driverless Cars in Germany in Formula Student Competition,” 16 August 2017.]
As part of the track-drive event, one of several events that make up the entire competition, the rules permit teams half an hour to walk the racecourse and make measurements they might need to program their driverless cars. Because the track-drive event consists of ten solo laps on the same, unchanging course among traffic cones, “the basic strategy is to run within the map,” Dangel says. If you cannot make a map before the event, though, you have to switch to a more complex strategy.
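The “run within the map” strategy can be sketched in a few lines: the cone positions surveyed during the track walk become a fixed list of waypoints, and on every lap the car steers toward the next one. This is only an illustrative toy, not the ETH team’s actual controller; the function name, waypoint list, and pose format are invented for the example.

```python
import math

def steering_to_waypoint(pose, waypoint):
    """Heading error (radians) from the car's pose (x, y, heading)
    to the next pre-mapped waypoint (x, y)."""
    x, y, heading = pose
    wx, wy = waypoint
    desired = math.atan2(wy - y, wx - x)
    # Wrap the error into [-pi, pi] so the car turns the short way around.
    return (desired - heading + math.pi) % (2 * math.pi) - math.pi

# Cone gates measured during the track walk become a fixed waypoint list;
# on each of the ten identical laps the car simply chases the next point.
track = [(5.0, 0.0), (10.0, 5.0), (10.0, 15.0)]
pose = (0.0, 0.0, 0.0)  # at the origin, facing +x
print(steering_to_waypoint(pose, track[0]))  # → 0.0: first gate is dead ahead
```

Without a usable map, none of this works, which is why losing the mapping wheelbarrow forced the fallback to a harder perceive-as-you-go strategy.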
Read the rest of “The Tech That Won the First Formula Student Driverless Race” from the source: IEEE Spectrum Cars That Think.
Karlijn Willems lists seven steps (and 50+ resources) that will help you get started with machine learning.
You may have heard about machine learning from interesting applications like spam filtering, optical character recognition, and computer vision.
Getting started with machine learning is a long process that involves working through many resources. There are books for newbies, academic papers, guided exercises, and standalone projects. It’s easy to lose track of what you need to learn among all these options.
So in today’s post, I’ll list seven steps (and 50+ resources) that can help you get started in this exciting field of Computer Science, and ramp up toward becoming a machine learning hero.
Read the rest of “How Machines Learn: A Practical Guide – freeCodeCamp” from the source: freeCodeCamp.
First batch of student-built driverless cars choose safety over speed
More than a dozen teams brought driverless cars to the Formula Student competition last week in Hockenheimring, Germany. It was the first event of its type, but many participants were diligent veterans of Formula Student Electric races and had tested their cars at different types of sites leading up to the main event. “We knew from the electric season that testing is really crucial,” says Manuel Dangel, vice-president of the Formula Student Driverless team at the Swiss Federal Institute of Technology (ETH) in Zurich. Then the rain started falling.
“We thought [our car] would basically fail,” Dangel says. Although it had rained on one of their test days, their car’s main way of determining its own ground speed is an optical sensor optimized for dry ground, and the team had never managed to complete a full ten-lap track drive in the rain.
Read the rest of “Students Race Driverless Cars in Germany in Formula Student Competition” from the source: IEEE Spectrum Cars That Think.
I spent more than 20 hours studying and analyzing the best Arduino robot car kits. There is much to say about Arduino, but the most important question is why someone would use this board to control a mobile robot.
As you will see in each kit’s description, some of these can be remotely controlled, while others can be programmed to navigate the environment autonomously. Some kits come with object-detection sensors, while others have a webcam attached to capture images in real time. The conclusion is simple: working with Arduino in robotics is a process that never ends.
There are a few things to consider when choosing an Arduino robot car kit. One of the most basic is documentation: the vast majority of these kits include some kind of manual or assembly instructions. In addition, each of the kits below offers a different set of features.
Read the rest of “A List of the Best Arduino Robot Car Kits” from the original source: Into Robotics.
Almost all robocars use maps to drive. Not the basic maps you find in your phone navigation app, but more detailed maps that help them understand where they are on the road, and where they should go. These maps will include full details of all lane geometries, positions and meaning of all road signs and traffic signals, and also details like the texture of the road or the 3-D shape of objects around it. They may also include potholes, parking spaces and more.
The maps perform two functions. By holding a representation of the road texture or surrounding 3D objects, they let the car figure out exactly where it is on the map without much use of GPS. A car scans the world around it, and looks in the maps to find a location that matches that scan. GPS and other tools help it not have to search the whole world, making this quick and easy.
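The scan-and-match idea can be shown with a toy example: treat the map as a grid of road-texture values, slide a small local scan over a window around a coarse GPS guess, and keep the offset where the match is best. This is a deliberately simplified sketch, not any vendor’s actual pipeline; the map size, scan size, scoring function, and search window are all made up for illustration.

```python
import numpy as np

def localize(map_img, scan, gps_guess, search=5):
    """Find the (row, col) offset where a local sensor scan best matches
    the map, searching only a small window around a coarse GPS guess."""
    gy, gx = gps_guess
    h, w = scan.shape
    best, best_pos = -np.inf, None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = gy + dy, gx + dx
            patch = map_img[y:y + h, x:x + w]
            # Negative sum of squared differences: 0 at a perfect match.
            score = -np.sum((patch - scan) ** 2)
            if score > best:
                best, best_pos = score, (y, x)
    return best_pos

# Toy "road texture" map; the car's 5x5 scan is a patch cut from it.
rng = np.random.default_rng(0)
map_img = rng.random((40, 40))
scan = map_img[12:17, 17:22].copy()  # what the car "sees" at (12, 17)
print(localize(map_img, scan, gps_guess=(10, 15)))  # → (12, 17)
```

The GPS prior matters here exactly as the article says: it shrinks the search from the whole map to an 11-by-11 window, which is what makes the matching quick.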
Google, for example, uses a 2D map of the texture of the road as seen by LIDAR. (The use of LIDAR means the image is the same night and day.) In this map you see the location of things like curbs and lane markers but also all the defects in those lane markers and the road surface itself. Every crack and repair is visible. Just as you, a human being, will know where you are by recognizing things around you, a robocar does the same thing.
Other providers measure features of the 3D world around the car. By noting where poles, signs, trees, curbs, buildings, and more are located, the car can also figure out where it is. Road texture is very accurate but fails if the road is covered with fresh snow. (3D objects also change shape in heavy snow.)
Once you find out where you are (the problem called ‘localization’) you want a map to tell you where the lanes are so you can drive them. That’s a more traditional computer map, though much more detailed than the typical navigation app map.
Read the rest of this article at the source: Robohub.