Elon Musk famously thinks that cars can be made to drive themselves without relying on expensive laser-ranging lidars. But while Tesla is moving ahead with one fewer sensor than most self-driving car companies, a new startup wants carmakers to add yet another: an infrared camera.
Adasky is developing a far infrared thermal camera called Viper that it says can expand the conditions that automated cars will be able to operate in, and improve safety.
“Today’s sensors are not good enough for fully self-driving cars and that’s where we come in,” says Dror Meiri, vice president of business development at Adasky. “We think infrared (IR) technology can bridge the gap from Level 3 all the way to Levels 4 and 5.”
In this episode, Audrow Nash interviews Chris Gerdes, Professor of Mechanical Engineering at Stanford University, about designing high-performance autonomous vehicles. The idea is to make vehicles safer: as Gerdes says, he wants to “develop vehicles that could avoid any accident that can be avoided within the laws of physics.”
In this interview, Gerdes discusses developing a model for high-performance control of a vehicle; their autonomous race car, an Audi TTS named ‘Shelley,’ and how its autonomous performance compares to amateur and professional race car drivers; and an autonomous, drifting DeLorean named ‘MARTY.’
This is the cheapest good computer vision autonomous car you can make — less than $85! It uses the fantastic OpenMV camera, with its easy-to-use software and IDE, as well as a low-cost chassis that is fast enough for student use. It can follow lanes of any color, as well as objects, faces, and even other cars. This is as close to a self-driving Tesla as you’re going to get for less than $100.
It’s perfect for student competitions, where a number of cars can be built and raced against each other in an afternoon.
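The core trick behind a camera line follower like this is simple: threshold an image row, find the centroid of the lane pixels, and steer proportionally toward it. The sketch below illustrates that idea in plain Python; the pixel values, threshold, and gain are invented for illustration and this is not the actual OpenMV API.

```python
# Minimal sketch of centroid-based lane following, the core idea behind
# camera line followers like the OpenMV racer. Values are illustrative
# assumptions, not the OpenMV API.

def steering_from_row(row, threshold=128, gain=1.0):
    """Return a steering command in [-1, 1] from one image row.

    Pixels brighter than `threshold` are treated as lane marking;
    the command is proportional to the centroid's offset from center.
    """
    lane = [x for x, p in enumerate(row) if p > threshold]
    if not lane:
        return 0.0  # no lane visible: hold course
    center = (len(row) - 1) / 2
    centroid = sum(lane) / len(lane)
    return max(-1.0, min(1.0, gain * (centroid - center) / center))

# A bright lane patch on the right half of a 10-pixel-wide row
# produces a positive (steer-right) command:
row = [0, 0, 0, 0, 0, 0, 255, 255, 255, 0]
print(steering_from_row(row))
```

A real build would run this on every frame (usually on several rows near the bottom of the image) and feed the command to a servo, but the proportional-steering loop is the whole algorithm.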
Read the rest of “A ‘Minimum Viable Racer’ for OpenMV” from the source: DIY Robocars.
On-the-fly mapping got the driverless car through a rainy day
Engineering student Manuel Dangel of Swiss Federal Institute of Technology (ETH) in Zurich and teammates were walking the racecourse at Formula Student Driverless in Hockenheimring, Germany, earlier this month when they realized that the computerized wheelbarrow they were using to map the course had gone haywire. [See “Students Race Driverless Cars in Germany in Formula Student Competition,” 16 August 2017.]
As part of the track-drive event, one of several events that make up the entire competition, the rules permit teams half an hour to walk the racecourse and make measurements they might need to program their driverless cars. Because the track-drive event consists of ten solo laps on the same, unchanging course among traffic cones, “the basic strategy is to run within the map,” Dangel says. If you cannot make a map before the event, though, you have to switch to a more complex strategy.
Karlijn Willems lists seven steps (and 50+ resources) that will help you get started with machine learning.
You may have heard about machine learning from interesting applications like spam filtering, optical character recognition, and computer vision.
Getting started with machine learning is a long process that involves going through several resources. There are books for newbies, academic papers, guided exercises, and standalone projects. It’s easy to lose track of what you need to learn among all these options.
So in today’s post, I’ll list seven steps (and 50+ resources) that can help you get started in this exciting field of Computer Science, and ramp up toward becoming a machine learning hero.
Read the rest of “How Machines Learn: A Practical Guide – freeCodeCamp” from the source: freeCodeCamp.
First batch of student-built driverless cars chooses safety over speed
More than a dozen teams brought driverless cars to the Formula Student competition last week in Hockenheimring, Germany. It was the first event of its type, but many participants were diligent veterans of Formula Student Electric races and had tested their cars at different types of sites leading up to the main event. “We knew from the electric season that testing is really crucial,” says Manuel Dangel, vice-president of the Formula Student Driverless team at the Swiss Federal Institute of Technology (ETH) in Zurich. Then the rain started falling.
“We thought [our car] would basically fail,” Dangel says. While it had rained on one of their test days, their car’s main way of determining its own ground speed is an optical sensor optimized for dry ground. The team had not managed to complete a full ten-lap track drive in the rain.
I spent more than 20 hours studying and analyzing the best Arduino robot car kits. There is much to say about Arduino, but the most important question is why someone would use this board to control a mobile robot.
As you will see in every kit’s description, some of these can be remotely controlled, while others can be programmed to autonomously navigate the environment. Some kits come with object detection sensors, while others have a webcam attached to capture images in real time. The conclusion is a simple one: working with Arduino in robotics is a process that will never end.
There are a few things to consider when choosing an Arduino robot car kit. One of the basics is documentation, and the vast majority of these kits include some kind of manual or assembly instructions. In addition, each of the kits below has its own set of features.
Read the rest of “A List of the Best Arduino Robot Car Kits” from the original source: Into Robotics.
Almost all robocars use maps to drive. Not the basic maps you find in your phone navigation app, but more detailed maps that help them understand where they are on the road, and where they should go. These maps will include full details of all lane geometries, positions and meaning of all road signs and traffic signals, and also details like the texture of the road or the 3-D shape of objects around it. They may also include potholes, parking spaces and more.
The maps perform two functions. By holding a representation of the road texture or surrounding 3D objects, they let the car figure out exactly where it is on the map without much use of GPS. A car scans the world around it, and looks in the maps to find a location that matches that scan. GPS and other tools help it not have to search the whole world, making this quick and easy.
Google, for example, uses a 2D map of the texture of the road as seen by LIDAR. (The use of LIDAR means the image is the same night and day.) In this map you see the location of things like curbs and lane markers but also all the defects in those lane markers and the road surface itself. Every crack and repair is visible. Just as you, a human being, will know where you are by recognizing things around you, a robocar does the same thing.
Some providers measure things about the 3D world around them. By noting where poles, signs, trees, curbs, buildings and more are, you can also figure out where you are. Road texture is very accurate but fails if the road is covered with fresh snow. (3D objects also change shape in heavy snow.)
Once you find out where you are (the problem called ‘localization’) you want a map to tell you where the lanes are so you can drive them. That’s a more traditional computer map, though much more detailed than the typical navigation app map.
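The localization step described above amounts to a search: slide the current sensor scan along the stored map and keep the position where it fits best. Here is a toy one-dimensional sketch of that idea; real systems match 2-D LIDAR intensity images or 3-D point clouds, and the numbers below are invented for illustration.

```python
# Toy 1-D illustration of map-based localization: "where along the
# stored map does my current scan best fit?" Real robocars do this
# against 2-D road-texture maps or 3-D object maps.

def localize(map_profile, scan):
    """Return the map offset where the scan best matches, scored by
    sum of squared differences. That offset is the estimated position;
    GPS would only be used to narrow the search window."""
    best_offset, best_err = 0, float("inf")
    for offset in range(len(map_profile) - len(scan) + 1):
        err = sum((m - s) ** 2
                  for m, s in zip(map_profile[offset:], scan))
        if err < best_err:
            best_offset, best_err = offset, err
    return best_offset

# A road-texture "map" with a distinctive crack pattern at index 4,
# and a scan that sees exactly that pattern:
road_map = [0, 0, 1, 0, 9, 3, 7, 0, 1, 0, 0]
scan = [9, 3, 7]
print(localize(road_map, scan))  # → 4
```

This is also why fresh snow defeats texture-based localization: it rewrites the “map profile” the sensor sees, so no offset matches well.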
Read the rest of this article at the source: Robohub.
Shifting to a longer wavelength that’s safer for the eye lets Luminar raise its lidar power enough to stretch its range beyond 200 meters. Other innovations could cut system costs.
Current automotive lidars scan their surroundings by firing pulses from semiconductor diode lasers emitting at 905 nanometers in the near infrared and recording reflected light to build up a point cloud mapping the car’s surroundings. But laser-safety rules in the U.S. and other countries restrict the power in the laser pulse, limiting the lidar’s range to 30 to 40 meters, too short a distance for a car to stop safely at highway speeds. Makers of autonomous cars need to spot low-reflectivity objects at least 200 meters away to give the car enough time to identify hazards and stop, so they turned to other technologies. At least one lidar maker, however, kept tinkering.
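Two back-of-the-envelope numbers put the article’s range claims in perspective: the echo from a 200-meter target returns in about 1.3 microseconds, and for a diffuse target the returned power falls off with the square of range, so a 200-meter return is only a few percent as strong as one from 30 meters. The sketch below computes both; it is a simplification that ignores atmospheric loss and target reflectivity.

```python
# Back-of-the-envelope lidar numbers implied by the article:
# pulse round-trip time at a given range, and how returned power
# falls off with distance (inverse-square, diffuse target,
# ignoring atmospheric loss and reflectivity).
C = 299_792_458  # speed of light, m/s

def round_trip_time_us(range_m):
    """Microseconds between firing a pulse and seeing its echo."""
    return 2 * range_m / C * 1e6

def relative_return_power(range_m, ref_range_m=30):
    """Returned power at range_m relative to ref_range_m (1/r^2)."""
    return (ref_range_m / range_m) ** 2

print(round(round_trip_time_us(200), 2))   # ~1.33 us at 200 m
print(relative_return_power(200, 30))      # ~2% of a 30 m return
```

The inverse-square falloff is why Luminar’s move to a more eye-safe wavelength matters: it allows a far more powerful pulse, which is the most direct way to win back that lost signal at long range.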