How a small band of Silicon Valley engineers started a global robotics revolution
Ten years ago today, an engineer at Silicon Valley robotics lab Willow Garage published a new code repository on SourceForge. The repository, made publicly available to anyone in the world who wanted to access it, hosted the codebase for a new project Willow was working on: ROS.
The ROS code repository, set up by Ken Conley, ROS platform manager at Willow, on November 7, 2007 at 4:07:42 PT, marked the first time the term ROS was used as a formal, public designation for Willow’s Robot Operating System project.
Read the rest of “Wizards of ROS: Willow Garage and the Making of the Robot Operating System” from the source: IEEE Spectrum Robotics.
Robot Operating System (ROS) is a mature and flexible framework for robotics programming. ROS provides the tools required to easily access sensor data, process that data, and generate an appropriate response for the robot’s motors and other actuators. The whole ROS system is designed to be fully distributed in terms of computation, so different computers can take part in the control processes and act together as a single entity (the robot).
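The distributed design described above rests on a publish/subscribe model: sensor nodes publish messages on named topics, and control nodes subscribe to the topics they care about, without either side knowing about the other. The plain-Python sketch below illustrates that pattern only; the class and function names are invented for illustration and are not the real ROS APIs (in ROS you would use `rospy.Publisher` and `rospy.Subscriber` over a network).

```python
# Illustrative sketch of topic-based publish/subscribe, the pattern ROS
# builds on. Not actual ROS code: TopicBus, on_scan, etc. are made up.

class TopicBus:
    """Routes messages from publishers to subscribers by topic name."""
    def __init__(self):
        self._subscribers = {}  # topic name -> list of callbacks

    def subscribe(self, topic, callback):
        self._subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, message):
        # Deliver the message to every callback registered on this topic.
        for callback in self._subscribers.get(topic, []):
            callback(message)

bus = TopicBus()
commands = []

# A "control node": turns range readings into motor commands.
def on_scan(distance_m):
    commands.append("stop" if distance_m < 0.5 else "forward")

bus.subscribe("/scan", on_scan)

# A "sensor node" publishes readings; the control callback reacts to each.
bus.publish("/scan", 2.0)   # clear ahead
bus.publish("/scan", 0.3)   # obstacle close

print(commands)  # -> ['forward', 'stop']
```

Because publishers and subscribers only share a topic name, the two "nodes" here could just as well run on different machines, which is what makes the computation distributable.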
Autonomous driving is an exciting subject with demand for experienced engineers increasing year after year. ROS is one of the best options to quickly jump into the subject. So learning ROS for self-driving vehicles is becoming an important skill for engineers. We have presented here a full path to learn ROS for autonomous vehicles while keeping the budget low. Now it is your turn to make the effort and learn. Money is not an excuse anymore. Go for it!
Read the rest of “How to start with self-driving cars using ROS” from the source: Robohub.
Elon Musk famously thinks that cars can be made to drive themselves without relying on expensive laser-ranging lidars. But while Tesla is moving ahead with one fewer sensor than most self-driving car companies, a new startup wants them to add yet another—an infrared camera.
Adasky is developing a far infrared thermal camera called Viper that it says can expand the conditions that automated cars will be able to operate in, and improve safety.
“Today’s sensors are not good enough for fully self-driving cars and that’s where we come in,” says Dror Meiri, vice president of business development at Adasky. “We think infrared (IR) technology can bridge the gap from Level 3 all the way to Levels 4 and 5.”
In this episode, Audrow Nash interviews Chris Gerdes, Professor of Mechanical Engineering at Stanford University, about designing high-performance autonomous vehicles. The idea is to make vehicles safer, as Gerdes says, he wants to “develop vehicles that could avoid any accident that can be avoided within the laws of physics.”
In this interview, Gerdes discusses developing a model for high-performance control of a vehicle; their autonomous race car, an Audi TTS named ‘Shelley,’ and how its autonomous performance compares to amateur and professional race car drivers; and an autonomous, drifting DeLorean named ‘MARTY.’
This is the cheapest good computer vision autonomous car you can make — less than $85! It uses the fantastic OpenMV camera, with its easy-to-use software and IDE, as well as a low-cost chassis that is fast enough for student use. It can follow lanes of any color, objects, faces and even other cars. This is as close to a self-driving Tesla as you’re going to get for less than $100.
It’s perfect for student competitions, where a number of cars can be built and raced against each other in an afternoon.
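One common approach to the kind of color-based lane following described above is to threshold each camera frame and steer toward the centroid of the lane pixels. The sketch below illustrates that idea in plain Python on a toy frame; it is not the actual OpenMV API (which offers MicroPython helpers such as blob detection), and the function name and parameters are invented for illustration.

```python
# Centroid-based line following, sketched on a frame represented as a
# list of rows of grayscale pixel values (0-255). Illustrative only.

def steering_from_frame(frame, threshold=128, gain=1.0):
    """Return a steering value in [-1, 1]; negative steers left."""
    # Collect x-coordinates of all pixels bright enough to be "line".
    xs = [x for row in frame for x, px in enumerate(row) if px > threshold]
    if not xs:
        return 0.0                     # no line found: go straight
    centroid = sum(xs) / len(xs)       # horizontal center of the line
    center = (len(frame[0]) - 1) / 2.0 # horizontal center of the image
    # Proportional steering: normalized offset of line from image center.
    error = (centroid - center) / center
    return max(-1.0, min(1.0, gain * error))

# A toy 4x8 "frame" with a bright vertical line near the right edge.
frame = [[0] * 8 for _ in range(4)]
for row in frame:
    row[6] = 255

print(steering_from_frame(frame))  # positive value: steer right, toward the line
```

On real hardware the same loop runs per frame, with the steering value mapped to a servo command; the proportional gain is then tuned on the track.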
Read the rest of “A “Minimum Viable Racer” for OpenMV” from the source: DIY Robocars.
On-the-fly mapping got the driverless car through a rainy day
Engineering student Manuel Dangel of Swiss Federal Institute of Technology (ETH) in Zurich and teammates were walking the racecourse at Formula Student Driverless in Hockenheimring, Germany, earlier this month when they realized that the computerized wheelbarrow they were using to map the course had gone haywire. [See “Students Race Driverless Cars in Germany in Formula Student Competition,” 16 August 2017.]
As part of the track-drive event, one of several events that make up the entire competition, the rules permit teams half an hour to walk the racecourse and make measurements they might need to program their driverless cars. Because the track-drive event consists of ten solo laps on the same, unchanging course among traffic cones, “the basic strategy is to run within the map,” Dangel says. If you cannot make a map before the event, though, you have to switch to a more complex strategy.
Karlijn Willems lists seven steps (and 50+ resources) that will help you get started with machine learning.
You may have heard about machine learning from interesting applications like spam filtering, optical character recognition, and computer vision.
Getting started with machine learning is a long process that involves going through several resources. There are books for newbies, academic papers, guided exercises, and standalone projects. It’s easy to lose track of what you need to learn among all these options.
So in today’s post, I’ll list seven steps (and 50+ resources) that can help you get started in this exciting field of Computer Science, and ramp up toward becoming a machine learning hero.
Read the rest of “How Machines Learn: A Practical Guide – freeCodeCamp” from the source: freeCodeCamp.
First batch of student-built driverless cars chooses safety over speed
More than a dozen teams brought driverless cars to the Formula Student competition last week in Hockenheimring, Germany. It was the first event of its type, but many participants were diligent veterans of Formula Student Electric races and had tested their cars at different types of sites leading up to the main event. “We knew from the electric season that testing is really crucial,” says Manuel Dangel, vice-president of the Formula Student Driverless team at the Swiss Federal Institute of Technology (ETH) in Zurich. Then the rain started falling.
“We thought [our car] would basically fail,” Dangel says. While it had rained on one of their test days, their car’s main way of determining its own ground speed is an optical sensor optimized for dry ground. The team had not managed to complete a full ten-lap track drive in the rain.
I spent more than 20 hours studying and analyzing the best Arduino robot car kits. There is much to say about Arduino, but the most important question is why someone would use this board to control a mobile robot.
As you will see in every kit’s description, some of these can be remotely controlled, while others can be programmed to navigate the environment autonomously. Some kits come with object detection sensors, while others include a webcam to capture images in real time. The conclusion is a simple one: working with Arduino in robotics is a process that will never end.
There are a few things to consider when choosing an Arduino robot car kit. One of the basics is documentation, and the vast majority of these kits include some kind of manual or assembly instructions. In addition, each of the kits below has its own set of features.
Read the rest of “A List of the Best Arduino Robot Car Kits” from the original source: Into Robotics.