Robokar News

Students Race Driverless Cars in Germany in Formula Student Competition

First batch of student-built driverless cars chooses safety over speed

Photo: Formula Student Germany

More than a dozen teams brought driverless cars to the Formula Student competition last week at the Hockenheimring in Germany. It was the first event of its type, but many participants were diligent veterans of Formula Student Electric races and had tested their cars at different types of sites leading up to the main event. “We knew from the electric season that testing is really crucial,” says Manuel Dangel, vice-president of the Formula Student Driverless team at the Swiss Federal Institute of Technology (ETH) in Zurich. Then the rain started falling.

“We thought [our car] would basically fail,” Dangel says. While it had rained on one of their test days, their car’s main way of determining its ground speed was an optical sensor optimized for dry ground. The team had not managed to complete a full ten-lap track drive in the rain.

———-

Read the rest of “Students Race Driverless Cars in Germany in Formula Student Competition” from the source: IEEE Spectrum Cars That Think.

A List of the Best Arduino Robot Car Kits

4WD Remote Control Robot Kit

I spent more than 20 hours studying and analyzing the best Arduino robot car kits. There is much to say about Arduino, but the most important question is why someone would use this board to control a mobile robot.

As you will see in each kit’s description, some of these can be remotely controlled, while others can be programmed to navigate autonomously in the environment. Some kits come with object-detection sensors, while others include a webcam to capture images in real time. The conclusion is simple: working with Arduino in robotics is an open-ended process.

There are a few things to consider when choosing an Arduino robot car kit. One of the most basic is documentation, and the vast majority of these kits include some kind of manual or assembly instructions. In addition, each of the kits below offers a different set of features.

———-

Read the rest of “A List of the Best Arduino Robot Car Kits” from the original source: Into Robotics.

Many different approaches to Robocar Mapping


Source: here.com

Almost all robocars use maps to drive. Not the basic maps you find in your phone navigation app, but more detailed maps that help them understand where they are on the road, and where they should go. These maps will include full details of all lane geometries, positions and meaning of all road signs and traffic signals, and also details like the texture of the road or the 3-D shape of objects around it. They may also include potholes, parking spaces and more.

The maps perform two functions. By holding a representation of the road texture or surrounding 3D objects, they let the car figure out exactly where it is on the map without much use of GPS. A car scans the world around it, and looks in the maps to find a location that matches that scan. GPS and other tools help it not have to search the whole world, making this quick and easy.
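
As a concrete (and heavily simplified) illustration, here is a sketch of that search in Python: slide the car’s scan over the stored map near a coarse GPS guess and keep the offset that correlates best. The grid representation and every name here are our own illustrative assumptions, not any vendor’s actual pipeline.

```python
# Illustrative sketch only: brute-force map matching over a small search
# window, assuming a 2-D grid of road-texture intensities. Real systems
# use far more efficient correlative or feature-based matching.
import numpy as np

def localize(prior_map, scan, gps_guess, window=10):
    """Find the scan's best position in the map near a coarse GPS guess."""
    best_pos, best_score = gps_guess, -np.inf
    r0, c0 = gps_guess
    h, w = scan.shape
    for dr in range(-window, window + 1):
        for dc in range(-window, window + 1):
            r, c = r0 + dr, c0 + dc
            if r < 0 or c < 0:
                continue  # candidate falls off the map edge
            patch = prior_map[r:r + h, c:c + w]
            if patch.shape != scan.shape:
                continue  # candidate falls off the map edge
            # Normalized cross-correlation: highest when textures line up.
            score = np.sum((patch - patch.mean()) * (scan - scan.mean()))
            score /= (patch.std() * scan.std() * scan.size + 1e-9)
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos, best_score

# Demo: plant the scan at a known offset and recover it.
m = np.random.rand(200, 200)
print(localize(m, m[57:67, 93:103], gps_guess=(55, 90))[0])  # -> (57, 93)
```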

Google, for example, uses a 2D map of the texture of the road as seen by LIDAR. (The use of LIDAR means the image is the same night and day.) In this map you see the location of things like curbs and lane markers but also all the defects in those lane markers and the road surface itself. Every crack and repair is visible. Just as you, a human being, will know where you are by recognizing things around you, a robocar does the same thing.

Some providers measure things about the 3D world around them. By noting where poles, signs, trees, curbs, buildings and more are, you can also figure out where you are. Road texture is very accurate but fails if the road is covered with fresh snow. (3D objects also change shape in heavy snow.)

Once you find out where you are (the problem called ‘localization’) you want a map to tell you where the lanes are so you can drive them. That’s a more traditional computer map, though much more detailed than the typical navigation app map.

 

Read the rest of this article at the source: Robohub.

Under the Hood of Luminar’s Long-Reach Lidar

Gif: Luminar Technologies/IEEE Spectrum

Shifting to a longer wavelength that’s safer for the eye lets Luminar raise its lidar power enough to stretch its range beyond 200 meters. Other innovations could cut system costs.

Current automotive lidars scan their surroundings by firing pulses from semiconductor diode lasers emitting at 905 nanometers in the near infrared and recording reflected light to build up a point cloud mapping the car’s surroundings. But laser-safety rules in the U.S. and other countries restrict the power in the laser pulse, limiting the lidar’s range to 30 to 40 meters, too short a distance for a car to stop safely at highway speeds. Makers of autonomous cars need to spot low-reflectivity objects at least 200 meters away to give the car enough time to identify hazards and stop, so they turned to other technologies. At least one lidar maker, however, kept tinkering. 
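
That 200-meter requirement is straightforward kinematics. Here is a back-of-the-envelope check, with our own illustrative values for speed, reaction time, and braking force (none of these numbers come from the article):

```python
# Why 30-40 m of lidar range is not enough at highway speed.
# All values below are illustrative assumptions, not from the article.
v       = 130 * 1000 / 3600   # 130 km/h in m/s (~36.1 m/s)
t_react = 1.0                 # time to detect, classify, and begin braking (s)
a_brake = 4.0                 # comfortable braking deceleration (m/s^2)

d_react = v * t_react              # distance covered before the brakes bite
d_brake = v ** 2 / (2 * a_brake)   # classic v^2 / 2a braking distance
print(round(d_react + d_brake, 1), "m")   # ~199 m: right at the 200 m mark
```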

———-

Read the rest of “Under the Hood of Luminar’s Long-Reach Lidar” from the original source: IEEE Spectrum Cars That Think.

Can we test robocars the way we tested regular cars?


I’ve written a few times that perhaps the biggest unsolved problem in robocars is how to know we have made them safe enough. While most people think of that in terms of government certification, the truth is that the teams building the cars are very focused on this, and know more about it than any regulator, but they still don’t know enough. The challenge is going to be convincing your board of directors that the car is safe enough to release, for if it is not, it could ruin the company that releases it, at least if it’s a big company with a reputation.

We don’t even have a good definition of what ‘safe enough’ is, though most people roughly take it to mean ‘a safety record superior to the average human.’ Some think it should be much more; few think it should be less. Tesla, now with the backing of the NTSB, has noted that their Autopilot system, combined with a mix of mostly attentive but some inattentive humans, may have a record superior to the average human, even though with the inattentive humans it is worse.

Last week I attended a conference in Stuttgart devoted to robocar safety testing, part of a larger auto show including an auto testing show. It was interesting to see the main auto testing show — scores of expensive and specialized machines and tools that subject cars to wear and tear, slamming doors thousands of times, baking the surfaces, rattling and vibrating everything. And testing the electronics, too.

 

Source: Robohub.

CARNAC program researching autonomous co-piloting

Credit: Aurora Flight Sciences.

DARPA, the Defense Advanced Research Projects Agency, is researching autonomous co-piloting so that existing aircraft can fly without a human pilot on board. The robotic system, called the Common Aircraft Retrofit for Novel Autonomous Control (CARNAC) (not to be confused with the old Johnny Carson Carnac routine), has the potential to reduce costs, enable new missions, and improve performance.

Unmanned aircraft are generally built from scratch with robotic systems integrated from the earliest design stages. Existing aircraft require extensive modification to add robotic systems.

RE2, the CMU spin-off located in Pittsburgh, makes mobile manipulators for defense and space. They just received an SBIR award backed by a US Air Force development contract to develop a retrofit kit that would provide a robotic piloting solution for legacy aircraft.

 

Source: Robohub.

Osram’s Laser Chip for Lidar Promises Super-Short Pulses in a Smaller Package

With mass production, it should cost around US $40—less than one percent of the price of today’s least expensive revolving sets

Image: Osram Opto Semiconductors

Those twirling banks of lasers you see atop experimental robocars cost plenty, wear fast, and suck power. The auto industry yearns to solve all those problems with a purely solid-state lidar set that designers can hide behind the grille of a car.

Their wish will come true next year, according to Osram Opto Semiconductors. The company says test samples will be available in 2017 and that commercial models could arrive in 2018. With mass production, the price should drop to around 40 euros (US $43.50), says Sebastian Bauer, the product manager for Osram in Regensburg, Germany.

By comparison, Velodyne’s rooftop lidar towers cost $70,000 and up, and that company’s new, hockey-puck-size model runs around $8000.

Osram’s contribution to the lidar system is a laser chip whose four diodes were all made in one piece, then separated. That means they’re perfectly aligned from the get-go, with no need for after-the-fact fiddling. Each channel fires in sequence, so the returning signal can be matched to its source, letting the system stack each diode’s narrow slice of coverage into a scan that sweeps a large vertical swath.
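
To illustrate the idea, here is a toy sketch of that time-multiplexed firing in Python. The elevation angles, the sweep geometry, and the measure_range() callback are all invented stand-ins; the article does not describe the device’s actual interface.

```python
# Toy model of sequential firing: only one of the four diodes fires at a
# time, so each return can be attributed to its channel, and hence to that
# channel's slice of the vertical field of view. All values are invented.
CHANNEL_ELEVATIONS_DEG = [-7.5, -2.5, 2.5, 7.5]  # assumed 4-way split

def sweep(measure_range, azimuths_deg):
    """Yield (azimuth, elevation, range) points for one horizontal sweep."""
    for az in azimuths_deg:
        for ch, elev in enumerate(CHANNEL_ELEVATIONS_DEG):
            # Fire channel `ch` alone; the echo that comes back now must
            # belong to it, which is what makes the stacking unambiguous.
            rng = measure_range(channel=ch, azimuth_deg=az)
            yield (az, elev, rng)

# Stub sensor returning a constant 100 m, just to run the loop end to end.
points = list(sweep(lambda channel, azimuth_deg: 100.0,
                    azimuths_deg=[a * 0.1 for a in range(-600, 601)]))
print(len(points))  # 1201 azimuth steps x 4 channels = 4804 points
```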

‘You need that for the forward-looking lidar,’ Bauer says. ‘Think of a car traveling along a hilly road, where a single beam’s not enough; often, you’ll just see the sky.’

The other key part of the lidar is a tiny array of mirrors to steer the laser beam. That’s being provided by Osram’s partner, Innoluce, which Infineon Technologies acquired last month. The mirrors are part of a microelectromechanical system (MEMS), so they move on a tiny scale. The MEMS chip can operate at up to 2 kilohertz, scanning the environment for the data a car needs to perform 3D mapping. 

Osram’s four lasers each pulse for 5 nanoseconds, just one-quarter as long as the one-channel laser the company now makes for emergency stopping systems and other functions. Because the laser quickly reaches peak power and then winks out, it can support a robust peak power of 85 watts while shining for only 0.01 percent of the time. That makes it safe for the eyes.

The overall lidar system covers 120 degrees in the horizontal plane, with 0.1 degree of resolution, and 20 degrees in the vertical plane, with 0.5 degree of resolution. In the light of day, it should detect cars from at least 200 meters away, and pedestrians at 70 meters out.
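
Those specs are easy to sanity-check with back-of-the-envelope arithmetic. The calculation below is our own, with the assumption that the 0.01 percent duty cycle applies to each diode:

```python
# Sanity-checking the article's figures; the interpretation is our own.
peak_w  = 85.0       # peak optical power per pulse
pulse_s = 5e-9       # 5 ns pulse width
duty    = 0.0001     # 0.01 percent duty cycle

avg_power_w = peak_w * duty    # ~0.0085 W: an 8.5 mW average, easy on the eye
period_s    = pulse_s / duty   # ~50 microseconds between pulses
pulses_hz   = 1.0 / period_s   # ~20,000 pulses per second per diode

# Field of view divided by resolution gives the points per full frame.
points_per_frame = (120 / 0.1) * (20 / 0.5)   # 1200 x 40 = 48,000

print(avg_power_w, pulses_hz, points_per_frame)
```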

How did Osram generate such short, powerful pulses? ‘We integrated the driver chip from a partner and included a capacitor in the package, keeping the distance between the parts short, so there’s little inductance,’ Bauer says. 

It seems straightforward enough. Why, then, didn’t somebody do it earlier?

‘Good question,’ Bauer laughs. ‘It’s because the lidar market took off just a few years ago, and before that, it was all about these towers on top of the cars. And there were hard problems to solve—it’s not enough to work just in the lab; it has to pass stress tests. It has to be good enough for automotive use.’

 

Source: IEEE Spectrum Cars That Think.

It’s Now (Temporarily) Legal to Hack Your Own Car

Photo: iStockphoto

You may own your car, but you don’t own the software that makes it work; that still belongs to your car’s manufacturer. You’re allowed to use the software, but in the past, trying to alter it in any way (including fixing it yourself when it breaks or patching security holes) was a form of copyright infringement. iFixit, Repair.org, the Electronic Frontier Foundation (EFF), and many others think this is ridiculous, and they’ve been lobbying the government to change things.

A year ago, the U.S. Copyright Office agreed that people should be able to modify the software that runs cars that they own, and as of last Friday, that ruling came into effect. It’s good for only two years, though, so get hacking.

The legal and technical distinction between physical ownership and digital ownership is perhaps most familiar in the context of DVD movies. You can go to the store and buy a DVD, and when you do, you own that DVD. You don’t, however, own the movie that comes on it: Instead, it’s more like you own limited rights to watch the movie, which is a very different thing. If the DVD is protected by Digital Rights Management (DRM) software, the Digital Millennium Copyright Act (DMCA) says that you are not allowed to circumvent that software, even if you’re just trying to watch the movie on a different device, change the region restriction so that you can watch it in a different country, or do any number of other things that it really seems like you should be able to do with a piece of media that you paid 20 bucks for.

 

Source: IEEE Spectrum Cars That Think.

TurtleBot3, the Open Source Ubuntu/ROS-Based Robot Kit

The TurtleBot3 was built by Robotis, and the project is maintained by Open Robotics, a taxable subsidiary of the Open Source Robotics Foundation (OSRF). Open Robotics also maintains the open source ROS stack that runs on the other TurtleBots as well as scores of other robots and drones. Other collaborators include Intel, which contributed the Intel Joule module and Intel RealSense cameras, and Onshape, which supplies its full-cloud 3D CAD editor for working with the TurtleBot’s open source 3D CAD files.

In addition to leveraging LiDAR and RealSense for location, the TurtleBot3 gains precise spatial data by integrating a pair of Robotis Dynamixel smart actuators in the two sprocket wheel joints. Both TurtleBot3 models run on a Robotis OpenCR board for controlling the Dynamixels and sensors, such as the board’s built-in IMU, as well as touch, IR, color, and others. The board can run Arduino IDE code on its 216MHz, 32-bit Cortex-M7 STM32F7 MCU. The high-end STM32F7 includes an FPU, and is said to offer 462 DMIPS performance.

The open spec OpenCR control board has 18 GPIO pins, 32 Arduino pins, and 3.3V, 5V, and 12V power supplies. There are three RS485 and three TTL interfaces for controlling the Dynamixels. Other peripherals include three UART, five ADC, four 5-pin OLLO, and single CAN, SPI, and I2C connections. A micro-USB port lets you connect with a PC, and various LEDs and buttons are available.

TurtleBot3 can be teleoperated by an Android app, as well as wireless devices such as a keyboard, PS3 and Xbox 360 joysticks, the Robotis RC100 controller, the LEAP Motion controller, and more. The open source navigation software runs on Ubuntu 16.04.2 with ROS Kinetic.

(Via Open Electronics.)
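
Because the TurtleBot3 stack is standard ROS, driving the robot from your own code takes only a few lines. Here is a minimal Python sketch for ROS Kinetic that publishes velocity commands on the /cmd_vel topic the TurtleBot3 base listens to; the speeds are illustrative, and it assumes the robot’s usual bringup is already running.

```python
#!/usr/bin/env python
# Minimal ROS Kinetic teleoperation sketch: publishes velocity commands
# on /cmd_vel, the topic the TurtleBot3 base subscribes to. The speeds
# below are illustrative; keep them gentle on a real robot.
import rospy
from geometry_msgs.msg import Twist

rospy.init_node('tb3_teleop_sketch')
pub = rospy.Publisher('/cmd_vel', Twist, queue_size=1)
rate = rospy.Rate(10)            # re-send the command at 10 Hz

cmd = Twist()
cmd.linear.x = 0.1               # forward speed, m/s
cmd.angular.z = 0.5              # turn rate, rad/s

while not rospy.is_shutdown():
    pub.publish(cmd)
    rate.sleep()
```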

Nvidia Opens Up The “Black Box” of Its Robocar’s Deep Neural Network

Images: NVIDIA
The features marked in green form high-priority focus points for the deep neural network.

A deep neural network’s ability to teach itself is a strength, because the machine gets better with experience, and a weakness, because it’s got no code that an engineer can tweak. It’s a black box.

That’s why the creators of Google DeepMind’s AlphaGo couldn’t explain how it played the game of Go. All they could do was watch their brainchild rise from beginner status to defeat one of the best players in the world.

———-

Read the rest of “Nvidia Opens Up The “Black Box” of Its Robocar’s Deep Neural Network” from the original source: IEEE Spectrum Cars That Think.