Demonstration Day

This is a huge update!

Since the last post, ELEC 490 demonstration day has come and gone, and our project took first place! A ton of work went into the week leading up to it, so I'll try to cover it all in this post.

First up- the vehicle.

After we installed the two rear linear actuators, it was time to get them all active. Chris mounted the LAC boards, the Pi 2, and a breadboard for power distribution, and then designed an additional custom battery pack to run the actuators and the Kinect. The LAC boards can take a number of different inputs, but we wanted to work within the capabilities of the Pi. The best way to do that was to use pulse width modulation on the general purpose I/O pins. Chris found some packages that extend this capability to the Pi in an easy-to-use library. Unfortunately for us, the best software option wasn't yet available for the Pi 2, so we used a slightly lower-level library called Servo Blaster. This lets us write commands directly to a device file on the Pi, with options for the PWM cycle time and step size. Each active output takes at most ~1% of the CPU, which isn't much, but it did raise the question- would the Pi be able to handle the whole load, point cloud processing included?
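
For a rough idea of what that looks like, here's a minimal Python sketch of driving one channel through Servo Blaster's device file- the channel number and pulse width below are placeholder values for illustration, not our actual actuator mapping.

```python
# Minimal sketch of commanding one channel through Servo Blaster's device file.
# The channel number and pulse width below are placeholders, not our real mapping.

SERVOBLASTER = "/dev/servoblaster"   # device file created by the servod daemon

def set_actuator(channel, width_steps):
    """Set a PWM channel's pulse width, in Servo Blaster step units."""
    with open(SERVOBLASTER, "w") as dev:
        dev.write("{0}={1}\n".format(channel, width_steps))

# e.g. a 1.5 ms pulse on channel 0 (150 steps at the default 10 us step size)
set_actuator(0, 150)
```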

As it turns out, I learned the answer to this through a frustrating effort to run our main code on my Acer C720. As we developed the main script, it required more and more computation for each new data cycle. Even when we were just running SAC segmentation, I could tell that the processing rate would become an issue- but I didn't anticipate it would impede us as much as it did. After adding a feature to perform multi-cluster extraction, no output was available in RViz- the default ROS visualization tool. Even while debugging the program I failed to get any output- simple cerr messages were failing within the main callback function! After checking a plethora of possible solutions, including throttling the input rate from 30 fps down to 1, I realized that the problem was my own system's resources. It couldn't get through the last callback before a new frame of data reached it, resulting in a useless script. Guess it's time to upgrade! After running it on hardware with four times the RAM and less OS overhead, it worked- really well! That also gave a resounding answer to the earlier question- could we do this on the Pi? Absolutely not... so we had to change the plan. The point cloud processing could occur on a laptop, and we'd send the relevant output to the Pi for control. This would make the system less self-contained, but it was a necessary compromise given our limited hardware. Fortunately, it also meant we could run the Kinect off AC power instead of the bootstrapped battery solution.
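
For context, the failure mode was a callback that simply couldn't keep up with the incoming frames. One way to keep a node responsive in that situation is to drop clouds while the previous one is still being processed- here's a rough Python-flavoured sketch of the idea (our actual node, the one with the cerr messages, is C++/PCL, and the topic name and processing stub below are placeholders):

```python
# Rough sketch of skipping point clouds while the previous one is still processing.
# The topic name and the processing stub are placeholders for illustration.
import threading

import rospy
from sensor_msgs.msg import PointCloud2

busy = threading.Lock()

def process_cloud(msg):
    pass  # stand-in for SAC segmentation + cluster extraction

def cloud_callback(msg):
    # If the last cloud is still being worked on, drop this one instead of queueing it.
    if not busy.acquire(False):
        return
    try:
        process_cloud(msg)
    finally:
        busy.release()

if __name__ == "__main__":
    rospy.init_node("obstacle_extractor")
    # queue_size=1 so stale clouds get thrown away rather than piling up
    rospy.Subscriber("camera/depth/points", PointCloud2, cloud_callback, queue_size=1)
    rospy.spin()
```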

So now it was a couple of days before the 490 demo, and we finally had a consistent obstacle extraction process for our point cloud. We could also control the actuators on the car relatively well (minus one due to an accidental short). But we still had to figure out a good way to determine the geometry of upcoming obstacles and turn that into a control output. With not much time to go, Dom whipped up a clever and straightforward way to determine the primary obstacle's height, length and distance away. Since this was all relative to the camera's perspective, I figured out a way to use the ROS tf library to transform those coordinates into a frame relative to the front wheels. Of course, to do this accurately we needed a finalized mounting position for the Kinect camera on the car!
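
To sketch the idea (this is not Dom's exact code- the frame names and axis conventions here are assumptions): take the bounding box of the primary cluster for height, length and distance, then hand the nearest point to tf to express it relative to the front wheels.

```python
# Hedged sketch with assumed frame names and axis conventions
# (camera frame taken as x forward, y left, z up here).
import rospy
import tf
from geometry_msgs.msg import PointStamped

def obstacle_geometry(cluster_xyz):
    """cluster_xyz: list of (x, y, z) points of the primary cluster, camera frame."""
    xs = [p[0] for p in cluster_xyz]
    zs = [p[2] for p in cluster_xyz]
    height = max(zs) - min(zs)   # vertical extent of the obstacle
    length = max(xs) - min(xs)   # extent along the direction of travel
    distance = min(xs)           # nearest face of the obstacle
    return height, length, distance

def to_wheel_frame(listener, x, y, z):
    """Express a camera-frame point relative to the front wheels via tf."""
    ps = PointStamped()
    ps.header.frame_id = "camera_link"   # placeholder frame names
    ps.header.stamp = rospy.Time(0)      # ask for the latest available transform
    ps.point.x, ps.point.y, ps.point.z = x, y, z
    return listener.transformPoint("front_axle", ps)

# usage: listener = tf.TransformListener(); to_wheel_frame(listener, *nearest_point)
```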

The Kinect mount was a fun challenge motivated by the main limitation of the Kinect- a blindness to anything closer than 0.475 meters. We wanted to mount the camera as far up and back from the front of the vehicle as possible, to ensure that the road immediately in front of the car was visible. I tracked down an aluminum tube that could be cut to fit roughly onto the back bumper and then secured to the rear crossbar on the roof. Next, I 3D printed a Kinect base mount using a design off Thingiverse, which was intended for camera tripods. It had room for a nut to secure the Kinect to a tripod bolt, but in our case we secured the bolt through a block of wood which fit snugly into the aluminum tube. This whole apparatus kept the Kinect rigidly in place without subjecting it to any stresses that could warp the lenses or the body.
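
As a quick sanity check on that blind zone (the mount height and setback below are made-up placeholder numbers, not our measured ones), you can confirm that the patch of road at the front bumper sits outside the Kinect's 0.475 m minimum range:

```python
import math

MIN_RANGE = 0.475    # the Kinect can't see anything closer than this (m)
cam_height = 0.5     # placeholder: camera height above the road surface (m)
cam_setback = 0.4    # placeholder: how far behind the front bumper the camera sits (m)

# straight-line distance from the camera to the patch of road at the front bumper
d = math.hypot(cam_setback, cam_height)
status = "visible" if d > MIN_RANGE else "inside the blind zone"
print("Road at the bumper is {0:.2f} m from the camera -> {1}".format(d, status))
```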

After the Kinect was mounted, we mapped the frame transformation from the lens to the front wheels, and we were on to controls!
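
That measured offset ends up as a fixed tf transform that every node can look up. Here's a minimal sketch of broadcasting it, assuming placeholder frame names and offsets rather than our measured values:

```python
# Minimal sketch of broadcasting the fixed lens-to-front-wheel transform with tf.
# The offsets and frame names are placeholders, not our measured values.
import rospy
import tf

rospy.init_node("camera_frame_broadcaster")
br = tf.TransformBroadcaster()
rate = rospy.Rate(10)   # plain tf transforms need to be rebroadcast periodically

while not rospy.is_shutdown():
    br.sendTransform((-1.2, 0.0, 1.4),        # x, y, z offset in metres (placeholder)
                     (0.0, 0.0, 0.0, 1.0),    # identity rotation quaternion (placeholder)
                     rospy.Time.now(),
                     "camera_link",           # child frame
                     "front_axle")            # parent frame
    rate.sleep()
```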

This ended up being some rough work- in particular because of a networking issue caused by a simple mistake, which took a few hours to diagnose. We decided to SSH from my laptop into the Pi in order to run the entire system as ROS nodes across the two machines. However, when Chris created a new user on the Pi, he chose the password '12345'. Nothing wrong with that, right? Well- as it turns out, we had a keyboard mapping issue and the Pi wasn't reading my laptop's number keys as the correct input. Only after throwing everything else at the problem did we find the true cause. After that, it was as simple as a new password. This time, no numbers. The more you know!

So we finally had the machines communicating with the same ROS master, and from there it was a simple process to turn the output of the laptop's point cloud node into a control algorithm on the Pi. Since the actuators are position-controlled, there was no need for a complex dynamic model to control the wheel forces. We simply mapped the obstacle height and distance, along with the car's relative velocity, to a static value for the vertical actuation rate. The velocity was determined the old-fashioned way, by taking a discrete time derivative of the position. In retrospect, we should be using something more robust for this- e.g. a Kalman filter- but due to the time crunch we stuck with our first method. The downside was some noisy velocity data, but it's a quick fix once we get back to it.
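
Here's roughly what that mapping looks like in Python- the gain, the time-to-impact heuristic and the omission of obstacle height are simplifications for illustration, not our exact control law:

```python
# Simplified sketch of the control mapping: obstacle distance in, actuation rate out.
# The gain and time-to-impact heuristic are placeholders; obstacle height is left out.
import rospy

class BumpController(object):
    def __init__(self, gain=0.5):
        self.gain = gain
        self.prev_dist = None
        self.prev_time = None

    def update(self, distance):
        """Map obstacle distance (m) to a vertical actuation rate in [0, 1]."""
        now = rospy.Time.now().to_sec()
        closing_speed = 0.0
        if self.prev_dist is not None and now > self.prev_time:
            # discrete time derivative of position -> the (noisy) velocity estimate
            closing_speed = (self.prev_dist - distance) / (now - self.prev_time)
        self.prev_dist, self.prev_time = distance, now

        if closing_speed <= 0.0:
            return 0.0                      # not approaching the obstacle
        time_to_impact = distance / closing_speed
        return min(1.0, self.gain / time_to_impact)
```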

This is what we brought to the ELEC 490 demo- and despite our team being zombies due to lack of sleep, we took home the win! It was absolutely stellar to receive that after all the effort we've put in so far, but there's still more to do. As of now, our system has proved a concept, but it'll take some better hardware and algorithms to really prove a point!