Media and Community Discussion

A video feature on the predictive suspension project is available here:



We're in talks with the Wevolver team to turn this project into a fully open-source platform for hackers. However, with several overlapping hardware projects already out there (rovers, robots and drones), it's difficult to find a suitable hardware base that others can replicate easily. Our RC frame and chassis are entirely custom-built, which doesn't bode well for a standardized open-source project. It might be easier to contribute our designs and methods for vertical obstacle traversal to a more established hardware project that uses ROS.

We'll be looking into this more, although our team has now graduated and we're getting busy with other things.

Posted

Demonstration Day

This is a huge update!

Since the last post, ELEC 490 demonstration day has come and gone, and our project took first place! A ton of work happened in the week leading up to it, so I'll try to cover it all in this post.

First up- the vehicle.

After we installed the two rear linear actuators, it was time to get them all active. Chris mounted the LAC boards, the Pi 2 and a breadboard for power distribution, then put together an additional custom battery pack to run the actuators and the Kinect. The LAC boards can take a number of different inputs, but we wanted to work within the capabilities of the Pi, and the best way to do that was pulse width modulation on the general-purpose I/O pins. Chris found packages that extend this capability to the Pi in an easy-to-use library. Unfortunately for us, the best software option wasn't yet available for the Pi 2, so we used a slightly lower-level library called ServoBlaster. It lets us write commands directly to a device file on the Pi, with options for PWM frequency and step size. Each active output takes at most ~1% of the CPU, which isn't much, but it raised the question: would the Pi be able to handle the whole load, point cloud processing included?
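
For the curious, here's a minimal sketch of what driving one actuator channel through ServoBlaster's device file looks like. The channel number and the 1.5 ms mid-stroke pulse are just illustrative, and we're assuming ServoBlaster's default 10 µs step size:

```cpp
#include <fstream>

// Set a ServoBlaster channel to a pulse width given in microseconds.
// Assumes servod is running with its default 10 us step size.
bool setPulseWidthUs(int channel, int width_us)
{
    std::ofstream dev("/dev/servoblaster");
    if (!dev.is_open())
        return false;                          // servod not running?

    dev << channel << "=" << (width_us / 10) << "\n";
    return static_cast<bool>(dev);
}

int main()
{
    // Hypothetical example: drive channel 0 (say, the front-left actuator)
    // to a 1.5 ms pulse, roughly mid-stroke on a typical LAC setup.
    return setPulseWidthUs(0, 1500) ? 0 : 1;
}
```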

As it turns out, I learned the answer through a frustrating effort to run our main code on my Acer C720. As we developed the main script, it required more and more computation on each new data cycle. Even when we were only running SAC segmentation, I could tell that processing rate would become an issue, but I didn't anticipate it would impede us as much as it did. After adding a feature to perform multi-cluster extraction, no output was available in RViz, the default ROS visualization tool. Even when debugging the program I failed to get any output; even simple cerr messages were failing inside the main callback function! After checking a plethora of possible solutions, including throttling the input rate from 30 fps down to 1, I realized that the problem was my own system's resources: it couldn't get through the previous callback before a new frame of data reached it, resulting in a useless script. Guess it's time to upgrade! After running it on hardware with four times the RAM and less OS overhead, it worked, really well! But that gave a resounding answer to the earlier question of whether we could do this on the Pi: absolutely not... so we had to change the plan. The point cloud processing would occur on a laptop, and we'd send the relevant output to the Pi for control. This decreases the standalone ability of the system, but it was a necessary compromise given our limited hardware. Fortunately, it also meant we could run the Kinect off AC power instead of the bootstrapped battery solution.

So now it was a couple of days before the 490 demo, and we finally had a consistent obstacle extraction process for our point cloud. We could also control the actuators on the car relatively well (minus one, lost to an accidental short). But we still had to figure out a good way to determine the geometry of the forthcoming obstacles and turn that into a control output. With not much time to go, Dom whipped up a clever and straightforward way to determine the primary obstacle's height, length and distance. Since this was all relative to the camera's perspective, I worked out a way to use the ROS tf library to transform those coordinates into a frame relative to the front wheels. Of course, to do this accurately we needed a finalized mounting position for the Kinect camera on the car!
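
As a rough illustration of the tf step (the frame names here are placeholders, not our actual ones), the transform boils down to something like this:

```cpp
#include <ros/ros.h>
#include <tf/transform_listener.h>
#include <geometry_msgs/PointStamped.h>

// Express an obstacle point measured in the Kinect's frame relative to a
// frame fixed at the front axle. Frame names are illustrative placeholders.
geometry_msgs::PointStamped toWheelFrame(tf::TransformListener& listener,
                                         const geometry_msgs::PointStamped& in_camera)
{
    geometry_msgs::PointStamped in_wheels;
    // Wait briefly for the static camera -> axle transform to be available.
    listener.waitForTransform("front_axle", in_camera.header.frame_id,
                              in_camera.header.stamp, ros::Duration(0.1));
    listener.transformPoint("front_axle", in_camera, in_wheels);
    return in_wheels;  // x: distance ahead of the wheels, z: height (body-frame convention)
}
```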

The Kinect mount was a fun challenge motivated by the main limitation of the Kinect: blindness to anything closer than 0.475 meters. We wanted to extend the camera as far up and back from the vehicle as possible, to ensure that the road immediately in front of the car is visible. I tracked down an aluminum tube that could be cut to fit roughly onto the back bumper and then secured to the rear crossbar on the roof. Next, I 3D printed a Kinect base mount using a design off Thingiverse intended for camera tripods. It had room for a nut to secure the Kinect to a tripod bolt, but in our case we ran the bolt through a block of wood which fit snugly into the aluminum tube. The whole apparatus held the Kinect rigidly in place without subjecting it to any stresses that could warp the lenses or the body.

After the Kinect was mounted, we mapped the frame transformation from the lens to the front wheels, and we were on to controls!

This ended up being some rough work, in particular because of a networking issue caused by a simple mistake that took a few hours to diagnose. We decided to SSH from my laptop into the Pi in order to run the entire system as ROS nodes across the two machines. However, when Chris created a new user on the Pi, he chose the password '12345'. Nothing wrong with that, right? Well, as it turns out, we had a keyboard mapping issue and the Pi wasn't reading my laptop's number keys as the correct input. Only after throwing everything else at the problem did we find the true reason. After that, the fix was as simple as a new password. This time, no numbers. The more you know!

So we finally had the machines communicating with the same ROS master, and from there it was a simple process to feed the output of the laptop's point cloud node into a control algorithm on the Pi. Since the actuators are position-controlled, there was no need for a complex dynamic model of the wheel forces. We simply mapped the obstacle height and distance, along with the car's relative velocity, to a static value for the vertical actuation rate. The velocity was determined the old-fashioned way, by taking a discrete-time derivative of the position. In retrospect, we should have used something more robust here, e.g. a Kalman filter, but due to the time crunch we stuck with our first method. The downside was some noisy velocity data, but it's a quick fix once we get back to it.
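
To give a flavour of that mapping (the gains, thresholds and actuator speed limit below are made-up placeholders, not our tuned values), the controller is essentially a backward difference for velocity plus a time-to-impact calculation:

```cpp
#include <algorithm>

// Measurement of the primary obstacle, already in the front-wheel frame.
struct ObstacleState {
    double distance_m;  // distance from the front wheels to the obstacle face
    double height_m;    // obstacle height above the road plane
    double stamp_s;     // time the measurement was taken
};

class LiftPlanner {
public:
    // Returns a target actuator extension rate in mm/s (positive = lift).
    double update(const ObstacleState& obs)
    {
        double rate = 0.0;
        if (have_prev_) {
            const double dt = obs.stamp_s - prev_.stamp_s;
            if (dt > 1e-3) {
                // Backward-difference estimate of closing speed (m/s); this is
                // the noisy part a Kalman filter would eventually replace.
                const double closing = (prev_.distance_m - obs.distance_m) / dt;
                if (closing > 0.05 && obs.height_m > 0.01) {
                    const double time_to_impact_s = obs.distance_m / closing;
                    // Spread the required lift over the time remaining, capped
                    // at a placeholder maximum actuator speed.
                    rate = std::min(1000.0 * obs.height_m / time_to_impact_s,
                                    max_rate_mm_s_);
                }
            }
        }
        prev_ = obs;
        have_prev_ = true;
        return rate;
    }

private:
    ObstacleState prev_{};
    bool have_prev_ = false;
    const double max_rate_mm_s_ = 30.0;  // placeholder actuator speed limit
};
```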

This is what we brought to the ELEC 490 demo- and despite our team being zombies due to lack of sleep, we took home the win! It was absolutely stellar to receive that after all the effort we've put in so far, but there's still more to do. As of now, our system has proved a concept, but it'll take some better hardware and algorithms to really prove a point! 

Posted

Progress in all directions

Today was a focused effort on multiple parts of the project.

Chris and I worked on getting the actuators mounted, which wasn't too hard because the Firgelli actuators were a close fit in place of the coilovers, except for one collision. The RC truck has an aggressive 4-link triangulated front and rear suspension geometry, built for stability in extreme rock-crawling conditions. The two upper links blocked the actuators, but we got around it by flipping the upper links. We gave up a slight amount of ground clearance at the centre, but it made for a great fit! No modifications to the mounting points were needed, which helped speed things along.

After Dom completed the open house organizational forms, we started working on getting the Perception package (basically the Point Cloud Library plus frills) running through ROS on the Pi. Our initial plan was to build it from source, since the default ROSberry Pi configuration didn't officially support the PCL package. Knowing this would be a lengthy endeavour, we looked into cross-compiling via another Ubuntu instance. After hitting some snags, I stumbled upon an option we hadn't known about before: since the new Raspberry Pi 2 has an ARMv7 processor, there is finally a way to run regular Ubuntu instead of Raspbian or other RasPi-specific distros! Time to change gears.

Chris got the actuators going using an Arduino, but it required a voltage divider to step the 5 V digital PWM output down to 3.3 V for the RasPi. Not a huge issue, but it added complexity and more wires to a breadboard with sketchy connections to begin with. Luckily, we found that the RasPi has 3.3 V GPIO (that took us too long to check!), so we moved away from the Arduino and started setting up PWM control on the Pi. Which meant we first needed to get ROS working on it!

This took a few hours and taught me a lot about how low-level Linux can be. I'm used to always having a desktop environment at least, but in this case I went as minimal as possible to fit everything on the smallish SD card and keep things relatively quick for the Pi. Bare-bones Ubuntu 14.04.2? Better be good with vi! Well, I immediately got hit with SD formatting problems, network issues, package dependency errors and one-off unresolved bugs, and then finally got it installed! Lesson learned? Never gloss over a step that doesn't make sense just because you "might be able to get away with it". You won't.

Big shout out to Stack Exchange and the whole community around Ask Ubuntu, and the wiki contributors of the world! Honestly... the only reason we can have nice things.


Finally, we're getting close to running everything on a stand-alone system! Next steps for us are wrestling with the Kinect drivers on the Pi, getting the rear actuators installed and getting a PWM manager working in ROS that can leave processing room for PCL functions!

-D



Posted

Time Is Of The Essence

The new term has begun, and with it- a countdown. The ECE open house on February 11th is our goal for a functional demonstration, and we have a long way to go! 

We've gotten planar segmentation to work through ROS- and it looks awesome!


Here's a sample of a textbook on a table, not far off the testing setup we're going for. Unfortunately, it doesn't run anywhere near real-time, even with down-sampling and minimal rendering requirements. So we either need fewer points or a better algorithm. Luckily, ELEC 474 has provided a nugget of wisdom pointing towards a better solution.

With our current RANSAC algorithm, a few random points are sampled and used to propose a plane model; if enough points fit the model within a threshold, those inliers provide the parameters for the plane. Then the outliers are booted and the plane is recalculated, iteratively, until the fit is snug. However, this doesn't work well on huge numbers of points- it's just too much sampling for real-time segmentation and control.
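
For reference, the plane fit itself is just a call into PCL's sample consensus module. A stripped-down version (with example parameter values, not our tuned ones) looks like this:

```cpp
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/ModelCoefficients.h>
#include <pcl/PointIndices.h>
#include <pcl/sample_consensus/method_types.h>
#include <pcl/sample_consensus/model_types.h>
#include <pcl/segmentation/sac_segmentation.h>

// Fit the dominant plane (e.g. the table or road surface) with RANSAC.
pcl::ModelCoefficients fitDominantPlane(
    const pcl::PointCloud<pcl::PointXYZ>::ConstPtr& cloud)
{
    pcl::SACSegmentation<pcl::PointXYZ> seg;
    seg.setOptimizeCoefficients(true);
    seg.setModelType(pcl::SACMODEL_PLANE);
    seg.setMethodType(pcl::SAC_RANSAC);
    seg.setDistanceThreshold(0.01);  // 1 cm inlier band (example value)
    seg.setMaxIterations(100);
    seg.setInputCloud(cloud);

    pcl::ModelCoefficients coefficients;  // plane: ax + by + cz + d = 0
    pcl::PointIndices inliers;            // indices of points on the plane
    seg.segment(inliers, coefficients);
    return coefficients;
}
```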

Instead, we're going to try a method called the 3D kernel-based Hough transform (3DKHT) which uses a spherical sample space to fit multiple planes in an image. Described in a paper by F.A. Limberger & M.M. Oliveira, this method was shown to beat PCL's RANSAC algorithm on a Bremen data set by quite a significant margin: 2.1 seconds for 3DKHT vs 7531 seconds for RANSAC.

This should speed up developments nicely!

In other news, I had the RC car zipping around today and we're looking into manufacturing the test obstacles soon. Hopefully we'll have our traversal performance metrics within the week- we'll be measuring chassis movement so that we can determine how much better (or worse) the predictive suspension performs in the real world.

Posted

Software Stack

As discussed previously, we've got a hardware base selected. That didn't happen completely separately from the software selection, but for the sake of clarity I wanted to cover them in separate posts. So here's a quick overview of how we selected our software stack.

The intention of this system is to find obstacles in the path of the RC car and plan a traversal process for each wheel. The main input sensor is the Kinect, from which we receive a stream of RGB-D images at 30 Hz. This stream needs to be analyzed to determine the road surface the vehicle is driving on, along with any protruding obstacles significant enough to require the suspension to act. To do this, the depth image is converted to a point cloud, where each depth pixel is projected to a 3D point with an (x, y, z) coordinate. In point cloud form, many operations can be run on the data to help extract useful outputs. This will be discussed in more depth later on, as the Point Cloud Library operations have plenty of complexities of their own; for now, we'll keep a high-level view of the PCL operations. The point cloud gets passed through some basic filters, then we run a planar segmentation algorithm to extract the primary and secondary planes. After that, we perform Euclidean cluster extraction to obtain the most distinctive form of the surface obstacles.
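
To make that last step concrete, here's a minimal sketch of the Euclidean cluster extraction we have in mind, run on a cloud that has already been downsampled and had the road plane removed (the tolerances and cluster sizes are illustrative, not final values):

```cpp
#include <vector>
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/PointIndices.h>
#include <pcl/search/kdtree.h>
#include <pcl/segmentation/extract_clusters.h>

// Group the remaining (non-road) points into candidate obstacle clusters.
std::vector<pcl::PointIndices> extractObstacleClusters(
    const pcl::PointCloud<pcl::PointXYZ>::ConstPtr& no_ground_cloud)
{
    pcl::search::KdTree<pcl::PointXYZ>::Ptr tree(
        new pcl::search::KdTree<pcl::PointXYZ>);
    tree->setInputCloud(no_ground_cloud);

    std::vector<pcl::PointIndices> clusters;
    pcl::EuclideanClusterExtraction<pcl::PointXYZ> ec;
    ec.setClusterTolerance(0.02);  // 2 cm neighbour distance (example value)
    ec.setMinClusterSize(100);     // ignore small specks of noise
    ec.setMaxClusterSize(25000);
    ec.setSearchMethod(tree);
    ec.setInputCloud(no_ground_cloud);
    ec.extract(clusters);          // each entry is one candidate obstacle
    return clusters;
}
```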

The point cloud will then be reduced to the useful data: where the road is, where any obstacles reside, how the obstacles are shaped, and what the relative velocity is. The velocity is relative because in this context it doesn't matter whether the vehicle is approaching the obstacles or the vehicle is stationary and the obstacles are approaching; the latter is the assumption we're basing our model on. With this data, we want to plan the traversal. The first question is which wheels are going to hit the obstacle. This will be answered using the relative path of the obstacle towards each of the two wheel tracks (for now, we're restricting motion to a straight line so that the front and rear wheels on each side follow the same linear path), as sketched below. Once that's established, we need to know the vertical height of the object, which should be easy to obtain from the geometric form of the point cloud. Then we need to know when exactly the wheels will impact the obstacle, which is simply a matter of velocity. The exact method of finding and maintaining a velocity measurement is flexible, but we're hoping to use the persistence of the point cloud geometry to interpolate a velocity straight from the image stream- basically, analyzing how the obstacle approaches the vehicle every 30th of a second.
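
The wheel-path check reduces to a simple overlap test once the obstacle centroid is expressed in the body frame. A toy version (the track and tyre widths here are hypothetical placeholders):

```cpp
#include <cmath>

struct WheelHit { bool left; bool right; };

// Decide which wheels an obstacle will meet, given its lateral offset
// (y, positive to the left) and half-width, both in the body frame.
WheelHit wheelsAffected(double obstacle_y_m, double obstacle_half_width_m)
{
    const double half_track_m  = 0.20;  // placeholder: half the RC car's track width
    const double tyre_half_w_m = 0.03;  // placeholder: half the tyre width

    WheelHit hit;
    hit.left  = std::abs(obstacle_y_m - half_track_m) <
                obstacle_half_width_m + tyre_half_w_m;
    hit.right = std::abs(obstacle_y_m + half_track_m) <
                obstacle_half_width_m + tyre_half_w_m;
    return hit;
}
```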

Finally, with the bulk of our input analyzed, we know which wheels have to move, how far they must be raised, and how soon it must happen. The last step is simply translating that into a meaningful control output for the actuators and transmitting it. This will be done through ROS nodelets, which simplify the process of continuously pulling in data, analyzing it, and pushing an output. There will be a fair amount of optimization work to ensure this can all run at 30 Hz (or, more realistically, 10-15 Hz), but that summarizes the general concept of our software!
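
For anyone unfamiliar with nodelets, the skeleton we're planning around looks roughly like this (topic names and the package namespace are placeholders):

```cpp
#include <nodelet/nodelet.h>
#include <pluginlib/class_list_macros.h>
#include <ros/ros.h>
#include <sensor_msgs/PointCloud2.h>
#include <std_msgs/Float64.h>

namespace predictive_suspension  // hypothetical package namespace
{
// Pulls in each point cloud, analyzes it, and pushes out a control value.
class ObstaclePlanner : public nodelet::Nodelet
{
private:
    virtual void onInit()
    {
        ros::NodeHandle& nh = getNodeHandle();
        sub_ = nh.subscribe("camera/depth/points", 1,
                            &ObstaclePlanner::cloudCallback, this);
        pub_ = nh.advertise<std_msgs::Float64>("actuation_rate", 1);
    }

    void cloudCallback(const sensor_msgs::PointCloud2ConstPtr& cloud)
    {
        std_msgs::Float64 cmd;
        cmd.data = 0.0;  // ...filtering, segmentation and planning go here...
        pub_.publish(cmd);
    }

    ros::Subscriber sub_;
    ros::Publisher pub_;
};
}  // namespace predictive_suspension

PLUGINLIB_EXPORT_CLASS(predictive_suspension::ObstaclePlanner, nodelet::Nodelet)
```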

In the future, we'll discuss the nitty-gritty details of the PCL components, the ROS node optimization, and how we'll tune the control algorithm to prioritize chassis smoothness for different obstacle cases.

Ciao,

-Danny

Posted

Hardware decision making

In the past few weeks, we've delved deep into investigating hardware options. So many possibilities exist for a free-form project like this that it can quickly become overwhelming trying to establish a full-stack solution. Luckily, we had some touch points to help guide our decisions. Here's how we proceeded.

First off, we know our limiting factors. The most concrete element of this project is the RC car itself, which we simply cannot replace within our budget and time. So that's a constant, and the first derived requirement is the need for linear actuators to raise and lower the wheels. Next, there's the requirement that everything from power to computation to actuation has to fit within or atop the original chassis. This acts as our base problem statement.

Starting here led us to an easy decision on actuators: the Firgelli line. As a Canadian company with a proven track record, we were happy to give their products a shot. They have a broad selection of sizes and power ratings, and after measuring twice, we found the perfect size to fit the space that the old spring suspension would leave. They come with several options for drive mode; we took the PWM input because of the robustness of such a simple method. To sweeten the deal, we were able to get some of their new P16 models with a tremendous student discount. They're en route right now and we couldn't be more excited! Thanks, Firgelli!

Next up, we have the Xbox Kinect. I bought this stellar 3D camera for only ten bucks, and I'm still thoroughly flabbergasted by what it can do, even after using it for a few months. It outputs an RGB-D image: basically a colour picture with additional depth info, gathered from the Kinect's infrared structured-light sensing system. There are lots of great explanations of how exactly the Kinect gathers that info, so I won't elaborate here- but it's definitely worth a quick AskJeeves!

Finally, we need a system to translate the depth map input into the wheel actuator output. This is, in general, a control system- so we need some kind of PLC (Programmable Logic Controller), right? But of course, this is 2015. There's no point in using over-specialized hardware when general-purpose computers can provide more than enough flexibility and firepower to manage this control problem. We decided on the recently released Raspberry Pi 2 Model B, which provides 1 GB of RAM and a 900 MHz quad-core processor. We think this will be enough to handle the relatively heavy 3D processing load... after some downsampling, of course. More than anything, the minimal cost and wide support were the deciding factors here. We had allocated most of our budget towards the actuators, so we couldn't get any more processing power than this. No problem! Lots of the rough work can be accomplished on a laptop running Ubuntu, and we can optimize for efficiency when we run it on the Pi alone.

So, that's the bulk of the hardware, apart from some of the minor acceleration and gyro sensors for testing and validation. What can we use to digitally glue all these components together? I'll discuss that next, stay tuned!

Posted

Introduction

Taking inspiration from the animal kingdom, it is easy to point out weaknesses in the dynamics of land-based vehicle control. As a cheetah runs, it combines visual prediction, tactile sense and inherent habit. Using these inputs, the animal can adjust pressure at specific parts of each foot, shifting its weight distribution and maximizing tractive force under all conditions. It is able to adapt to uneven terrain while keeping its head nearly level for ideal tracking.


An automobile, by contrast, does none of these things. The most sophisticated suspension systems currently in use allow for the adjustment of damping and stiffness levels, as well as ride height, as with the optional air suspension on the Tesla Model S. Such a system still lacks the ability to control individual contact point pressures, which limits the Model S's capacity to manage variations in road conditions under high-performance operation.

Imagine a blind cheetah, operating at 60 miles per hour without a clue what it will be stepping on; not a recipe for success. It is likely to break a leg through the jarring effects of a badly timed step. The automobile of today is blind to upcoming terrain, with the common result of damaged suspension components and poor performance.


Suspension in vehicles has not fundamentally changed since the leather and wood leaf springs of ancient Rome. The same leaf spring used in horse-drawn carriages still finds a place in heavy utility vehicles produced today. Advancements since the industrial era have come in the form of MacPherson struts, independent coilovers and multi-links in various arrangements to ensure the weight of a vehicle shifts smoothly and evenly under high-G acceleration.

Currently, the German luxury trio of Mercedes-Benz, Audi and BMW have released limited active suspension systems, and most other auto manufacturers have developments underway as well. Many of these attempt to reactively adjust suspension components in response to tactile feedback alone. A few are trying to implement linear electromagnetic generators to both control damping and generate electricity from the force of bumps. Of the released systems, Audi's is the most progressive, detailed conceptually in a non-technical public video (Available here). Audi uses a front-facing camera to scan the road ahead in a narrow FOV and develop a depth-point map. This data is built into a 3D model of upcoming road conditions, which is then used to adjust suspension damping before the terrain is met.

A famous development by Bose Corporation engineers is a full-control electromagnetic suspension. Bose developed linear electromagnetic motors (LEMs) capable of providing enough force to raise an entire vehicle (Available here). Using proprietary control algorithms, the LEMs were installed as suspension components in a test vehicle. The Bose motors are used in a modified MacPherson strut-type layout: the lower suspension arms and rack-and-pinion steering system attach to an aluminum engine cradle that bolts directly to the car body, and each front wheel is fully independent, since no anti-roll bars are required.

This vehicle displayed marvelous dynamic response. The wheels can apply pressure nearly independently of the body, allowing for nearly level cornering, acceleration and braking. In the linked video, the Lexus sedan is even made to jump over a curb entirely, using a hard-coded signal to the four LEMs. However, the system was largely abandoned due to the inherent power issues of running large LEMs in a gasoline vehicle; an onboard generator was required to provide sufficient electrical power.

Over the past ten years, the rapid improvement in electric vehicles (driven by battery technology and the associated power electronics) has led to a reconsideration of active suspension. Furthermore, more efficient linear electromagnetic motors have lowered power demands, such as the SKF design developed at the Eindhoven University of Technology.

With this project, we aim to investigate the performance of an active suspension used in conjunction with another rapidly improving technology: 3D machine vision. With increasing mobile computing power and improved sensors, we can now implement 3D machine vision effectively in a package small enough for low-cost testing. On top of that, vehicles will soon be using LIDAR systems more commonly for autonomous driving, and these can feed preview control algorithms that manipulate an active suspension system.

There has never been a better time to consider vision-based active suspension for a vehicle. In the next post, we'll discuss how our student team aims to implement these concepts in a test prototype.

Posted