
Kalman Filtering

As a gift this last Christmas, I received a Canon PowerShot SX230HS. This camera has the ability to shoot high-speed video at 120 and 240 frames per second. I knew immediately that I wanted to use this feature to do some physics, tracking, and signal processing experiments. There is nothing that incorporates these three things more than a Kalman filter. This post will detail some of the basic Kalman filtering experiments this camera has allowed me to explore.

The first experiment tracks an object in free fall and uses a Kalman filter to estimate the position, velocity, and acceleration due to gravity. The second experiment looks at a harmonic oscillator consisting of a weight suspended on an elastic band and uses an extended Kalman filter to estimate the damping ratio, the natural frequency, and the band’s spring constant. At the end of the post, I’ll share all the data and Matlab M-files for those who are interested.

Camera Calibration

Before diving into the experiments, I thought it would be worthwhile to double check the manufacturer’s claim on the frame rate of the high-speed video modes. The manual states that the camera can operate at both 120 FPS and 240 FPS; however, it gives no tolerance values on these rates. To check the true high-speed frame rate, I created a simple system that used the FPGA from the previous post to turn on a series of LEDs for 1 millisecond once every second. I then filmed these lights in the high-speed mode and used them as a 1-second time stamp.

In the picture below you can see the images of the FPGA immediately before and during the LED illumination. To identify the frame which contained the illuminated LEDs, I integrated the total energy in the white box, shown below, for each frame.
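
This step is simple in Matlab; a minimal sketch looks something like the following, where the filename and the ROI coordinates are placeholders, not my actual values:

% Sum the pixel intensities inside a fixed ROI for every frame of the video.
v = VideoReader('fpga_leds.avi');        % hypothetical filename
rows = 100:150;  cols = 200:300;         % placeholder ROI (the white box)
energy = zeros(1, floor(v.Duration * v.FrameRate));
k = 1;
while hasFrame(v)
    f = rgb2gray(readFrame(v));          % convert frame to grayscale
    energy(k) = sum(sum(double(f(rows, cols))));   % integrate ROI energy
    k = k + 1;
end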

The graphic below shows the integrated energy from frame to frame while shooting at 240 FPS. The LEDs provide a very high-SNR time mark that allows for easy identification. It’s interesting to note that every fifth second there appears to be a slight dip in the total energy. I believe that in these cases, the 1 ms illumination is actually straddling two frames, so the energy is spread across both, compared to the other cases.

To help visually track the actual frame rate, I reshaped the 1D plot above into a 2D image with columns of length 240. If the frame rate were truly 240 FPS, the 2D image would have a single horizontal bar across the image, indicating that 240 frames represents exactly 1 second. As you can see below, the line is not horizontal, and instead indicates that the frame rate is slightly higher than 240 FPS. This image also makes it clear that the dips in energy seen above are due to the illumination spreading across two frames.

To get a better idea of the exact frame rate, I plotted the deviation from an ideal 240 FPS below. There is a clear linear trend with a slope of approximately 0.386 frames/s. This indicates that the true frame rate of the camera, when shooting at the nominal 240 FPS, is 240.386 FPS. I performed a similar check at 120 FPS and found the rate to be about 120.12 FPS.
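
In Matlab terms, the reshape and the linear fit amount to a few lines; here is a sketch, with frameIdx as a hypothetical vector of the detected LED-frame indices:

% Reshape the 1D energy trace into columns of 240 samples to visualize drift
img = reshape(energy(1:240*floor(numel(energy)/240)), 240, []);
imagesc(img);

% Fit a line to the deviation of the detected LED frames from an ideal 240 FPS
t = 0:numel(frameIdx)-1;               % elapsed seconds (one LED flash per second)
dev = frameIdx - frameIdx(1) - 240*t;  % deviation from the ideal frame count
p = polyfit(t, dev, 1);                % slope p(1) is the drift in frames/s
trueFps = 240 + p(1);                  % ~240.386 FPS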

Armed with an accurate estimate of the true time between frames, I was then able to proceed with the data collection and tracking.

Falling Weight

The setup for the first experiment consisted of the camera mounted on a tripod filming a weight falling in front of a wall. The wall contained a measuring stick to allow me to make accurate measurements of the distance traveled. An image from the movie is shown below, with the weight in free fall.

The tracking was done in Matlab and consisted of very simple blob detection. The goal of this experiment was not to test computer vision algorithms; it was to use the high-speed camera to get tracking data, so I used a very simple algorithm to detect the weight. The displacement data could have just as easily come from me manually identifying the location of the weight from frame to frame. One thing to note: I waited a few frames after the weight left my hand before beginning to track. This ensured that there were no unmodeled forces on the weight, but it also introduced a slight initial velocity to the object.
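
For reference, this sort of blob detection can be done in just a few lines; a sketch under my own assumptions (an illustrative threshold, and the weight being the largest dark blob against the light wall):

% Very simple blob detection: threshold the frame and take the centroid
% of the largest connected component.
bw = im2bw(frame, 0.3);                  % binarize (threshold is a guess)
bw = ~bw;                                % assume the weight is dark on a light wall
cc = bwconncomp(bw);                     % find connected components
[~, idx] = max(cellfun(@numel, cc.PixelIdxList));   % pick the largest blob
s = regionprops(cc, 'Centroid');
pos = s(idx).Centroid;                   % (x, y) pixel position of the weight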

The plot below shows the measured displacement along with a quadratic fit to the data. As you can see, the data is quite clean and the fitted line is very close to what I would expect for this setup. The thing to remember about this fit is that I had to wait until all the data was collected and then perform the least-squares fitting; the Kalman filter allows this calculation to be done recursively with each new sample.
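
The batch fit itself is essentially a one-liner in Matlab (with t and d as the frame times and measured displacements):

% Least-squares quadratic fit: d ~ p(1)*t.^2 + p(2)*t + p(3)
p = polyfit(t, d, 2);
aFit = 2*p(1);     % constant acceleration implied by the batch fit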

Kalman Filter Tracking

The first step in using the Kalman filter is identifying the model to represent the system. I chose to model the falling weight as a mass moving with constant acceleration. This model ignores the effects of air resistance, but for a free fall over this short distance in still air, it seems reasonable. The continuous state-space model is shown below with w(t) representing a random acceleration, i.e. the process noise.
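
One standard way to write this, with the state vector x(t) = (d, v, a)^T holding position, velocity, and acceleration, and the noise entering through the last state (my reconstruction of the model described above):

\dot{x}(t) = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{bmatrix} x(t) + \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} w(t)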

The representation shown above is not the only way to model an object in free fall. Since the acceleration due to gravity on Earth is well known and constant, I could incorporate this into the model and reduce the state-vector to 2 dimensions. Instead, I chose to augment the state vector and allow the filter to converge to the acceleration. This convergence will provide a check on the overall performance of the filter.

For this model to be of use in the Kalman filter, it must be moved from the continuous representation into a discrete form. This requires finding the state-transition matrix that will propagate the current state to the state at the next sampling interval. This calculation is shown below.
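
Because the F matrix above is nilpotent (F^3 = 0), the matrix exponential series terminates, and with sampling interval T (here 1/240.386 s):

A = e^{FT} = I + FT + \frac{(FT)^2}{2} = \begin{bmatrix} 1 & T & T^2/2 \\ 0 & 1 & T \\ 0 & 0 & 1 \end{bmatrix}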

In addition to the state-propagation, we must also compute the process noise covariance matrix. This requires a matrix integral, but for this 3×3 state-transition matrix, the math is not too difficult. The graphic below shows this calculation.
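
Assuming w(t) enters through G = (0, 0, 1)^T with spectral density q, as in the continuous model above, the integral evaluates to:

Q = \int_0^T e^{F\tau} G \, q \, G^T e^{F^T\tau} \, d\tau = q \begin{bmatrix} T^5/20 & T^4/8 & T^3/6 \\ T^4/8 & T^3/3 & T^2/2 \\ T^3/6 & T^2/2 & T \end{bmatrix}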

With these matrices in hand, we can essentially plug and chug with the Kalman filter equations to process the data. The actual code to track the data with the Kalman filter is very straightforward and consists of just 9 lines of m-code (including code to save the variables). I’ve included the entire m-file at the end of the post, but I thought I would also post the core Kalman code below.
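
The core loop is just the textbook predict/update recursion; a sketch of it (not a byte-for-byte copy) is shown here, with Q as the process noise matrix from above and Xs, Ps, and Ks storing the per-frame histories:

for k = 1:numel(D)
    X = A*X;                        % predict: propagate the state one frame
    P = A*P*A' + Q;                 % predict: propagate the covariance
    K = P*H' / (H*P*H' + R);        % compute the Kalman gain
    X = X + K*(D(k) - H*X);         % update the state with the new measurement
    P = (eye(3) - K*H) * P;         % update the covariance
    Xs(:,k) = X;  Ps(:,:,k) = P;  Ks(:,k) = K;   % save for the plots below
end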

In the code above, A represents the state-transition matrix, D represents the measured displacement, H is the measurement matrix ([1 0 0]), and R is the measurement noise (estimated from the data). What’s not shown above is the initialization of the X and P vectors. The initial diagonal elements of the covariance matrix P give an indication of the confidence in the initial state vector X. A small value in P represents higher confidence in the initial value, while a large value represents less certainty. For this experiment I set the initial values of P quite large (compared to the initial X values). If the chosen model is a good fit for the process, the covariance matrix P should converge, indicating that the filter is combining the measurements with the model to reduce the uncertainty in the state estimate.

Results

The Kalman filter code above will produce and store three sets of variables: the state estimates, their covariances, and the Kalman gain at each step. The state variables and associated variances will allow us to inspect both the current estimate and the uncertainty present in that estimate. The Kalman gain will allow us to look at how the filter is incorporating new measurements. Initially, we should expect the Kalman gain to be relatively high (i.e. the filter needs to integrate new information), but as time progresses, if the filter is accurately modeling the system, the Kalman gain should fall, indicating that the a-priori estimates are accurately predicting the next state.

The plot of the estimated displacement with time is shown below. It’s difficult to make out, but there is a blue line representing the displacement and dashed black lines above and below it representing the std. deviation of the position error. The minimal error present in this state variable throughout the entire filter duration is due to the high-SNR measurement used as input to the filter. The filter is balancing the error from the measurement with the error from the model, but it will never produce an estimate with more error than the measurement.

The next plot shows the estimated velocity with time and begins to demonstrate the ability of the Kalman filter to reduce the error in the state estimate. I chose an initial velocity estimate of 0 m/s with a std. deviation of 5 m/s. As the filter took in more data, the estimated error quickly converged; this can be seen in how tightly the dashed lines border the estimate.

The final state variable, the acceleration, allows some independent confirmation that the filter is accurately modeling the system dynamics. Whereas the velocity plot above seems good given the convergence of the error, it is difficult to know whether the estimated velocity represents the true velocity of the weight. The acceleration, on the other hand, should converge to the well-known 9.8 m/s^2. As you can see in the plot below, this is exactly what happens.

Kalman Filter Estimate

The acceleration was initialized at 2 m/s^2 with a std. deviation of 10 m/s^2. The final estimate of the acceleration was approximately 9.82 m/s^2 with a std. deviation of approximately 0.2 m/s^2. This is in line with what we would expect for the acceleration estimate.

The final aspect of the filter operation that should be inspected is how the Kalman gain varies with time. The plot below shows the Kalman gains for the individual states vs. time.

Kalman Filter Gain

Initially the Kalman gains are relatively high, but as time progresses and the state estimate and model become better able to predict the system, the gains begin to fall.

I think it is worthwhile to look at how the Kalman velocity and acceleration estimates compare with numerical estimates of the first and second derivatives of position. I’m going to use a finite difference approximation to the derivative instead of something like a Savitzky-Golay filter since this will offer a derivative estimate with minimal delay. The first plot below shows the numerical derivative approximation of the velocity.
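
The finite-difference estimates themselves are one-liners (with T as the frame interval, 1/240.386 s, and d as the measured displacement):

% Finite-difference estimates of velocity and acceleration from position
vFd = diff(d) / T;        % first difference approximates the velocity
aFd = diff(d, 2) / T^2;   % second difference approximates the acceleration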

This velocity estimate is clearly noisier than the Kalman estimate, but the overall trend is still clear and correct. It’s interesting to look at what happens between 300 and 350 ms: there is a clear disruption in the data which causes the velocity estimate to jump wildly. This disruption has minimal impact on the Kalman velocity estimate, which is to be expected, since by that point the filter is relying more on the a-priori estimates, as evidenced by the low estimate error and Kalman gain.

The plot below shows the numerical derivative estimate of the acceleration. This estimate is wildly inaccurate and serves as a nice comparison to the much more accurate Kalman estimate.

I’ve created a small video incorporating the Kalman filtering results above and also showing the high-speed video used to perform the tracking. In doing all the data processing to get these results, I sort of lost track of how quickly everything happens. It is one thing to see a plot with a duration of 400 ms, but it’s another to watch the convergence in real-time or even slowed down 6x as in the video.

The video shows the weight falling three times. The first time shows the raw video slowed down 6 times. The second repetition overlays a green dot on the tracked position of the weight. The final repetition shows the online convergence of the acceleration state parameter.

Extended Kalman Filter

After fooling around with the Kalman filter, I wanted to explore the extended Kalman filter (EKF) a little. An EKF is an extension of the Kalman filter that can handle non-linear system models. A common use of the EKF is model parameter identification. For any model of relative complexity, there are usually a variety of constants that must be known (e.g. mass of an object, coefficient of friction, temperature of surroundings, etc.). When these constants are known, they can be used to populate the state-transition matrix, resulting in a linear update equation. If, however, these parameters are not known, the state vector can be augmented with the unknown parameters. This allows the filter to attempt to estimate the parameters, but it introduces non-linearity into the system, since the state-transition matrix now depends on the state. It’s this idea of using the EKF to identify model parameters that I wanted to explore.

The specific model I decided to use was a harmonic oscillator, specifically a weight bouncing on an elastic string. The rationale behind this was twofold. First, there aren’t many systems that have such a nice physical model and can be implemented as easily as a harmonic oscillator. Second, one of my lab mates, Dalong Liu, published a great paper on using an EKF to estimate tissue viscoelastic parameters by modeling a tissue construct as a forced harmonic oscillator excited by an acoustic radiation force pulse. This work, along with more recent work by Mehul Soman, has given me some familiarity with using EKFs for parameter identification.

The general setup of this experiment is very similar to the previous setup. A camera was mounted to film a weight bouncing on an elastic string in front of the same wall as the previous experiment. To begin the oscillations, I pulled the weight down from its natural resting location and released it. I attempted to pull the weight straight down with as little side-to-side movement as possible. The tracking used the same blob detection as before, but only measured the vertical displacement. I made no attempt to record or model the side-to-side movement of the weight throughout its oscillations, instead, these deviations are incorporated into the process noise of the system. The graphic below shows a single frame from the high-speed movie.

From this video, I was able to obtain the following displacement data. Here again, the data is of relatively high quality and the motion, at least qualitatively, appears to be representative of a harmonic oscillator. It’s clear that the motion is under-damped with a period of just over a second.

The differential equation representing a harmonic oscillator is well known and shown below. Here the process noise is represented by w(t). Given the initial conditions of the oscillator and knowledge of the damping ratio (xi) and the natural frequency (omega), this equation describes the subsequent motion.
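
In terms of the damping ratio \xi and the natural frequency \omega:

\ddot{x}(t) + 2\xi\omega\,\dot{x}(t) + \omega^2 x(t) = w(t)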

To help make this equation more tractable, we can reformulate it in its state-space representation. If we ignore the process noise for the moment, the state-space representation of this system is shown below. The first element of the state vector represents the position of the object while the second represents the velocity.
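
Dropping w(t):

\frac{d}{dt} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ -\omega^2 & -2\xi\omega \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}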

Just like the Kalman filter, the state-space equation here represents the continuous-time model. To be of use in the EKF, it must be converted to a discrete model. If we know the natural frequency, the damping ratio, and the sampling interval, then we can quickly identify the state-transition matrix as we did above. What I didn’t mention previously is that identifying this state-transition matrix is a computationally intensive process. For the Kalman filter this was not much of an issue, since we only needed to compute the matrix once and could do so offline; with the EKF, things are not that simple. Once we augment the state vector with the damping ratio and the natural frequency, the state matrix is no longer time-invariant and will depend on the current state. This implies that if we are to use the techniques from above to estimate the state-transition matrix, we will need to compute a matrix exponential at each prediction update. Given how expensive this operation is, it is worthwhile to look at a few alternatives.

Specifically, I looked at two: first, using a first-order Taylor series approximation of the state-transition matrix, and second, using a 4th-order Runge-Kutta (RK4) method to propagate the state. To test the performance of these two schemes, I tried them on an ideal harmonic oscillator with a natural frequency of 2*pi*3 rad/s and a damping ratio of 0.1. I looked at the ability of each method to accurately propagate the state across a 100 ms interval with various time steps. The phase plane for this system is shown below, with the integration results from the Taylor series approximation in red and the Runge-Kutta results in black. The red dots represent the starting and ending points on the true phase-plane trajectory. These results come from breaking the 100 ms interval into four 25 ms integration steps.

Harmonic Oscillator Phase Plane
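
For reference, a single RK4 step for this system is only a few lines of m-code; a sketch, with f as the derivative function from the state-space form above and h as the step size:

% One RK4 step of size h for the oscillator state x = [position; velocity]
f  = @(x) [x(2); -omega^2*x(1) - 2*xi*omega*x(2)];   % derivative function
k1 = f(x);
k2 = f(x + (h/2)*k1);
k3 = f(x + (h/2)*k2);
k4 = f(x + h*k3);
x  = x + (h/6)*(k1 + 2*k2 + 2*k3 + k4);              % propagated state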

The RK4 method is clearly superior to the first-order Taylor series approximation, but it also requires more operations. Perhaps, if I decreased the integration step size, I could achieve similar performance with the Taylor series approximation, lightening the computational burden? To look at this, I measured the norm of the integration error for both methods at various integration step sizes. The log-compressed results are shown below.

The error in the RK4 method very quickly drops to what is functionally equivalent to no error. The Taylor series approximation does not drop off nearly as quickly. In fact, to reach the same level of error present in the RK4 method with only 3 integration steps across the 100 ms, the Taylor series approximation requires just over 600 integration steps. These results justify the additional computation required with the RK4 method.

The other computation required during the prediction step is the covariance update. Unlike the Kalman filter, where the state-transition matrix was used to update the covariance, in the EKF we must use the Jacobian of the derivative function f. This is relatively straightforward to calculate and is shown below.
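
With the augmented state x = (x_1, x_2, \xi, \omega)^T and derivative function f(x) = (x_2,\; -\omega^2 x_1 - 2\xi\omega x_2,\; 0,\; 0)^T, the Jacobian is:

J = \frac{\partial f}{\partial x} = \begin{bmatrix} 0 & 1 & 0 & 0 \\ -\omega^2 & -2\xi\omega & -2\omega x_2 & -2\omega x_1 - 2\xi x_2 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}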

With the state update method chosen and the matrices needed for the covariance update identified, we now have everything in place to process the data. The only remaining step is to choose the initial state vector and the initial covariance estimates. I chose the initial position estimate based on the first analyzed video frame; for the velocity, I just guessed a reasonable value (60 cm/s). For both the position and velocity, I chose very large initial covariance values, 10,000 and 1,000,000 respectively. I also tried to choose reasonable values to initialize the damping ratio and the natural frequency, 0.1 and 3 rad/s, and chose large covariance estimates for these as well, 100 and 1,000.
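
In m-code, the initialization amounts to something like this (d(1) being the first measured displacement, with units assumed to match the plots):

% Initial EKF state and covariance, using the values described above
X = [d(1); 60; 0.1; 3];                 % [position; velocity (cm/s); xi; omega (rad/s)]
P = diag([10000, 1000000, 100, 1000]);  % large values reflect low initial confidence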

The plot below shows the estimated displacement overlaid on the measured displacement. As was the case with the Kalman filter above, this estimate was very much in line with the measured displacement.

Extended Kalman Filter

The high-speed video had a relatively low resolution, which introduced quantization effects into the measured data. The plot below shows a close-up of the measured data and the estimated data. You can clearly see the quantization present in the measured data, but the estimated data does not suffer from these effects.

I’ll skip the velocity state variable and move directly to the damping ratio and the natural frequency. Both of these variables showed nice, fast convergence. The plot of the damping ratio below shows that within half a second, the covariance associated with this state had already converged to a relatively low value. The steady-state value of the damping ratio also makes intuitive sense: the displacement plot clearly shows an under-damped oscillator, and a damping ratio of ~0.07 is very much under-damped.

The next plot shows the estimate of the natural frequency. For reference, the initial value of the natural frequency was 3 rad/s with a std. dev. of just over 30 rad/s. Within 1 second the std. dev. of this estimate has decreased to around 1 rad/s, and it continues to decrease throughout the observation period.

It’s actually possible to independently check the validity of the estimated natural frequency. The natural frequency of a harmonic oscillator consisting of a weight on a spring is given by the square root of the spring constant over the mass of the weight. I measured the mass of the weight used in these experiments as approximately 138 grams. Measuring the spring constant took a little improvisation: I used quarters as a standard weight to measure the displacement due to two different forces. I first measured the displacement with no quarters attached to the weight; I then taped 7 quarters to the weight to increase the mass by 39.7 grams and measured the displacement a second time. Under this load, the measured displacement was 15.2 cm. This gives a spring constant of about 2.5 N/m and a natural frequency of about 4.3 rad/s, which is very close to the estimated frequency of 4.37 rad/s.
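
Working through the numbers:

k = \frac{\Delta m \, g}{\Delta x} = \frac{0.0397\,\text{kg} \times 9.8\,\text{m/s}^2}{0.152\,\text{m}} \approx 2.56\,\text{N/m}, \qquad \omega = \sqrt{k/m} = \sqrt{2.56/0.138} \approx 4.3\,\text{rad/s}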

The video below shows the parameter convergence along with the high-speed video at 6 times slower than real-time.

Moving forward, I’d like to use this same data to test out some more advanced Bayesian filters such as the unscented Kalman filter or particle filters. It would be interesting to compare various aspects of these filters (e.g. overall convergence time, stability, sensitivity to the initial guess, etc.). I’m also looking to collect additional data on some well-behaved dynamic motion. Right now I’m thinking I’ll try to collect some data of a free-spinning bicycle tire or some sort of projectile (maybe a Nerf bullet or a marble launched by a rubber-band catapult). I’d like to use these filters to help predict future states as well. For instance, with the bicycle tire, could I predict at what point in the future the tire will stop spinning, or for the Nerf bullet, could I predict where the bullet would land? One really interesting idea would be to use a two-camera setup to observe a dart in motion and attempt to predict where on the board the dart would land.

For those that are interested, I’ve uploaded the Matlab code for the Kalman filter and the Extended Kalman filter along with the measured data, you can find the files here.