Category Archives: Signal Processing

Garage Door Hack

One field of communication and DSP that has always intrigued me is signal intelligence: the attempt to receive and decode an unknown signal. It’s an interesting puzzle that requires a wide variety of hardware, software, and analytical skills. I like to think that my research is, at least tangentially, a form of signal intelligence in that I’m attempting to use ultrasound signatures to determine what is happening inside the body. While receiving and interpreting ultrasound signals is interesting, I wanted to look at more artificial, data-bearing signals. I decided I would try to receive and decode the signal my garage door remote transmits to the base unit to open and close the door. In addition, I wanted to create a device that would mimic my remote and open the garage door on my command. This post details that attempt.

Remote Signal

The first task in this project was determining the frequency at which my garage door remote operated. This was actually quite simple since the FCC ID was easily visible. A quick check of the FCC website showed that the remote operated at 390 MHz. While not a super high frequency signal, it was beyond the bandwidth of my previous SDR. That SDR used an ADC with an integrated analog front end, which limited its bandwidth to a few dozen MHz. So, driven partly by this project as well as a few other projects I am working on, I decided to design a new SDR with a wide-bandwidth front end and a high-resolution, high-speed ADC to allow for the reception of just these types of signals.

I plan to detail the design and construction of this new ADC board (along with a companion high-speed DAC board) in a later post. For now, I’ll just give a few of the critical details. In the image shown below, you can see the ADC on the left. This is a 14-bit, 125 MSPS Analog Devices AD9445 ADC with LVDS outputs. The low-jitter clock can be seen just below the ADC. To the right of the ADC is a Xilinx Spartan-3E FPGA. This is a big step down from the Virtex-5 used in the previous post, but given the costs associated with the Virtex-5 (both the chip itself and the required PCB and assembly fees) I think it was well worth it. In addition, the Spartan-3E can be programmed with the Xilinx ISE WebPACK without an additional license. As a student at the University of Minnesota, I currently have access to a full version of the ISE Design Suite, but at some point in the future I may not have this luxury, so this was a big consideration for me. Below the Spartan-3E are the JTAG programming port, some LEDs, and a few inputs for clocks and triggers. Along the far right is the same USB module used in the 3D magnet localization project, which allows me to quickly stream the data back to the computer.

Software Defined Radio SDR ADC Board

The analog front end on this board consists of only a single RF transformer. This allows the passage of signals with frequency content well into the hundreds of MHz, and I’m able to use the ADC in an under-sampling mode to digitize these high frequency signals. For the purposes of this project, I created an IQ demodulator that looked at a ~250 kHz swath of bandwidth and streamed the complex baseband signal back to the computer. I hooked up a small whip antenna to collect the RF transmissions from the remote and then monitored at 15 MHz (the 390 MHz signal is in the 7th Nyquist zone of the ADC and, with aliasing, appears at 15 MHz).
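To make the aliasing arithmetic explicit, the alias frequency is just the distance from the carrier to the nearest integer multiple of the sampling rate:

$$f_{\text{alias}} = \left| f_{RF} - k f_s \right|, \qquad k = \operatorname{round}\!\left(\frac{f_{RF}}{f_s}\right) = \operatorname{round}\!\left(\frac{390}{125}\right) = 3, \qquad f_{\text{alias}} = \left| 390 - 3 \times 125 \right| = 15\ \text{MHz}$$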

The plot below shows the envelope of the signal I received when I pressed the open/close button on the remote. The remote was positioned very close to the whip antenna to ensure reception.

Garage Door Remote Signal

I had been worried that perhaps the remote signal would have a very low SNR and make it difficult to decode, but that was not the case. The plot below focuses on the first burst of data.

Garage Door Remote Signal

It’s clear that the remote is simply using on-off keying (OOK) to transmit 83 bits of data. In fact, while the open/close button is depressed, the remote continuously transmits two 83-bit frames of data, separated by 100 ms. These two frames are labeled A and B in the first plot above. Closer inspection revealed that the bits are transmitted at an approximate data rate of 2000 bits per second.
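Demodulating OOK is about as simple as digital demodulation gets: threshold the envelope and sample at the center of each bit period. Below is a minimal Matlab sketch of the idea; the variable names (env for the received envelope, fs_bb for the baseband sampling rate) and the threshold choice are my own assumptions, not the code actually used.

```matlab
% assumes env (received envelope) and fs_bb (baseband sample rate) exist
bitRate    = 2000;                    % approximate bit rate from the plots
sampPerBit = fs_bb / bitRate;         % samples per bit period
thresh     = 0.5 * max(env);          % simple mid-amplitude threshold
startIdx   = find(env > thresh, 1);   % first sample of the burst
% sample at the center of each of the 83 bit periods
centers = round(startIdx + ((0:82) + 0.5) * sampPerBit);
bits    = env(centers) > thresh;      % decoded bit vector
```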

To get an idea of how the codes change with each button press, I recorded a long data set where I repeatedly pressed the open/close button. The hexadecimal representation of those received codes is shown below with each row representing the full code received for that button press.

Consecutive Remote Codes

A few things are immediately obvious when looking at the above codes. The first frame appears to use only the numbers 8, 9, and 11 for each nibble. The second frame appears to use only the numbers 2, 6, and 14 for each nibble. While the codes don’t appear to be completely random (e.g., the last byte of the first frame is always 128, and the first nibble of the second frame is always 14), to my eyes there isn’t an easily discernible pattern in this small code sample. This makes sense, since most modern garage door openers use a rolling code system that transmits a new random number with each button press. The main garage door unit and the remote are synchronized so the main unit knows which codes to expect. The main unit will look not only for the current code, but also for the next hundred or so codes, to ensure the remote and main unit do not fall out of alignment due to the remote being pressed while out of range.

I was happy with these results and the overall performance of the SDR in receiving and decoding the remote signal; however, my final goal remained using these results to create a device capable of mimicking my remote and fooling the main unit into opening and closing. The garage system’s use of the rolling code made it impossible for me to predict codes into the future. I decided the best thing (only thing) I could do would be to take the remote out of range of the main unit and record the next dozen or so codes, creating a code library that I could then replay to the main unit to open and close the door. Before doing this, however, I needed to design a transmission module that could generate an OOK data stream with a 390 MHz carrier.

Transmission Module

The graphic below shows the board I ended up making to generate the needed RF signal. The signal generation starts with a voltage controlled oscillator (VCO) whose frequency can be tuned between approximately 350 MHz and 410 MHz (shown on the far left of the board with a potentiometer to control the frequency). The output of the VCO is fed to an RF switch whose output is connected to a whip antenna. The digital logic input of the RF switch is used to generate the on-off keying. In addition to the whip antenna output, I also included an SMA output to help with the debugging.

OOK Transmission Module

The first thing I needed to do with this board was set the potentiometer controlling the frequency to the right voltage to ensure the carrier was at 390 MHz. I used the debug output on the far right of the board to connect the VCO directly to my ADC board. I measured both the control voltage and the VCO frequency as recorded by the ADC to generate the plot shown below. This data was collected at the full sampling rate for a bandwidth of 62.5 MHz.

Voltage Controlled Oscillator VCO

This plot demonstrates the aliasing present in under-sampling. What was in reality a monotonic increase in frequency appears to first decrease and then increase. Since I know what is happening, I can correct for this aliasing and generate the correct frequency vs. voltage plot, shown below.

Voltage Controlled Oscillator VCO

We can see the VCO has a sensitivity of approximately 14 MHz/V, meaning the potentiometer should be set to give a control voltage of about 3.5 volts.

One VCO-related thing I found interesting was its sensitivity to temperature. I ran a quick experiment where I monitored the VCO frequency as I placed my finger on the case. The spectrogram below shows how quickly the frequency changes with just that small thermal influence.

Frequency Shift

The final step in the transmission module was generating the logic signal to control the RF switch. I felt the easiest way to create this signal would be with an FPGA. I essentially needed an SPI port that could handle a very large data word, and the flexibility of an FPGA made this a breeze. The only issue I ran into was that the data rate of the remote is slightly slower than the 2k baud I initially thought. Upon closer inspection, the bit periods were about 504 µs and the time between frames was more like 100.4 ms. With this small correction I was able to accurately mimic the passband signal generated by the garage remote. The data below shows the remote data signal overlaid with my synthesized signal, as recorded by the SDR board.

Remote Code vs. FPGA Code

The above graphic shows the first frame, with the remote signal in blue and the synthesized signal in red. Over the roughly 40 ms frame, they stay nicely aligned. The graphic below shows the second frame comparison.

Remote Code vs. FPGA Code

By the end of this frame there is a slight misalignment between the FPGA and the remote, but it does not appear to be significant.

So with all the components up and running and the library of codes built up, it was just a matter of connecting everything together in my garage and hoping for the best. I’ve posted a video of me testing out the system below. The FPGA is connected to the transmission module with a BNC cable. The red wire running off the transmission module is the whip antenna used to transmit the signal to the garage door opener above. The slide switches located along the right side of the FPGA control which code from the library is played out.

Thoughts on Garage Remote Security

Working through this project has caused me to reconsider how secure my garage is against potential intruders. There seem to be some very serious flaws with how the rolling codes are implemented. The rolling codes are meant to do two things: first, they should prevent someone from recording the transmitted signal and being able to replay that signal at will to open or close the garage door; second, they should prevent someone from using a recorded code to predict future codes. I don’t know enough about the algorithms used to create the rolling codes to have an opinion on the latter issue, but from this brief look at how the remote behaves, there appear to be some serious flaws with the former.

The original garage remotes had a series of DIP switches that could be toggled to create a unique ID number that would identify each remote to its base unit. This allowed neighbors to have the same make and model of garage door opener without interfering with each other. The downside to this implementation is that an intruder only needs to know your DIP switch settings to craft a signal capable of opening your garage. If the intruder didn’t have physical access to the remote, he or she could just record the remote transmission and replay it to gain access. To counter this weakness, rolling codes were introduced. With rolling codes, by the time you record the transmission, the code is already out of date and useless. Or at least it should be…

Consider how this remote behaves when the open/close button is pressed. As long as the button is depressed, both code frames are transmitted continuously, resulting in multiple transmissions of the current code for even brief button presses. Now imagine you’re an intruder with the capability to record and transmit codes in real-time (a system that could easily be constructed for less than $300…) and you want to break into my garage. You could wait for me to come home and activate my garage door opener. The moment I press the button, my remote begins transmitting the first frame of the current code (let’s call this code number 100). Your device detects the transmission at the same moment my main unit does, and you both begin decoding the frame. After 50 ms, both your device and the main unit have decoded and stored the first frame of code 100. Now, instead of just passively waiting for the second frame, your device begins actively transmitting random bits. Your device continues this transmission for the next 100 ms, thereby disrupting the ability of the main unit to receive the second frame and open the garage door. If the remote only transmitted a code one time, this would be the end of the story: the intruder would have managed to record frame 1 of code 100 and disrupted the proper operation of the overall system, but the security of the garage wouldn’t have been compromised. The intruder would just be able to create a nuisance and force me to get out and manually activate my garage door. However, this isn’t the case.

After the intruder disrupts the second frame of code 100, the remote will retransmit the entire code. On the remote’s second transmission of code 100, the intruder could reverse what he did previously: disrupt frame 1 while recording frame 2. The intruder now has the entirety of code 100, but the main garage door unit has been prevented from receiving the correct code. The intruder’s device can continue to transmit random bits as long as it detects the remote actively transmitting code 100.

As the person pressing the remote button, all I would notice is that the garage door did not respond to my remote. This happens to me all the time, so I would not think too much of it; I would simply press the remote button again. Pressing the remote again would cause the transmission of the next code, code 101. And here is where the intruder would be able to cover his tracks. The intruder’s system could go through the exact same steps as before, recording frame 1, then frame 2, of code 101. At the end of the transmissions, the intruder would have both codes 100 and 101, and the main unit would still be waiting for code 100. Instead of just staying silent after the transmission of code 101, the intruder could immediately play out code 100, causing the garage door to open. I would think nothing of this whole experience and continue on as if nothing had happened. The problem is that now the intruder’s system knows code 101 and the main garage unit is expecting code 101.

I can’t say I’m going to lose any sleep over the prospect of someone spoofing my garage door opener, but I’m also not going to store something valuable in my garage and expect the rolling code security to protect it.

Moving forward, I would like to spend some time looking into the algorithms used to generate the rolling codes to see how secure they are and whether they have any vulnerabilities. I would imagine that encryption for garage doors has its own set of unique constraints, if only because of the long time the system will be in use. Most people have the same garage door opener for many years, possibly decades. So even a keyspace that took a brute-force attack three years to search would be too weak: once the code was cracked, the opener would most likely still be in use and vulnerable.


Kalman Filtering

As a gift this last Christmas, I received a Canon PowerShot SX230HS. This camera has the ability to shoot high-speed video at 120 and 240 frames per second. I knew immediately that I wanted to use this feature to do some physics, tracking, and signal processing experiments. There is nothing that incorporates these three things more than a Kalman filter. This post will detail some of the basic Kalman filtering experiments this camera has allowed me to explore.

The first experiment tracks an object in free fall and uses a Kalman filter to estimate the position, velocity, and acceleration due to gravity. The second experiment looks at a harmonic oscillator consisting of a weight suspended on an elastic band and uses an extended Kalman filter to estimate the damping ratio, the natural frequency, and the band’s spring constant. For those who are interested, I’ll post all the data and Matlab M-files for the filtering at the end of the post.

Camera Calibration

Before diving into the experiments, I thought it would be worthwhile to double-check the manufacturer’s claim on the frame rate of the high-speed video modes. The manual states that the camera can operate at both 120 FPS and 240 FPS; however, it gives no tolerance values on these rates. To check the true high-speed frame rate, I created a simple system that used the FPGA from the previous post to turn on a series of LEDs for 1 millisecond once every second. I then filmed these lights in the high-speed mode and used them as a 1 second time-stamp.

In the picture below you can see the images of the FPGA immediately before and during the LED illumination. To identify the frame which contained the illuminated LEDs, I integrated the total energy in the white box, shown below, for each frame.
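The per-frame energy measurement is simple to do in Matlab. Below is a minimal sketch of the approach; the file name and the box coordinates are placeholders, not the values actually used.

```matlab
% Integrate pixel energy inside a fixed box for every frame.
v = VideoReader('highspeed_240fps.avi');    % placeholder file name
rows = 100:140;  cols = 200:260;            % placeholder box around the LEDs
energy = [];
k = 1;
while hasFrame(v)
    frame = mean(double(readFrame(v)), 3);  % collapse RGB to intensity
    energy(k) = sum(sum(frame(rows, cols).^2));
    k = k + 1;
end
```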

The graphic below shows the integrated energy from frame to frame while shooting at 240 FPS. The LEDs are able to provide a very high SNR time mark that allows for easy identification. It’s interesting to note that every fifth second, there appears to be a slight dip in the total energy. I believe that in this case, the 1 ms illumination is actually straddling two frames, so the energy is spread compared to the other cases.

To help visually track the actual frame rate, I reshaped the 1D plot above into a 2D image with columns of length 240. If the frame rate were truly 240 FPS, the 2D image would show a single horizontal bar across the image, indicating that 240 frames represent exactly 1 second. As you can see below, the line is not horizontal, indicating instead that the frame rate is slightly higher than 240 FPS. This image also makes it clear that the dips in energy seen above are due to the illumination spreading across two frames.

To get a better idea of the exact frame rate, I plotted the deviation from an ideal 240 FPS below. There is a clear linear trend, with a slope of approximately 0.386 frames/s. This indicates that the true frame rate of the camera, shooting at the supposed 240 FPS, is 240.386 FPS. I performed a similar check at 120 FPS and found the rate to be about 120.12 FPS.
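For reference, the deviation fit can be done in a few lines, continuing from the energy vector above (the flash-detection threshold and run-collapsing heuristic here are my own, not the original analysis):

```matlab
flashIdx = find(energy > 0.5 * max(energy));      % frames containing a flash
flashIdx = flashIdx([true, diff(flashIdx) > 2]);  % keep one index per flash
n   = numel(flashIdx);
dev = flashIdx - flashIdx(1) - (0:n-1) * 240;     % deviation from ideal 240 FPS
p   = polyfit(0:n-1, dev, 1);                     % slope ~0.386 frames/s
trueFPS = 240 + p(1);                             % ~240.386 FPS
```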

Armed with an accurate estimate of the true time between frames, I was then able to proceed with the data collection and tracking.

Falling Weight

The setup for the first experiment consisted of the camera mounted on a tripod filming a weight falling in front of a wall. The wall contained a measuring stick to allow me to make accurate measurements of the distance traveled. An image from the movie is shown below, with the weight in free fall.

The tracking was done in Matlab and consisted of very simple blob detection. The goal of this experiment was not to test computer vision algorithms but to use the high-speed camera to get tracking data, so I used a very simple algorithm to detect the weight. The displacement data could just as easily have come from me manually identifying the location of the weight from frame to frame. One thing to note: I waited a few frames after the weight left my hand to begin tracking. This ensured that there were no unmodeled forces on the weight, but it also introduced a slight initial velocity to the object.

The plot below shows the measured displacement along with a quadratic fit. As you can see, the data is quite clean, and the fitted curve is very close to what I would expect for this setup. The thing to remember about this fit is that I had to wait until all the data was collected before performing the least squares fitting; the Kalman filter allows this calculation to be done recursively with each new sample.

Kalman Filter Tracking

The first step in using the Kalman filter is identifying a model to represent the system. I chose to model the falling weight as a mass moving with constant acceleration. This model ignores the effects of air resistance, but for a fall of this short distance in still air, it seems reasonable. The continuous state-space model is shown below, with w(t) representing a random acceleration, i.e. the process noise.
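With the state vector taken as position, velocity, and acceleration, the constant-acceleration model can be written as:

$$\frac{d}{dt}\begin{bmatrix} p \\ v \\ a \end{bmatrix} = \underbrace{\begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{bmatrix}}_{F}\begin{bmatrix} p \\ v \\ a \end{bmatrix} + \underbrace{\begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}}_{G} w(t)$$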

The representation shown above is not the only way to model an object in free fall. Since the acceleration due to gravity on Earth is well known and constant, I could incorporate this into the model and reduce the state-vector to 2 dimensions. Instead, I chose to augment the state vector and allow the filter to converge to the acceleration. This convergence will provide a check on the overall performance of the filter.

For this model to be of use in the Kalman filter, it must be moved from the continuous representation into a discrete form. This requires finding the state-transition matrix that will propagate the current state to the state at the next sampling interval. This calculation is shown below.
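Because F is nilpotent (F³ = 0), the matrix exponential series terminates after three terms, giving, for a sampling interval T:

$$\Phi = e^{FT} = I + FT + \frac{(FT)^2}{2} = \begin{bmatrix} 1 & T & T^2/2 \\ 0 & 1 & T \\ 0 & 0 & 1 \end{bmatrix}$$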

In addition to the state-propagation, we must also compute the process noise covariance matrix. This requires a matrix integral, but for this 3×3 state-transition matrix, the math is not too difficult. The graphic below shows this calculation.
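Carrying the integral out with the F and G above and a white-noise intensity q gives:

$$Q = q\int_0^T e^{F\tau} G\, G^{\mathsf{T}} e^{F^{\mathsf{T}}\tau}\, d\tau = q\begin{bmatrix} T^5/20 & T^4/8 & T^3/6 \\ T^4/8 & T^3/3 & T^2/2 \\ T^3/6 & T^2/2 & T \end{bmatrix}$$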

With these matrices in hand, we can essentially plug and chug with the Kalman filter equations to process the data. The actual code to track the data with the Kalman filter is very straightforward and consists of just 9 lines of m-code (including code to save the variables). I’ve included the entire m-file at the end of the post, but I thought I would also post the core Kalman code below.

Kalman Filter Matlab Code

In the code above, A represents the state-transition matrix, D represents the measured displacement, H is the measurement matrix ([1 0 0]), and R is the measurement noise (estimated from the data). What’s not shown above is the initialization of the X and P vectors. The initial diagonal elements of the covariance matrix P give an indication of the confidence in the initial state vector X. A small value in P represents higher confidence in the initial value, while a large value represents less certainty. For this experiment I set the initial values of P quite large (compared to the initial X values). If the chosen model is a good fit for the process, the covariance matrix P should converge, indicating the system is combining the measurements with the model to reduce the uncertainty in the state estimate.
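For reference, a minimal sketch of that predict/update loop, using the variable names just described, would look like the following (the process noise matrix Q and the storage variables are my own assumed names):

```matlab
for k = 1:numel(D)
    % predict
    X = A * X;                         % a-priori state estimate
    P = A * P * A' + Q;                % a-priori covariance
    % update
    K = P * H' / (H * P * H' + R);     % Kalman gain
    X = X + K * (D(k) - H * X);        % a-posteriori state estimate
    P = (eye(3) - K * H) * P;          % a-posteriori covariance
    Xs(:,k) = X;  Ps(:,:,k) = P;  Ks(:,k) = K;   % save for the plots below
end
```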

Results

The Kalman filter code above will produce and store three pairs of variables: the state estimates along with their covariances, as well as the Kalman gain at each step. The state variables and associated variances allow us to inspect both the current estimate and the uncertainty present in that estimate. The Kalman gain allows us to look at how the filter is incorporating new measurements. Initially, we should expect the Kalman gain to be relatively high (i.e., the filter needs to integrate new information), but as time progresses, if the filter is accurately modeling the system, the Kalman gain should fall, indicating the a-priori predictions are accurately anticipating the next measurement.

The plot of the estimated displacement with time is shown below. It’s difficult to make out, but there is a blue line representing the displacement and dashed black lines above and below it representing the standard deviation of the position error. The minimal error present in this state variable throughout the entire filter duration is due to the high SNR measurement used as input to the filter. The filter balances the error from the measurement against the error from the model, but it will never produce an estimate with more error than the measurement.

The next plot shows the estimated velocity with time and begins to demonstrate the ability of the Kalman filter to reduce the error in the state estimate. I chose an initial velocity estimate of 0 m/s with a standard deviation of 5 m/s. As the filter took in more data, the estimated error quickly converged; this can be seen in how tightly the dashed lines border the estimate.

The final state variable, the acceleration, allows some independent confirmation that the filter is accurately modeling the system dynamics. Whereas the velocity plot above seems good given the convergence of the error, it is difficult to know whether the estimated velocity represents the true velocity of the weight. The acceleration, on the other hand, should converge to the well-known 9.8 m/s^2. As you can see in the plot below, this is exactly what happens.

Kalman Filter Estimate

The acceleration was initialized at 2 m/s^2 with a standard deviation of 10 m/s^2. The final estimate of the acceleration was approximately 9.82 m/s^2 with a standard deviation of approximately 0.2 m/s^2. This is in line with what we would expect for the acceleration estimate.

The final aspect of the filter operation that should be inspected is how the Kalman gain varies with time. The plot below shows the Kalman gains for the individual states vs. time.

Kalman Filter Gain

Initially the Kalman gains are relatively high, but as time progresses and the state estimate and model are better able to predict the system, the gains begin to fall.

I think it is worthwhile to look at how the Kalman velocity and acceleration estimates compare with numerical estimates of the first and second derivatives of position. I’m going to use a finite difference approximation to the derivative instead of something like a Savitzky-Golay filter, since this will offer a derivative estimate with minimal delay. A short sketch of the computation follows, and the first plot below shows the numerical derivative approximation of the velocity.
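The finite-difference estimates are just scaled first differences of the tracked displacement (D, reusing the name from above); T is the corrected frame period:

```matlab
T   = 1 / 240.386;        % corrected frame period from the calibration above
vFD = diff(D)   / T;      % first-difference velocity estimate
aFD = diff(vFD) / T;      % second-difference acceleration estimate
```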

This velocity estimate is clearly noisier than the Kalman estimate, but the overall trend is still clear and correct. It’s interesting to look at what happens between 300 and 350 ms: there is a clear disruption in the data which causes the velocity estimate to jump wildly. This disruption has minimal impact on the Kalman velocity estimate, which is to be expected since by that point the filter is relying more on the a-priori estimates, as evidenced by the low estimate error and Kalman gain.

The plot below shows the numerical derivative estimate of the acceleration. This estimate is wildly inaccurate and serves as a nice comparison to the much more accurate Kalman estimate.

I’ve created a small video incorporating the Kalman filtering results above and also showing the high-speed video used to perform the tracking. In doing all the data processing to get these results, I sort of lost track of how quickly everything happens. It is one thing to see a plot with a duration of 400 ms, but it’s another to watch the convergence in real-time or even slowed down 6x as in the video.



The video shows the weight falling three times. The first time shows the raw video slowed down 6 times. The second repetition overlays a green dot on the tracked position of the weight. The final repetition shows the online convergence of the acceleration state parameter.

Extended Kalman Filter

After fooling around with the Kalman filter, I wanted to explore the extended Kalman filter (EKF) a little. An EKF is an extension of the Kalman filter that can handle non-linear system models. A common use of the EKF is for model parameter identification. For any model of relative complexity there are usually a variety of constants that must be known (e.g. mass of an object, coefficient of friction, temperature of surroundings, etc…). When these are known constants they can be used to populate the state-transition matrix resulting in a linear update equation. If, however, these parameters are not known, the state estimate vector can be augmented with the unknown parameters. This allows the filter to attempt to estimate the parameters, but it introduces non-linearity into the system since now the state-transition matrix is dependent on the state. It’s this idea of using the EKF to identify model parameters that I wanted to explore.

The specific model I decided to use was a harmonic oscillator, specifically a weight bouncing on an elastic string. The rationale behind this was twofold. One, there aren’t many systems that have such a nice physical model and can be implemented as easily as a harmonic oscillator. Two, one of my lab mates, Dalong Liu, published a great paper on using an EKF to estimate tissue viscoelastic parameters by modeling a tissue construct as a forced harmonic oscillator excited by an acoustic radiation force pulse. This work, along with more recent work done by Mehul Soman, has given me some familiarity with using EKFs for parameter identification.

The general setup of this experiment is very similar to the previous setup. A camera was mounted to film a weight bouncing on an elastic string in front of the same wall as the previous experiment. To begin the oscillations, I pulled the weight down from its natural resting location and released it. I attempted to pull the weight straight down with as little side-to-side movement as possible. The tracking used the same blob detection as before, but only measured the vertical displacement. I made no attempt to record or model the side-to-side movement of the weight throughout its oscillations, instead, these deviations are incorporated into the process noise of the system. The graphic below shows a single frame from the high-speed movie.

From this video, I was able to obtain the following displacement data. Here again, the data is of relatively high quality and the motion, at least qualitatively, appears to be representative of a harmonic oscillator. It’s clear that the motion is under-damped with a period of just over a second.

The differential equation representing a harmonic oscillator is well known and shown below, with the process noise represented by w(t). Given the initial conditions of the oscillator and knowledge of the damping ratio (xi) and the natural frequency (omega), this equation describes the subsequent motion.
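In standard form:

$$\ddot{x}(t) + 2\xi\omega_n\,\dot{x}(t) + \omega_n^2\,x(t) = w(t)$$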

To help make this equation more tractable, we can reformulate it in its state-space representation. Ignoring the process noise for the moment, the state-space representation of this system is shown below. The first element of the state vector represents the position of the object, while the second element represents the velocity.
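Written out, that representation is:

$$\frac{d}{dt}\begin{bmatrix} x \\ \dot{x} \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ -\omega_n^2 & -2\xi\omega_n \end{bmatrix}\begin{bmatrix} x \\ \dot{x} \end{bmatrix}$$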

Just like the Kalman filter, the state-space equation here represents the continuous-time model. To be of use in the EKF, we must convert this formulation to a discrete model. If we know the natural frequency, the damping ratio, and the sampling interval, then we can quickly identify the state-transition matrix as we did above. What I didn’t mention previously is that identifying this state-transition matrix is a computationally intensive process. For the Kalman filter this was not much of an issue, since we only needed to compute the matrix once and could do it offline; with the EKF, things are not that simple. Once we augment the state vector with the damping ratio and the natural frequency, the state matrix is no longer time-invariant and will depend on the current state. This implies that if we are to use the techniques from above to compute the state-transition matrix, we will need to evaluate a matrix exponential at each prediction update. Given how expensive this operation is, it is worthwhile to look at a few alternatives.

Specifically, I looked at two: first, using a first-order Taylor series approximation of the state-transition matrix, and second, using a fourth-order Runge-Kutta (RK4) integrator to propagate the state. To test the performance of these two schemes at propagating the state vector, I tried them on an ideal harmonic oscillator with a natural frequency of 2*pi*3 rad/s and a damping ratio of 0.1, looking at their ability to accurately propagate the state across a 100 ms interval with various time steps. A minimal sketch of the two schemes is shown immediately below, followed by the phase-plane results. The phase plane for this system shows the integration results from the Taylor series approximation in red and the Runge-Kutta results in black. The red dots represent the starting and ending points on the true phase plane trajectory. The results here come from breaking the 100 ms interval into four 25 ms steps.
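Both steps operate on the augmented state s = [x; v; xi; omega_n]; the function handle, step structure, and initial position below are my own illustration of the standard definitions, not the original test code.

```matlab
% State derivative for the augmented oscillator state s = [x; v; xi; w_n]
f = @(s) [s(2); -s(4)^2*s(1) - 2*s(3)*s(4)*s(2); 0; 0];

s0 = [0.05; 0; 0.1; 2*pi*3];   % test state (initial position is assumed)
dt = 0.100;  n = 4;  h = dt/n; % 100 ms broken into four 25 ms steps
sT = s0;  sR = s0;
for i = 1:n
    sT = sT + h * f(sT);       % first-order Taylor (I + F*h) step
    k1 = f(sR);                % classical RK4 step
    k2 = f(sR + h/2 * k1);
    k3 = f(sR + h/2 * k2);
    k4 = f(sR + h   * k3);
    sR = sR + h/6 * (k1 + 2*k2 + 2*k3 + k4);
end
```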

Harmonic Oscillator Phase Plane

The RK4 method is clearly superior to the first-order Taylor series approximation, but it also requires more operations. Perhaps, if I decreased the integration step size, I could achieve similar performance with the Taylor series approximation, lightening the computational burden? To look at this, I measured the norm of the integration error for both methods with various integration step sizes. The log-compressed results are shown below.

The error in the RK4 method very quickly drops to what is functionally equivalent to no error. The Taylor series approximation does not drop off nearly as quickly. In fact, to reach the same level of error present in the RK4 method with only 3 integration steps across the 100 ms, the Taylor series approximation requires just over 600 integration steps. These results justify the additional computation required with the RK4 method.

The other computation required during the prediction step is the covariance update. Unlike the Kalman filter, where the state-transition matrix was used to update the covariance, the EKF uses the Jacobian of the derivative function f. This is relatively straightforward to calculate and is shown below.
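For the augmented state [x, v, ξ, ω_n], differentiating f gives:

$$F = \frac{\partial f}{\partial s} = \begin{bmatrix} 0 & 1 & 0 & 0 \\ -\omega_n^2 & -2\xi\omega_n & -2\omega_n v & -2\omega_n x - 2\xi v \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}$$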

With the state update method chosen and the matrices needed for the covariance update identified, we now have everything in place to process the data. The only remaining step is to choose the initial state vector and the initial covariance estimates. I chose the initial position estimate based on the first analyzed video frame, and for the velocity I simply guessed a reasonable value (60 cm/s). For both the position and velocity, I chose very large initial covariance values, 10,000 and 1,000,000 respectively. I also tried to choose reasonable values to initialize the damping ratio and the natural frequency, 0.1 and 3 rad/s, with large covariance estimates as well, 100 and 1,000.
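In code, that initialization would look something like this (the position variable D and its units are assumptions):

```matlab
% Initial augmented state: [position; velocity; damping ratio; natural freq]
X = [D(1); 60; 0.1; 3];           % position from the first frame, v in cm/s
P = diag([1e4, 1e6, 1e2, 1e3]);   % large initial covariances
```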

The plot below shows the estimated displacement overlaid on the measured displacement. As was the case with the Kalman filter above, this estimate was very much in line with the measured displacement.

Extended Kalman Filter

The high-speed video had a relatively low resolution, introducing quantization effects into the measured data. The plot below shows a close-up of the measured data and the estimated data. You can clearly see the quantization present in the measured data, but the estimated data does not suffer from these effects.

I’ll skip the velocity state variable and move directly to the damping ratio and the natural frequency. Both of these variables showed nice, fast convergence. The plot of the damping ratio below shows that within half a second, the covariance associated with this state has already converged to a relatively low value. The steady-state value of the damping ratio also makes intuitive sense: the displacement plot clearly shows an under-damped oscillator, and a damping ratio of ~0.07 is very much under-damped.

The next plot shows the estimate of the natural frequency. For reference, the initial value of the natural frequency was 3 rad/s with a standard deviation of just over 30 rad/s. Within 1 second the standard deviation of this estimate has decreased to around 1 rad/s, and it continues to decrease throughout the observation period.

It’s actually possible to independently check the validity of the estimated natural frequency. The natural frequency of a harmonic oscillator consisting of a weight on a spring is given by the square root of the spring constant over the mass of the weight. I measured the mass of the weight used in these experiments as approximately 138 grams. Measuring the spring constant took a little improvisation: I used quarters as a standard weight to measure the displacement due to two different forces. I first measured the displacement with no quarters attached to the weight; I then taped 7 quarters to the weight to increase the mass by 39.7 grams and measured the displacement a second time. Under this load, the measured displacement was 15.2 cm. This gives a spring constant of about 2.5 N/m and a natural frequency of about 4.3 rad/s. This is very close to the estimated frequency of 4.37 rad/s.
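Spelling out that arithmetic:

$$k = \frac{\Delta m\, g}{\Delta x} = \frac{0.0397\ \text{kg} \times 9.8\ \text{m/s}^2}{0.152\ \text{m}} \approx 2.5\ \text{N/m}, \qquad \omega_n = \sqrt{\frac{k}{m}} = \sqrt{\frac{2.5}{0.138}} \approx 4.3\ \text{rad/s}$$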

The video below shows the parameter convergence along with the high-speed video at 6 times slower than real-time.

Moving forward, I’d like to use this same data to test out some more advanced Bayesian filters such as the unscented Kalman filter or particle filters. It would be interesting to compare various aspects of these filters (e.g. overall convergence time, stability, sensitivity to the initial guess, etc.). I’m also looking to collect additional data on some well-behaved dynamic motion. Right now I’m thinking I’ll try to collect some data of a free-spinning bicycle tire or some sort of projectile (maybe a Nerf bullet or a marble launched by a rubber band catapult). I’d like to use these filters to help predict future states as well. For instance, with the bicycle tire, could I predict at what point in the future the tire will stop spinning, or for the Nerf bullet, could I predict where the bullet would land? One really interesting idea would be to use a two-camera setup to observe a dart in motion and attempt to predict where on the board the dart would land.

For those that are interested, I’ve uploaded the Matlab code for the Kalman filter and the Extended Kalman filter along with the measured data, you can find the files here.