Four blog posts later and we’ve arrived at the last nostalgia blog post in the series. Ready for the story about the air hockey table that grew a Lynxmotion robot arm and learned how to play? Great! In the fall of 2004, I signed up to take a Video Processing course as a follow-up to the Digital Image Processing (DIP) course I had taken with Dr. Oge Marques (Blog – Twitter). As with the DIP course, the Video Processing coursework included a term project of our choosing. I decided that it would be interesting to extend the functionality of the DIP project somehow but was not sure quite what to do. At some point during the brainstorming, I remembered a robotic air hockey table I had seen while touring a college a few years earlier. That particular table worked by using sensors embedded in the table surface to locate the puck as it moved and feeding that information to a robot arm. I decided to see if I could simplify the idea by replacing the embedded sensors with a cheap USB web camera …
This project represented a few interesting challenges, primarily including:
- How to integrate a temporal dimension into the analysis while providing real-time feedback. The video analysis would be useless if it took so long that the robot arm could not respond in time.
- How to mathematically describe movement of an air hockey puck and then use those descriptors to predict future movement.
After a bit of testing, it was determined that mounting the USB webcam at a height of 4 feet 8 inches above the table provided an ample field of view for video capture and analysis. The two images below demonstrate what the air hockey table looked like from the camera’s perspective.
With the camera and robot arm mounted to the table, it was time to develop the algorithms. The final program worked by initializing the camera and taking a reference frame of the air hockey table (seen to the left). After capturing the reference frame, the program would enter a loop and retrieve every 7th frame in the video stream. Image subtraction was used to compare the newly acquired frame against the reference frame. The resulting image would show only the air hockey puck (seen below), which was then used to determine object location. The loop used every 7th frame as a trade-off to allow the system to respond in time. Analyzing every frame provided greater accuracy, but was too computationally expensive and would not leave ample time for the robot arm to respond.
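The original program was written in MATLAB, but the core of that loop — subtract the reference frame, threshold the difference, take the centroid of what remains — is simple enough to sketch in a few lines of plain Python. The `SKIP` and `THRESHOLD` values here are illustrative assumptions, not the values from the original project (only the every-7th-frame skip is from the post).

```python
# Minimal frame-differencing sketch (the original used MATLAB).
# Frames are grayscale images represented as 2-D lists of 0-255 intensities.

SKIP = 7          # analyze every 7th frame, as in the original loop
THRESHOLD = 60    # intensity difference treated as "puck" (assumed value)

def locate_puck(reference, frame, threshold=THRESHOLD):
    """Subtract the reference frame and return the centroid of changed pixels."""
    xs, ys = [], []
    for y, (ref_row, row) in enumerate(zip(reference, frame)):
        for x, (r, p) in enumerate(zip(ref_row, row)):
            if abs(p - r) > threshold:
                xs.append(x)
                ys.append(y)
    if not xs:
        return None  # puck not visible in this frame
    return (sum(xs) / len(xs), sum(ys) / len(ys))

# Toy example: empty table vs. a frame with a dark puck at column 2, row 1
reference = [[200] * 5 for _ in range(4)]
frame = [row[:] for row in reference]
frame[1][2] = 30
print(locate_puck(reference, frame))  # -> (2.0, 1.0)
```

In a real capture loop, only every `SKIP`-th frame would be passed through `locate_puck`, mirroring the trade-off described above.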
Once sufficient data was acquired, it was fed into the algorithm to calculate location and predict motion, seen below.
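The post does not give the prediction math, but with two puck centroids from successive sampled frames, the simplest model — and the one consistent with the "no wall bounces" caveat later in the post — is straight-line extrapolation. This hedged sketch assumes constant velocity between samples and extrapolates to the y-coordinate of the arm's defense line (`goal_y` is a hypothetical parameter):

```python
# Sketch of linear motion prediction (assumed approach: constant puck
# velocity between sampled frames, no wall bounces in the field of view).

def predict_x_at_goal(p_prev, p_curr, goal_y):
    """Given puck centroids from two sampled frames, extrapolate the
    straight-line path and return the x-coordinate where it crosses the
    arm's defense line at goal_y."""
    (x0, y0), (x1, y1) = p_prev, p_curr
    dy = y1 - y0
    if dy == 0:
        return None  # no motion toward the goal
    # t scales the per-sample displacement needed to reach goal_y
    t = (goal_y - y1) / dy
    if t < 0:
        return None  # puck is moving away from the goal
    return x1 + t * (x1 - x0)

# Puck moves from (10, 2) to (12, 4): it should cross y=10 at x=18
print(predict_x_at_goal((10, 2), (12, 4), goal_y=10))  # -> 18.0
```

A fuller version would reflect the predicted path off the side walls, which is exactly the case the original system did not handle.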
As previously mentioned, one of the major obstacles was the amount of time it took to process the video and perform the necessary calculations. The initial revision of the program took upwards of 3 seconds to go from image subtraction to arm movement. With a bit of tweaking and some help from MATLAB’s program profiler (seen below), that time was reduced by nearly half and allowed the system to be more responsive.
Ultimately, the system worked decently well and was capable of playing air hockey at a moderate speed, assuming that the puck did not bounce off the table walls within the camera’s field of view before reaching the arm. Within those boundaries, the arm was capable of defending the goal and returning the puck to the other end of the table.
A video of the system in action is below.