Exciting change is on the way! Please join us at nsf.gov for the latest news on NSF-funded research. While the NSF Science360 page and daily newsletter have now been retired, there’s much happening at nsf.gov. You’ll find current research news on the homepage and much more to explore throughout the site. Best of all, we’ve begun to build a brand-new website that will bring together news, social media, multimedia and more in a way that offers visitors a rich, rewarding, user-friendly experience.

Want to continue receiving email updates on the latest NSF research news and multimedia content? On September 23rd, we'll begin sending those updates via GovDelivery. If you'd prefer not to receive them, please unsubscribe from Science360 News now, and your email address will not be moved into the new system.

Thanks so much for being part of the NSF Science360 News Service community. We hope you’ll stay with us during this transition so that we can continue to share the many ways NSF-funded research is advancing knowledge that transforms our future.

For additional information, please contact us at NewsTravels@nsf.gov.

Top Story

Helping computers fill in the gaps between video frames

In a recent study, researchers describe an add-on module that helps artificial intelligence systems called convolutional neural networks, or CNNs, fill in the gaps between video frames, greatly improving the networks' activity recognition. The researchers' module, called the Temporal Relation Network, learns how objects in a video change over time. It does so by analyzing a few key frames that depict an activity at different stages of the video -- such as stacked objects that are then knocked down. Using the same process, it can then recognize the same type of activity in a new video. In experiments, the module outperformed existing models by a large margin in recognizing hundreds of basic activities, such as poking objects to make them fall, tossing something in the air, and giving a thumbs-up. It also more accurately predicted what will happen next in a video -- showing, for example, two hands making a small tear in a sheet of paper -- given only a small number of early frames. One possible application of the module would be to help robots better understand what's going on around them.
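To make the idea concrete, here is a minimal PyTorch sketch of a two-frame temporal relation module, based only on the description above: per-frame CNN features for a handful of sampled frames are paired in temporal order, each ordered pair is scored by a small network, and the pooled pairwise relations are classified into an activity. This is an illustrative assumption, not the researchers' implementation; the class name `TemporalRelation2Frame` and all dimensions are hypothetical.

```python
# Hypothetical sketch of a two-frame temporal relation module,
# not the researchers' code. Frame features are assumed to come
# from a per-frame CNN backbone.
import itertools
import torch
import torch.nn as nn

class TemporalRelation2Frame(nn.Module):
    """Scores pairwise temporal relations between sampled frame features."""

    def __init__(self, feat_dim, hidden_dim, num_classes):
        super().__init__()
        # g: reasons about one ordered pair of frame features
        self.g = nn.Sequential(
            nn.Linear(2 * feat_dim, hidden_dim),
            nn.ReLU(),
        )
        # h: maps the pooled relation representation to activity classes
        self.h = nn.Linear(hidden_dim, num_classes)

    def forward(self, frame_feats):
        # frame_feats: (batch, num_frames, feat_dim), in temporal order
        batch, num_frames, _ = frame_feats.shape
        relations = []
        # Keep i < j so each pair respects the order in which frames occur
        for i, j in itertools.combinations(range(num_frames), 2):
            pair = torch.cat([frame_feats[:, i], frame_feats[:, j]], dim=1)
            relations.append(self.g(pair))
        # Pool over all pairs, then classify the activity
        pooled = torch.stack(relations, dim=0).sum(dim=0)
        return self.h(pooled)
```

For example, `TemporalRelation2Frame(512, 256, 174)(torch.randn(4, 8, 512))` would score 8 sampled frames of 512-dimensional features against 174 hypothetical activity classes. The module described in the story reasons over relations at multiple time scales (pairs, triples, and so on of frames); the two-frame case above is just the simplest instance.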

Visit Website | Image credit: Courtesy of the researchers; edited by MIT News