Exciting change is on the way! Please join us at nsf.gov for the latest news on NSF-funded research. While the NSF Science360 page and daily newsletter have now been retired, there’s much happening at nsf.gov. You’ll find current research news on the homepage and much more to explore throughout the site. Best of all, we’ve begun to build a brand-new website that will bring together news, social media, multimedia and more in a way that offers visitors a rich, rewarding, user-friendly experience.

Want to continue to receive email updates on the latest NSF research news and multimedia content? On September 23rd we’ll begin sending those updates via GovDelivery. If you’d prefer not to receive them, please unsubscribe now from Science360 News and your email address will not be moved into the new system.

Thanks so much for being part of the NSF Science360 News Service community. We hope you’ll stay with us during this transition so that we can continue to share the many ways NSF-funded research is advancing knowledge that transforms our future.

For additional information, please contact us at NewsTravels@nsf.gov.

Top Story

Machines that learn language more like kids do

Children learn language by observing their environment, listening to the people around them and connecting the dots between what they see and hear. Among other things, this helps children establish their language's word order, such as where subjects and verbs fall in a sentence.

In computing, learning language is the task of syntactic and semantic parsers. These systems are trained on human-annotated sentences that describe the structure and meaning behind words. Parsers are becoming increasingly important for web searches, natural-language database querying and voice-recognition systems such as Alexa and Siri. Soon, they may also be used in home robotics.

In a new paper, National Science Foundation-funded researchers describe a parser that learns through observation to more closely mimic a child's language-acquisition process, which could greatly extend its capabilities. To learn the structure of language, the parser observes captioned videos, with no other information, and associates the words with recorded objects and actions. Given a new sentence, it can then use what it has learned about the language's structure to accurately predict the sentence's meaning, without the video.
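The researchers' actual parser is far more sophisticated (it induces syntactic structure as well as word meanings), but the core idea it builds on, learning word-to-world associations from many captioned observations, can be illustrated with a toy cross-situational learning sketch. Everything below is invented for illustration: the tiny training set, the concept labels standing in for what a vision system would detect, and the 0.25 association threshold are all assumptions, not the paper's method.

```python
from collections import Counter, defaultdict

# Hypothetical training data: each item pairs a caption with the set of
# objects/actions observed in the corresponding video clip. The concept
# labels stand in for detections a vision system would supply.
TRAINING = [
    ("the woman picks up the block", {"woman", "pick_up", "block"}),
    ("the man puts down the block", {"man", "put_down", "block"}),
    ("the woman puts down the cup", {"woman", "put_down", "cup"}),
    ("the man picks up the cup", {"man", "pick_up", "cup"}),
]

def train(pairs):
    """Count word/concept co-occurrences across all captioned observations."""
    counts = defaultdict(Counter)
    for caption, concepts in pairs:
        for word in caption.split():
            for concept in concepts:
                counts[word][concept] += 1
    return counts

def predict_meaning(counts, sentence):
    """Predict a new sentence's meaning with no video input, using only
    the learned word/concept associations."""
    meaning = set()
    for word in sentence.split():
        if word not in counts:
            continue  # never observed during training
        concept, n = counts[word].most_common(1)[0]
        # Keep only distinctive associations: a function word like "the"
        # co-occurs with every concept, so no single concept dominates.
        if n / sum(counts[word].values()) > 0.25:
            meaning.add(concept)
    return meaning

counts = train(TRAINING)
# A sentence whose exact word combination never appeared in training:
print(predict_meaning(counts, "the woman picks up the cup"))
# → {'woman', 'pick_up', 'cup'}
```

The point of the toy: after enough varied scenes, "woman" has co-occurred with the woman-concept more often than with anything else, so the model can interpret a novel sentence it has never seen paired with video, which is the paper's headline capability in miniature.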

Visit Website | Image credit: MIT News