Saturday, September 6, 2008

Video talks

01:17:25 From: UserGroupsatGoogle

01:00:58 From: googletechtalks

26:59 From: pycon08

Google Tech Talks February 28, 2008
Added: 6 months ago
Views: 1,491
55:59

ABSTRACT

Treebank parsing can be seen as the search for an optimally refined grammar consistent with a coarse training treebank. We describe a method in which a minimal grammar is hierarchically refined using EM to give accurate, compact grammars. The resulting grammars are extremely compact compared to other high-performance parsers, yet the parser gives the best published accuracies on several languages, as well as the best generative parsing numbers in English. In addition, we give an associated coarse-to-fine inference scheme which vastly improves inference time with no loss in test set accuracy.
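As a rough illustration of the coarse-to-fine idea (a toy sketch, not the speaker's parser), the Python snippet below parses a short sentence with a tiny hand-written coarse PCFG and keeps only the chart spans the coarse pass finds plausible; a refined grammar with split symbols would then be restricted to those spans. The grammar, sentence, and pruning threshold are invented.

from collections import defaultdict

# Toy coarse PCFG: binary rules (parent, left, right) -> probability, plus a lexicon.
coarse_rules = {("S", "NP", "VP"): 1.0, ("NP", "DT", "NN"): 1.0, ("VP", "VB", "NP"): 1.0}
coarse_lex = {("DT", "the"): 1.0, ("NN", "dog"): 0.5, ("NN", "cat"): 0.5, ("VB", "saw"): 1.0}

def cky(words, rules, lex):
    # Best (Viterbi) inside score for every (start, end, symbol).
    n, chart = len(words), defaultdict(float)
    for i, w in enumerate(words):
        for (tag, word), p in lex.items():
            if word == w and p > chart[(i, i + 1, tag)]:
                chart[(i, i + 1, tag)] = p
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            k = i + span
            for j in range(i + 1, k):
                for (a, b, c), p in rules.items():
                    s = chart[(i, j, b)] * chart[(j, k, c)] * p
                    if s > chart[(i, k, a)]:
                        chart[(i, k, a)] = s
    return chart

words = "the dog saw the cat".split()
coarse_chart = cky(words, coarse_rules, coarse_lex)

# Keep only spans whose best coarse score clears a (made-up) pruning threshold;
# a refined pass with split symbols (NP-0, NP-1, ...) would fill only these cells,
# which is where the large speed-up with no loss in accuracy comes from.
allowed_spans = sorted({(i, k) for (i, k, sym), s in coarse_chart.items() if s > 1e-4})
print("spans kept by the coarse pass:", allowed_spans)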

Slides: http://www.eecs.berkeley.edu/~petrov/...

Speaker: Slav Petrov
Slav Petrov is a Ph.D. candidate in the Department of Computer Science at the University of California, Berkeley, where he is also a research assistant working with Dan Klein and Jitendra Malik on inducing latent structure for perception problems in vision and language.



Google Tech Talks April 17, 2008
Added: 4 months ago
Views: 7,892
49:35

ABSTRACT
Modeling human sentence-processing can help us both better understand how the brain processes language, and also help improve user interfaces. For example, our systems could compare different (computer-generated) sentences and produce ones that are easiest to understand.
I will talk about my work on evaluating theories about syntactic processing difficulty on a large eye-tracking corpus, and present a model of sentence processing which uses an incremental, fully connected parsing strategy.
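One concrete example of such a theory is surprisal, which predicts the difficulty of a word as -log P(word | preceding context). The toy sketch below computes per-word surprisal from an add-one-smoothed bigram model rather than from an incremental parser, and the corpus and sentence are invented; it is only meant to show the kind of per-word predictor that gets correlated with eye-tracking measures.

import math
from collections import Counter

corpus = "the dog saw the cat . the cat saw the dog .".split()
bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus)
vocab_size = len(unigrams)

def surprisal(prev, word):
    # Add-one smoothed bigram probability, reported in bits.
    p = (bigrams[(prev, word)] + 1) / (unigrams[prev] + vocab_size)
    return -math.log2(p)

sentence = "the dog saw the cat".split()
for prev, word in zip(["."] + sentence, sentence):
    print(f"{word:>4}: {surprisal(prev, word):.2f} bits")
# Higher surprisal is predicted to mean longer reading times; the model in the
# talk derives these probabilities from an incremental, fully connected parse.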
Speaker: Vera Demberg
Vera Demberg is a Ph.D. student in Computational Linguistics from the University of Edinburgh, Scotland. Her research focus is on building computational models of human sentence processing.
Vera obtained a Diplom (MSc) in Computational Linguistics from Stuttgart University, and an MSc in Artificial Intelligence from the University of Edinburgh. She has published papers in a number of top venues for language processing and psycholinguistic research, including ACL, EACL, CogSci and Cognition.
For her PhD research, she has been awarded the AMLaP Young Scientist Award for best platform presentation by a junior scientist. She was a finalist for the Google Europe Anita Borg Memorial Scholarship in 2007.

Short videos

08:52 From: lingosteve
02:20 From: lingosteve

Python

01:40:15 From: googletechtalks

01:06:41  From: googletechtalks

Cognitive Science

01:37:42 From: googletechtalks

Added: 8 months ago
Views: 9,573
01:02:13
ABSTRACT

Neurocomputational models provide fundamental insights towards understanding the human brain circuits for learning new associations and organizing our world into appropriate categories. In this talk I will review the information-processing functions of four interacting brain systems for learning and categorization:

(1) the basal ganglia, which incrementally adjust choice behaviors using environmental feedback about the consequences of our actions,

(2) the hippocampus, which supports learning in other brain regions through the creation of new stimulus representations (and, hence, new similarity relationships) that reflect important statistical regularities in the environment,

(3) the medial septum, which works in a feedback loop with the hippocampus, using novelty detection to alter the rate at which stimulus representations are updated through experience,

(4) the frontal lobes, which provide for selective attention and executive control of learning and memory.

The computational models to be described have been evaluated through a variety of empirical methodologies, including human functional brain imaging, studies of patients with localized brain damage due to injury or early-stage neurodegenerative diseases, behavioral genetic studies of naturally occurring individual variability, as well as comparative lesion and genetic studies with rodents. Our applications of these models to engineering and computer science include automated anomaly-detection systems for mechanical fault diagnosis on US Navy helicopters and submarines, as well as more recent contributions to the DoD's DARPA program for Biologically Inspired Cognitive Architectures (BICA).
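As a toy illustration of the kind of incremental, feedback-driven learning attributed to the basal ganglia in (1), the sketch below uses a simple delta-rule (Rescorla-Wagner-style) update to learn stimulus-category associations from trial-by-trial feedback. The features, task, and learning rate are invented, and this is not one of the speaker's models.

import random

random.seed(0)
features = ["round", "red", "large"]          # binary stimulus features
weights = {f: 0.0 for f in features}          # learned association strengths
learning_rate = 0.2

def predict(stimulus):
    return sum(weights[f] for f in features if stimulus[f])

# Toy task: the stimulus belongs to the target category (1.0) iff it is red.
for _ in range(200):
    stim = {f: random.random() < 0.5 for f in features}
    target = 1.0 if stim["red"] else 0.0
    error = target - predict(stim)            # feedback-driven prediction error
    for f in features:
        if stim[f]:
            weights[f] += learning_rate * error   # incremental adjustment
print(weights)   # the weight on "red" should dominate after training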

Speaker: Dr. Mark Gluck
Mark Gluck is a Professor of Neuroscience at Rutgers University - Newark, co-director of the Rutgers Memory Disorders Project, and publisher of the public health newsletter, Memory Loss and the Brain. He works at the interface between neuroscience, psychology, and computer science, where his research focuses on the neural bases of learning and memory, and the consequences of memory loss due to aging, trauma, and disease. He is the co-author of "Gateway to Memory: An Introduction to Neural Network Models of the Hippocampus and Memory" (MIT Press, 2001) and a forthcoming undergraduate textbook, "Learning and Memory: From Brain to Behavior." He has edited several other books and has published over 60 scientific journal articles. His awards include the Distinguished Scientific Award for Early Career Contributions from the American Psychological Society and the Young Investigator Award for Cognitive and Neural Sciences from the Office of Naval Research. In 1996, he was awarded an NSF Presidential Early Career Award for Scientists and Engineers by President Bill Clinton. For more information,



Miscellaneous

Google Tech Talks January 29, 2008 ABSTRACT IPv6 and the DNS Speaker: Suzanne Woolf (more)

[TRANSLATED] jQuery
Google Tech Talks April 3, 2008 ABSTRACT jQuery is a JavaScript library that stand (more)
Added: 5 months ago
Views: 66,703
01:00:37
June 4, 2008


Google Tech Talks June 4, 2008
Added: 3 months ago
Views: 2,535
40:12
ABSTRACT

In software engineering, aspects are concerns that cut across multiple modules. They can lead to the common problems of concern tangling and scattering: concern tangling is where software concerns are not represented independently of each other; concern scattering is where a software concern is represented in multiple remote places in a software artifact. Although aspect-oriented programming is relatively well understood, aspect-oriented modeling (i.e., the representation of aspects during requirements engineering, architecture, design) is still rather immature. Although a wide variety of approaches to aspect-oriented modeling have been suggested, there is, as yet, no common consensus on how aspect-oriented models should be captured, manipulated and reasoned about. This talk presents MATA (Modeling Aspects Using a Transformation Approach), which is a unified way of handling aspects for any well-defined modeling language. The talk will argue why MATA is necessary and highlight some of the key features of MATA. In particular, the talk will motivate the decision to base MATA on graph transformations and will describe an application of MATA to modeling security concerns.
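To make the graph-transformation idea concrete, the toy sketch below treats a base model as a small graph and an aspect as a rewrite rule that matches a pattern (here, every Activity node) and adds new elements (a logging node and an edge). This is an invented Python example, not MATA's actual notation, which operates over UML models.

base_model = {
    "nodes": {"Login": "Activity", "Checkout": "Activity", "Done": "State"},
    "edges": [("Login", "Checkout"), ("Checkout", "Done")],
}

def weave_logging_aspect(model):
    # Apply the rewrite rule: for every node matching the pattern (type Activity),
    # add a Logger node and an edge from the matched node to it.
    woven = {"nodes": dict(model["nodes"]), "edges": list(model["edges"])}
    for name, kind in model["nodes"].items():
        if kind == "Activity":                    # the rule's match pattern
            logger = "Log_" + name
            woven["nodes"][logger] = "Logger"     # elements added by the rule
            woven["edges"].append((name, logger))
    return woven

print(weave_logging_aspect(base_model))
# Because aspects are ordinary graph rewrites in this view, composing two aspects is
# just applying two rules, which is what makes reasoning about their interactions tractable.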

Speaker: Jon Whittle
Prof. Jon Whittle joined Lancaster University in August 2007 as a Professor of Software Engineering. Previously, he was an Associate Professor at George Mason University, Fairfax, VA, USA, and, prior to that, he was a researcher and contractor technical area lead at NASA Ames Research Center. In July 2007, he was awarded a highly prestigious Wolfson Merit Award from the Royal Society in the UK. Jon's research interests are in model-driven software development, formal methods, secure software development, requirements engineering and domain-specific methods for software engineering. His research has been recognized by a number of Best Paper awards, including the IEE Software Premium prize (with João Araújo). He is Chair of the Steering Committee of the International Conference on Model-Driven Engineering, Languages and Systems and has been a program committee member of this conference since 2002 (including experience track PC chair in 2006). He has served on over 30 program committees for international conferences and workshops. He is an Associate Editor of the Journal of Software and Systems Modeling. Jon has also been a guest editor of the IEEE Transactions on Software Engineering and the Journal of Software Quality, and has co-edited two special issues of the Journal of Software and Systems Modeling.


browsed googletechtalks until 300 (WINE conf 2007)
