Schedule

Thanks to the sponsorship of MIT's Center for Brains, Minds and Machines (CBMM), the SVRHM workshop will be streamed live over encrypted Zoom, allowing attendees to interact with the speakers through live Q&A. All talks will be recorded and later uploaded to the workshop's and/or CBMM's YouTube channel.

Link to all the recorded talks: https://cbmm.mit.edu/knowledge-transfer/workshops-conferences-symposia/svrhm-2020

For updates about the workshop, please register here: https://forms.gle/uSu6ox7By7437i8c7

All times in the following schedule are in EST (Eastern Standard Time) for Saturday, December 12th, 2020.

10.45 - 11.00: Opening Remarks

Session 1: [Europe & Asia]

  • 11.00 - 11.30: Martin Hebart (Max Planck Institute for Human Cognitive and Brain Sciences) | "THINGS: A large-scale global initiative to study the cognitive, computational, and neural mechanisms of object recognition in biological and artificial intelligence"

  • 11.30 - 12.00: David Mayo (Massachusetts Institute of Technology) | "Characterizing models of visual intelligence"

  • 12.00 - 12.30: Tim Kietzmann (Donders Institute for Brain, Cognition and Behaviour) | "It's about time. Modelling human visual inference with deep recurrent neural networks."

  • 12.30 - 13.00: S.P. Arun (Indian Institute of Science) | "Do deep networks see the way we do? Qualitative and quantitative differences"

  • 13.00 - 13.15: Robert Geirhos (University of Tübingen & International Max Planck Research School for Intelligent Systems) | "On the surprising similarities between supervised and self-supervised models" | [Invited Oral from Submitted Papers]

  • 13.15 - 13.30: Aviv Netanyahu (Massachusetts Institute of Technology) | "PHASE: PHysically-grounded Abstract Social Events for Machine Social Perception" | [Invited Oral from Submitted Papers] | * Facebook Reality Labs Best Paper Award for Breakthrough in Biologically-Driven Generative Models *

13.30 - 14.30: Poster Session 1: Sponsored by MIT Quest for Intelligence

Session 2: [East Coast]

  • 14.30 - 15.00: Grace Lindsay (University College London) | "Modeling the influence of feedback in the visual system"

  • 15.00 - 15.30: Leyla Isik (Johns Hopkins University) | "Social visual representations in humans and machines"

  • 15.30 - 16.00: Carlos Ponce (Washington University in St. Louis) | "As simple as possible, but not simpler: features of the neural code for visual recognition"

  • 16.00 - 16.30: Aude Oliva (Massachusetts Institute of Technology) | "Resolving Human Brain Responses in Space and Time"

  • 16.30 - 16.45: Salman Khan (University of Waterloo) | "Task-Driven Learning of Contour Integration Responses in a V1 Model" | [Invited Oral from Submitted Papers]

  • 16.45 - 17.00: Melanie Sclar (University of Buenos Aires) | "Modeling human visual search: A combined Bayesian searcher and saliency map approach for eye movement guidance in natural scenes" | [Invited Oral from Submitted Papers] | * NVIDIA Diversity in AI Best Paper Award *

17.00 - 18.00: Poster Session 2: Sponsored by MIT Quest for Intelligence

Session 3: [West Coast]

  • 18.00 - 18.30: Bria Long (Stanford University) | "Parallel developmental changes in children's drawing and recognition of visual concepts."

  • 18.30 - 19.00: Gamaleldin Elsayed (Google Brain) | "Adversarial examples for humans"

  • 19.00 - 19.30: Miguel Eckstein (University of California, Santa Barbara) | "Visual Search: Differences between your Brain and Deep Neural Networks"

  • 19.30 - 20.00: Alyosha Efros (University of California, Berkeley) | "Why it pays to study Psychology: Lessons from Computer Vision"

  • 20.00 - 20.10: Concluding Remarks, Diversity in AI Best Paper Award (NVIDIA Titan RTX) ceremony, and Oculus Quest Award for Breakthrough in Biologically Inspired Generative Models

Interested in having your institution sponsor an award? Contact us: svrhm2020 [at] gmail.com