
[Talk Cancelled] Academic Talk: Model inspection for driving

Posted: 2023-04-17

Title: Model inspection for driving

Time: 14:00, April 19, 2023

Venue: Room B404, School of Computer Science Building

Speaker: Dr. Patrick Pérez

Speaker's Nationality: France

Affiliation: Valeo (a French Fortune Global 500 company)

 

Bio: Patrick Pérez is Valeo VP of AI and Scientific Director of valeo.ai, an AI research lab focused on Valeo automotive applications, self-driving cars in particular. Before joining Valeo, Patrick Pérez was a researcher at Technicolor (2009-2018), Inria (1993-2000, 2004-2009) and Microsoft Research Cambridge (2000-2004). His research interests include multimodal scene understanding and computational imaging.

Patrick Pérez holds a master's degree from the Ecole Centrale Paris and a Ph.D. in signal processing from the University of Rennes.

Patrick Pérez has published articles in Les Echos and at the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), among other venues. He has given talks at companies and universities including Samsung, Oxford University, and the Collège de France, and he delivered a keynote, took part in a panel discussion, and served as a chair at the Czech-French National AI Symposium in Prague.

Abstract: From perception to decision, driving stacks rely heavily on trained models, which raises crucial reliability issues. Improving reliability can take many forms, most of them still at the research level. In this presentation, I will survey recent work at Valeo.ai aimed at inspecting a target model in various ways. We shall see how an auxiliary model can be learned, for instance, to predict the confidence of a recognition model's output, or to explain the decision of an end-to-end driving model. We shall also discuss the generation of counterfactual explanations for a vision-based driving model, to get insights into its reasoning and possible biases.
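
To make the auxiliary-model idea in the abstract concrete, here is a minimal sketch in PyTorch of one such technique, failure prediction: a small confidence head is trained on a frozen recognizer's features to predict whether the recognizer's output is correct. All names here (Recognizer, ConfidenceHead) are hypothetical and the data is random toy data; this illustrates the general technique, not the specific models presented in the talk.

import torch
import torch.nn as nn

class Recognizer(nn.Module):
    """Stand-in for a trained recognition model (kept frozen below)."""
    def __init__(self, in_dim=128, num_classes=10):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU())
        self.head = nn.Linear(64, num_classes)

    def forward(self, x):
        feats = self.backbone(x)
        return self.head(feats), feats

class ConfidenceHead(nn.Module):
    """Auxiliary model: maps the recognizer's features to a score in [0, 1]."""
    def __init__(self, feat_dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(feat_dim, 32), nn.ReLU(),
                                 nn.Linear(32, 1), nn.Sigmoid())

    def forward(self, feats):
        return self.net(feats).squeeze(-1)

recognizer = Recognizer().eval()   # assumed pretrained; frozen from here on
for p in recognizer.parameters():
    p.requires_grad_(False)

conf_head = ConfidenceHead()
optimizer = torch.optim.Adam(conf_head.parameters(), lr=1e-3)
criterion = nn.BCELoss()

# Toy training loop on random data; in practice (x, y) come from a labeled set.
for step in range(100):
    x = torch.randn(32, 128)
    y = torch.randint(0, 10, (32,))
    with torch.no_grad():
        logits, feats = recognizer(x)
    correct = (logits.argmax(dim=1) == y).float()  # 1 if the recognizer was right
    loss = criterion(conf_head(feats), correct)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

At test time, the head's score acts as a learned confidence estimate that can flag likely recognition failures, one concrete form of the model inspection described in the abstract.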

Host: Jifeng Xuan (玄跻峰)