How to exploit knowledge to make better use of data is a timely issue at the crossroads of knowledge representation and reasoning, data management, and the semantic web.
Knowledge representation is emblematic of the symbolic approach to Artificial Intelligence, based on the development of explicit logic-based models processed by generic reasoning algorithms grounded in logic. Recently, ontologies have evolved in computer science as computational artefacts that provide computer systems with a conceptual yet computational model of a particular domain of interest. Similarly to humans, computer systems can then base decisions on reasoning about domain knowledge. And humans can express their data analysis needs using terms of a shared vocabulary in their domain of interest or expertise.
In this talk, I will show how reasoning on data can help to solve, in a principled way, several problems raised by modern data-centered applications in which data may be ubiquitous, multi-form, multi-source and multi-scale. I will also show how knowledge representation formalisms and reasoning algorithms have evolved to face scalability issues and data quality challenges.
One of the biggest challenges of the digital age is to turn human knowledge, know-how and procedures into software. When the domain of expertise is close enough to computer science, software engineers can typically manage this because they can understand the full spectrum of the problems, ranging from the problem domain (how to do the right thing) to the solution space in the computer (how to do the thing right).
However, when the domain lies far from the software engineer's expertise, it is much more difficult to do the right thing. Many approaches have been developed over the years to bridge this gap. In this talk we reflect on one of them, based on the idea of using models to capture domain knowledge at the right level of abstraction, and software tools to transform these models into technical solutions. In this paradigm, the mission of software engineers becomes providing domain experts with the right tool-supported modeling languages: that is, turning domain knowledge into tools.
Long before personal computers, the Internet and smartphones, Human-Computer Interaction (HCI) was already at the heart of some of the visions that have shaped modern computing. But for years, priority was mostly put on intrinsic power and the development of features rather than on how to use them. The popularization of digital devices such as smartphones, tablets and gaming consoles slowly reversed this trend, and the argument of simplicity of use has replaced that of intrinsic power. But it has also led to a relative impoverishment of the possibilities offered by technologies that are, paradoxically, more powerful than ever. By hiding complexity rather than helping to master it, and by keeping alive the myth that such devices make it possible to do a lot without effort, the trend is now to sacrifice user empowerment for simplicity of use.
Formal methods, rooted in logic and reasoning, traditionally aim to provide guarantees that systems behave correctly, thanks to verification technologies (based on concepts of model, computation, deduction, and constraint solving). They contribute strongly to ensuring the safety, security and accountability of software and hardware systems. These guarantees must be addressed in the context of current and future cyber systems, where machine learning techniques and autonomous decisions are expanding.
Prospects in formal methods will be examined and challenges will be proposed, focusing on cybersecurity issues.
Synchrony, engagement and learning are important abilities that allow sustaining the dynamics of social interaction. In this talk, we will address these topics from an interpersonal-interaction point of view. In particular, we will introduce interpersonal human-machine interaction schemes and models, with a focus on definitions, sensing and evaluation of social signals and behaviors. We will show how these models are currently applied to detect engagement in multi-party human-robot interactions, detect humans' personality traits, and support task learning.