Formal methods, rooted in logic and reasoning, traditionally aim to provide guarantees that systems behave correctly, thanks to verification technologies (based on concepts of model, computation, deduction, and constraint solving). They strongly contribute to ensuring the safety, security and accountability of software and hardware systems. These guarantees must be addressed in the context of current and future cyber systems, where machine learning techniques and autonomous decisions are expanding.
The talk will examine prospects for formal methods and propose challenges, with a focus on cybersecurity issues.
Synchrony, engagement and learning are key abilities for sustaining the dynamics of social interaction. In this talk, we will address these topics from an interpersonal interaction point of view. In particular, we will introduce interpersonal human-machine interaction schemes and models, focusing on the definition, sensing and evaluation of social signals and behaviors. We will show how these models are currently applied to detect engagement in multi-party human-robot interactions, to infer human personality traits, and to task learning.
Nowadays, distributed systems are more and more versatile. Computing units can join, leave or move inside a global infrastructure. These features require the implementation of dynamic systems that can cope autonomously with changes in their structure. It therefore becomes necessary to define, develop, and validate distributed algorithms able to manage such dynamics at a large scale.
Failure detection is a prerequisite to failure mitigation and a key building block for distributed algorithms requiring resilience. We introduce the problem of failure detection in asynchronous networks where the transmission delay is not known. We show how distributed failure detector oracles can be used to address fundamental problems such as consensus, k-set agreement, or mutual exclusion. Then, we focus on recent advances and open issues for taking the dynamics of the infrastructure into account.
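As a rough illustration of how such oracles are typically approximated when transmission delays are unknown, the sketch below implements a heartbeat-based detector that enlarges its per-process timeout whenever it wrongly suspects a live process. All class names and parameters are illustrative assumptions, not taken from the talk.

```python
# Minimal sketch of a heartbeat-based failure detector with adaptive timeouts
# (illustrative only; not the speaker's implementation).
import time
from collections import defaultdict

class AdaptiveFailureDetector:
    """Suspects a process when no heartbeat arrives within an adaptive timeout."""

    def __init__(self, initial_timeout=1.0, margin=0.2):
        self.timeout = defaultdict(lambda: initial_timeout)  # per-process timeout (s)
        self.last_heartbeat = {}                              # last heartbeat time per process
        self.margin = margin                                  # timeout increase after a false suspicion
        self.suspected = set()

    def on_heartbeat(self, process_id):
        """Record a heartbeat; if the sender was wrongly suspected, enlarge its timeout."""
        if process_id in self.suspected:
            self.suspected.discard(process_id)
            self.timeout[process_id] += self.margin
        self.last_heartbeat[process_id] = time.monotonic()

    def check(self):
        """Return the current set of suspected processes."""
        now = time.monotonic()
        for pid, last in self.last_heartbeat.items():
            if now - last > self.timeout[pid]:
                self.suspected.add(pid)
        return set(self.suspected)
```

Because timeouts only grow after mistakes, such a detector eventually stops suspecting correct processes once delays stabilize, which is the kind of eventual-accuracy property that makes these oracles usable for consensus despite asynchrony.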
The recent rise of modern Artificial Intelligence has been supported by large-scale operational deployments of machine learning algorithms. The dominant technology in this field today is Deep Learning, the modern name of an older technology: Artificial Neural Networks. Is there something special about these methods that makes them different from alternative machine learning or statistical techniques? What are the future evolutions of this domain? Is it only a new episode of the Neural Network saga, or is it the sign of a deeper and definitive evolution of AI? I will draw a historical perspective on the domain, introducing the main challenges, concepts and evolutions of the field. I will describe some of the recent advances and try to highlight some future challenges. This will be illustrated via several application domains in the field of semantic data analysis.
As we move into the exascale era and beyond, high performance computing systems will become increasingly resource constrained, and these constraints will apply to a growing number of different resources. To address this, we need new and more adaptive resource management approaches that can handle multi-constraint scenarios and adjust to changing conditions in the system. In the first part of the talk, I will discuss these challenges using constraints on power and energy as an example, and will show how such constraints can, in some cases, have unexpected consequences for application performance.
To solve these challenges, however, we first need to better understand the exact behavior of our systems, their bottlenecks, and the impact our workloads have on them. This requires system-wide monitoring and performance data management, from system-level measurements to application feedback, combined with matching analytics capabilities. In the second part of the talk I will discuss concepts to enable such monitoring, and how the resulting data can be used both to feed user-facing tools and to drive new resource management schemes. This is a first step towards more efficient utilization of scarce resources and can ultimately lead to new design trade-offs for future systems.
This talk is about experience implementing machine learning in a fully decentralized way on low-cost home devices, which can potentially lead to large improvements in privacy. The two-sided market of Cloud Analytics emerged almost accidentally, initially from click-through associated with users' responses to search results, and was then adopted by many other services, from web mail to social media. The business model seen by the user is that of a free service (storage and tools for photos, video, social media, etc.). The value to the provider is untrammeled access to the users' data over space and time, allowing everything from upfront income via recommenders and targeted adverts to background market research about who is interested in what information, goods and services, when and where. The value to the user is increased personalisation. This all comes at a cost: privacy (and the risk of losing reputation or even money) for the user, the expense of running highly costly data centers for the providers, and increased bandwidth and energy consumption (mobile network costs and device battery life). The attack surface of our lives expands to cover just about everything.
This talk will examine several alternative directions in which this may evolve in the future.
AV material (video, audio, text, images) forms the backbone of the BBC’s current output, recent storage and historical archives. Our R&D teams work across a number of different domains. One specific problem space involves dealing with huge amounts of data split across sites, spanning many broadcast channels and formats. Our research helps the BBC create, capture, store, and analyse multimedia content.
Software is at the heart of our digital society and embodies a growing part of our scientific, technical and organisational knowledge, to the point that we can say it is now part of our cultural heritage. The Software Heritage project's stated mission is to ensure that this precious body of knowledge will be preserved over time and made available to all.