Blog

Filling gaps in ocean satellite data – Aida Alvera-Azcárate & Alexander Barth

Link to the slides.

The seminar is on May 6th at 14:00 and will be held remotely, in English.

Link to the zoom session: https://zoom.us/j/93683521817

Aida Alvera-Azcárate’s presentation is entitled:

« Filling gaps in ocean satellite data »

Abstract:

Satellite data offer an unequalled amount of information about the Earth’s surface, including the ocean. However, data measured in the visible and infrared wavebands are affected by the presence of clouds and therefore contain a large amount of missing data (on average, clouds cover about 75% of the Earth). The spatial and temporal scales of variability in the ocean require techniques able to handle undersampling of the dominant scales of variability. The GHER (GeoHydrodynamics and Environment Research) group of the University of Liege in Belgium has been working over the last two decades on interpolation techniques for satellite and in situ ocean data. In this talk we will focus on techniques developed for satellite data. We will start with DINEOF (Data Interpolating Empirical Orthogonal Functions), a data-driven technique using EOFs to infer missing information in satellite datasets. We will follow with a more recent development, DINCAE (Data Interpolating Convolutional AutoEncoder). Training a neural network with incomplete data is problematic; this is overcome in DINCAE by using the satellite data and their expected error variance as input. The autoencoder provides the reconstructed field along with its expected error variance as output. We will provide examples of reconstructed satellite data for several variables, such as sea surface temperature and chlorophyll concentration, as well as recent developments with DINCAE to grid altimetry data into complete fields.
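
At its core, the DINEOF reconstruction can be summarized as an iterative truncated-SVD fill. The sketch below is a minimal illustration under stated assumptions, not the actual DINEOF implementation: the function name and settings are invented, the data are assumed to be centered anomalies (space × time), and the real method chooses the number of EOFs by cross-validation and uses an efficient Lanczos solver for large matrices.

```python
import numpy as np

def dineof_fill(X, n_eofs=5, n_iter=50, tol=1e-6):
    """DINEOF-style gap filling of a space x time anomaly matrix X.

    Missing values (NaN) are initialized at the mean (zero anomaly),
    then repeatedly replaced by a truncated-EOF reconstruction until
    the update at the missing points stops changing.
    """
    missing = np.isnan(X)
    Xf = np.where(missing, 0.0, X)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(Xf, full_matrices=False)
        recon = (U[:, :n_eofs] * s[:n_eofs]) @ Vt[:n_eofs]
        delta = np.sqrt(np.mean((recon[missing] - Xf[missing]) ** 2))
        Xf[missing] = recon[missing]   # observed entries are never modified
        if delta < tol:
            break
    return Xf
```

The key design choice is that observed values are kept fixed while only the gaps are updated, so the EOFs are progressively refined as the filled values converge.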

Short bios:

Aida Alvera-Azcárate is a researcher at the GHER (GeoHydrodynamics and Environment Research) of the University of Liege in Belgium. She did a PhD in Science at the University of Liege and completed a post-doc at the University of South Florida (US) before joining the GHER in 2007, where she studies the ocean using satellite and in situ data and works on the development of interpolation techniques to reconstruct satellite data.

Alexander Barth is a researcher working at the University of Liege (Belgium) in the GHER group (GeoHydrodynamics and Environment Research). He did a PhD on nested numerical ocean models and data assimilation. Currently he is working on variational analysis schemes for climatologies and neural networks to reconstruct missing data.

Narrowing uncertainties of climate projections using data science tools? – Pierre Tandeo

The seminar is on March 26th at 10:00 and will be held remotely, in English.

The slides can be found here. We are currently uploading the video of the talk and will add a link to it as soon as it is available.

Link to the zoom session: https://zoom.us/j/94170014183

Pierre Tandeo’s presentation is entitled:

« Narrowing uncertainties of climate projections using data science tools? »

Abstract:

Climate indices show large variability in CMIP climate predictions. In this presentation, we propose to weight multi-model climate simulations to reduce the uncertainty in climate predictions and better estimate the future evolution of climate indices. The proposed methodology is based on advanced data science tools (i.e., data assimilation, analog forecasting, model evidence metrics) to accurately compute distances between current observations and simulated climate indices. This low-cost procedure is tested on a simplified climate model. The results show that the method can be applied locally and is able to identify relevant parameterizations.
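
To give a flavour of the weighting idea, here is a toy sketch. It is not the method of the talk, which relies on data assimilation, analog forecasting and model evidence metrics; a plain RMSE distance and an exponential (softmax-like) weighting stand in for those ingredients, and all names are invented.

```python
import numpy as np

def model_weights(obs, sims, scale=1.0):
    """Toy multi-model weighting: models whose simulated climate index
    stays close to the observed index receive larger weights.

    obs  : (n_times,) observed climate index
    sims : (n_models, n_times) simulated index, one row per model
    """
    rmse = np.sqrt(np.mean((sims - obs) ** 2, axis=1))  # distance per model
    w = np.exp(-rmse / scale)                           # exponential weighting
    return w / w.sum()                                  # normalize to sum to 1

# A weighted projection of a future index could then be, e.g.:
# projection = model_weights(obs, sims) @ future_sims
```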

Short bio:

Pierre Tandeo is an associate professor at IMT Atlantique (Brest, France) and an associate researcher at the Data Assimilation Research Team, RIKEN Center for Computational Science (Kobe, Japan). More information: https://tandeo.wordpress.com/.

Working group 3: Pierre Lepetit – Estimation of visibility and snow height on webcam images with a learning-to-rank approach

The internal « SCAI & AI4Climate » workshop brings together researchers, engineers, PhD students and post-docs interested in the design and use of new Artificial Intelligence methods for the study of the environment, from models to observations. The first meetings will be devoted to the work of PhD students. The talk will be followed by a discussion with the participants on the approach and the possible perspectives of the work.

March 16th at 10:00
on the Jussieu campus,
SCAI meeting room
Esclangon building, 1st floor

Join the Zoom meeting
(see connection information below)

  • The image-based estimation of meteorological parameters provides clear benefits for surface weather observation. When a local event arises, such as dense fog or settling snow, webcams and CCTV cameras are sources of valuable information. These images inform about the class of weather (sunny, rainy, foggy, snowy, etc.). They also make it possible to estimate quantitative parameters such as the horizontal visibility (the farthest distance at which one can see), the snow height, the precipitation rate, etc., with varying precision.
  • Recently, the weather classification task has been successfully addressed by deep learning approaches. However, quantitative estimation faces a strong difficulty: the existing datasets that contain both images and precise weather measurements are rare and involve only a few different outdoor scenes. It is virtually impossible for an expert to assign image-wise quantitative labels, but it is possible to compare two images from the same webcam and therefore assign pairwise labels. An “uncomparable” label is assigned to pairs for which the expert is not able to distinguish the two images with respect to the parameter.
  • This analysis is the starting point of the workshop. The discussion will deal with the methods of labeling, learning to rank and calibration that may help to exploit such comparisons and to predict ordinal or quantitative estimates of visibility and snow height. How uncomparable pairs could be used to predict an image-wise uncertainty will also be addressed (a minimal sketch of the pairwise ranking idea follows this list).
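
The following sketch illustrates the pairwise ranking idea under toy assumptions: a small CNN (a stand-in for whatever backbone is actually used) scores each image, and a margin ranking loss pushes the scores of comparable pairs apart in the right order. All names and settings are invented for illustration.

```python
import torch
import torch.nn as nn

class VisibilityScorer(nn.Module):
    """Maps an image to a scalar score; training makes the score
    monotonic in the target parameter (visibility or snow height)."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(   # stand-in for a real CNN backbone
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 1),
        )

    def forward(self, x):
        return self.backbone(x).squeeze(-1)

model = VisibilityScorer()
rank_loss = nn.MarginRankingLoss(margin=1.0)

# img_a, img_b: a batch of image pairs from the same webcam;
# y = +1 when img_a should rank above img_b, -1 otherwise.
img_a, img_b = torch.randn(8, 3, 64, 64), torch.randn(8, 3, 64, 64)
y = torch.ones(8)
loss = rank_loss(model(img_a), model(img_b), y)
loss.backward()
```

Uncomparable pairs could enter the same framework through a complementary term that penalizes large score differences on those pairs, which is one way an image-wise uncertainty could be derived.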

Join the Zoom meeting

https://zoom.us/j/98278319724

Meeting ID: 982 7831 9724

Find your local number: https://zoom.us/u/agSnuNJYM

Machine learning and natural hazards – Sophie Giffard-Roisin

Link for the slides

The seminar is on February 10th at 14:00 and will be held remotely.

Link to the zoom session: https://us02web.zoom.us/j/88657656183

Sophie Giffard-Roisin’s presentation is entitled:

« Machine learning and natural hazards »

The goal of this talk is to show how we can use the strength of artificial intelligence to help make diagnoses and find concrete, local solutions to natural hazards. Tropical cyclones, avalanches, earthquakes and landslides often affect vulnerable areas and populations, where a better understanding of the phenomena and improved risk assessment and prediction can make a substantial impact. The data available to monitor these natural phenomena have increased considerably in recent years. For example, SAR (synthetic aperture radar) imaging data, provided by the Sentinel-1 satellites, are now freely available as often as every 6 days in a majority of regions, even remote areas. Yet artificial intelligence (AI) and machine learning (ML) have only scarcely been used in these domains, even though these techniques have already shown their impact in many scientific fields with similar data structures (large volumes of data, presence of noise, complex physical phenomena), such as medical imaging (detection/segmentation of pathologies), crop yield (prediction) and security (recognition). We will see in this talk, with concrete examples, how to design machine learning models for specific tasks with real imaging or temporal data inputs. Concretely, starting mainly from convolutional neural networks, what are the key aspects to consider and what are the pitfalls to avoid?

Short bio:
Sophie Giffard-Roisin is a researcher hired by IRD (French National Institute for Sustainable Development) and based at ISTerre, Grenoble (UGA, France). Her work focuses on machine learning applications for natural hazards, especially using remote sensing and time series data. She did her PhD at Inria, Nice (France) under the supervision of Nicholas Ayache, on machine learning and modelling for medical image analysis. She then did a post-doc at CU Boulder, Colorado (USA) in Claire Monteleoni’s team, where she worked on climate and meteorological applications of machine learning. She moved to ISTerre, the Earth Science Laboratory of Grenoble Université (UGA, France), for a permanent position in 2019, where she now focuses on machine learning for natural hazards in geosciences.

2nd Working Group: Learning dynamics from partial and noisy observations with the help of Data Assimilation – Arthur Filoche

December 11th, 10:00, SCAI meeting room, Jussieu.
Esclangon building, 1st floor

Join the Zoom meeting
https://us02web.zoom.us/j/89174956656

Geosciences have long-standing experience in modeling, forecasting, and estimating complex dynamical systems such as the atmosphere or the ocean. Most of these models come from physical laws and are described by PDEs. Usually, only sparse and noisy observations of such systems are available. The first requirement to produce a forecast is to estimate the initial conditions. This is usually done via Data Assimilation (DA), a set of methods that optimally combine a dynamical model and observations, focusing on system state estimation. In the variational formalism, this is a PDE-constrained optimization problem that requires adjoint modeling to compute gradients. The field is very close to Machine Learning (ML) in the sense that both learn from data.

ML algorithms have demonstrated impressive results in spatiotemporal forecasting, but they need dense data to do so, which is rarely available in Earth sciences. Conversely, the tools provided by the deep learning community, based on automatic differentiation, are particularly suitable for variational DA, avoiding explicit adjoint modeling.
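
As a concrete illustration of this connection, here is a minimal 4D-Var-style sketch: the initial condition of a toy Lorenz-63 model is estimated from sparse, noisy observations, with automatic differentiation playing the role of the hand-coded adjoint. The model, observation pattern and optimizer are illustrative assumptions, not the setup discussed in the talk.

```python
import torch

def step(x, dt=0.01):
    """One explicit Euler step of the Lorenz-63 system (toy dynamics)."""
    s, r, b = 10.0, 28.0, 8.0 / 3.0
    dx = torch.stack((s * (x[1] - x[0]),
                      x[0] * (r - x[2]) - x[1],
                      x[0] * x[1] - b * x[2]))
    return x + dt * dx

def forecast(x0, n_steps=200):
    traj = [x0]
    for _ in range(n_steps):
        traj.append(step(traj[-1]))
    return torch.stack(traj)

# Synthetic truth, observed sparsely (every 20th step) and noisily
# (only the first coordinate is observed).
truth = forecast(torch.tensor([1.0, 1.0, 1.0]))
obs_idx = torch.arange(0, 201, 20)
obs = truth[obs_idx, 0] + 0.1 * torch.randn(len(obs_idx))

# Variational estimation of the initial condition: minimize the
# observation misfit; backward() differentiates through the model,
# which is exactly what an adjoint model would provide.
x0 = torch.tensor([0.5, 0.5, 0.5], requires_grad=True)
opt = torch.optim.Adam([x0], lr=0.05)
for _ in range(200):
    opt.zero_grad()
    cost = torch.sum((forecast(x0)[obs_idx, 0] - obs) ** 2)
    cost.backward()
    opt.step()
```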

What motivates this discussion is that physics-based models are often incomplete; machine learning can provide a learnable class of models, while data assimilation can provide dense data.

A similar talk can be found here, and an early conference paper can be found here.

Power-efficient deep learning algorithms – Sébastien Loustau

Link for the slides

The next seminar is on October 14th (14:30) on the « Campus Pierre & Marie Curie » of Sorbonne University. It will take place in the SCAI seminar room, « Esclangon » building, 1st floor.

If you wish to attend this seminar in person:

Sébastien will present his work in the SCAI seminar room (access map: https://ai4climate.lip6.fr/wp-content/uploads/2020/09/plan_SCAI_extrait.pdf)
Please register via this link: https://docs.google.com/forms/d/e/1FAIpQLSc4scBTJZnOquz2FZkQbPKAKEvacQ0BC52WKs52CzTD6amCAw/viewform?usp=sf_link
We nevertheless advise you to bring your laptop so that you can join the Zoom room at the same time (see below)


If you wish to attend remotely:

Here is the Zoom link: https://us02web.zoom.us/j/81893439500
You will also be able to ask questions in the chat; they will be relayed to the room.

Sébastien Loustau’s presentation is entitled:

« Power-efficient deep learning algorithms »

Abstract:
In this talk, I will present both theoretical and practical aspects of designing power-efficient deep learning algorithms. After a non-exhaustive survey of contributions on the machine learning side (training low bit-width networks), the hardware counterpart (CNN accelerators) and the relationship with Auto-ML and NAS procedures, I will present a theoretically grounded approach to add the power-efficiency constraint to the optimization procedure used to train deep nets. This work in progress bridges optimal transport and information theory with online learning.
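
To make the low bit-width ingredient concrete, here is a generic sketch (a standard trick from the quantized-network literature, not the speaker's method) of training binary weights with a straight-through estimator: the forward pass uses 1-bit weights, while gradients flow to the underlying real-valued weights.

```python
import torch

class BinarizeSTE(torch.autograd.Function):
    """Sign-binarization with a straight-through gradient estimator."""
    @staticmethod
    def forward(ctx, w):
        ctx.save_for_backward(w)
        return torch.sign(w)                      # 1-bit weights in the forward pass

    @staticmethod
    def backward(ctx, grad_out):
        (w,) = ctx.saved_tensors
        return grad_out * (w.abs() <= 1).float()  # pass gradient where |w| <= 1

w = torch.randn(4, 4, requires_grad=True)         # real-valued "shadow" weights
x = torch.randn(8, 4)
y = x @ BinarizeSTE.apply(w).t()                  # linear layer with binary weights
y.sum().backward()                                # gradients reach the real-valued w
```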

Short bio:
Sébastien is a researcher in mathematical statistics and machine learning. He has studied the theoretical aspects of both statistical and online learning. His research interests include online learning, unsupervised learning, adaptive algorithms and minimax theory. He also founded LumenAI five years ago.

Journal club meetings

Next meetings

Past meetings

For any question/suggestion please contact Redouane Lguensat: rlguensat at ipsl dot fr

Tenure track in Statistical learning at Ecole Polytechnique

Ecole Polytechnique is opening a tenure-track position on statistical learning and artificial intelligence for energy/climate. The description is attached. See also
https://gargantua.polytechnique.fr/siatel-web/linkto/mICYYYSdehW

This is a joint position between the Applied Math department and the Meteorology department. We are primarily interested in applicants whose research in statistical learning and artificial intelligence will contribute to addressing societal challenges in energy, sustainability and climate change (e.g. statistical learning for energy efficiency, load curve prediction, pricing mechanisms, smart grid control, load curve disaggregation, etc.). Ecole Polytechnique offers an exceptional environment, with an adapted teaching load as well as scientific, administrative and budgetary support.
To apply, follow the link
https://candidatures-calliope.polytechnique.fr/calliope-fo/recherche/index.php?lang=en
The position number is 71.