# DynCL: Self-supervised contrastive learning performs non-linear system identification

**tl;dr:**
A framework to uncover linear, switching linear and non-linear dynamics under a non-linear observation model using contrastive learning.

### News

| Date | Update |
| --- | --- |
| October '24 | Our preprint is now available on [arXiv](https://arxiv.org/abs/2410.14673)! |

### Abstract

Self-supervised learning (SSL) approaches have brought tremendous success across many tasks and domains. It has been argued that these successes can be attributed to a link between SSL and identifiable representation learning: Temporal structure and auxiliary variables ensure that latent representations are related to the true underlying generative factors of the data. Here, we deepen this connection and show that SSL can perform system identification in latent space. We propose DynCL, a framework to uncover linear, switching linear and non-linear dynamics under a non-linear observation model, give theoretical guarantees and validate them empirically.

### Overview

We compare standard contrastive learning (without a dynamics model) to DynCL with an explicit dynamics model. When noise dominates the evolution of latents, both methods perform similarly. However, when the dynamics dominate, DynCL outperforms standard contrastive learning.
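The idea above can be sketched in a few lines of numpy. This is a minimal, illustrative toy (not the paper's implementation): an encoder maps observations to latents, an explicit linear dynamics matrix `A` predicts the next latent, and an InfoNCE-style loss scores the prediction against the true next latent (positive) and random negatives. All names (`encoder`, `A`, `infonce_loss`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def encoder(x, W_enc):
    """Toy non-linear observation encoder mapping observations to latents."""
    return np.tanh(x @ W_enc)

def infonce_loss(z_pred, z_pos, z_neg):
    """InfoNCE: the dynamics-predicted latent should score higher with the
    true next latent than with negatives (index 0 is the positive)."""
    pos = z_pred @ z_pos                     # similarity to the positive
    negs = z_neg @ z_pred                    # similarities to negatives
    logits = np.concatenate([[pos], negs])
    # -log softmax probability of the positive
    return -pos + np.log(np.exp(logits).sum())

# Toy data: an observation at time t, its successor, and random negatives.
d_obs, d_lat, n_neg = 8, 4, 16
W_enc = rng.normal(size=(d_obs, d_lat))
A = np.eye(d_lat) + 0.1 * rng.normal(size=(d_lat, d_lat))  # latent dynamics

x_t = rng.normal(size=d_obs)
x_next = rng.normal(size=d_obs)
x_negs = rng.normal(size=(n_neg, d_obs))

# Explicit dynamics model: predict the next latent, then contrast.
z_pred = encoder(x_t, W_enc) @ A
loss = infonce_loss(z_pred, encoder(x_next, W_enc), encoder(x_negs, W_enc))
```

Dropping the `@ A` step recovers standard (dynamics-free) contrastive learning, which is the baseline compared against above.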

### Reference

```bibtex
@article{gozalezlaizschmidt2024dyncl,
  author  = {González Laiz, Rodrigo and Schmidt, Tobias and Schneider, Steffen},
  title   = {Self-supervised contrastive learning performs non-linear system identification},
  journal = {CoRR},
  year    = {2024},
  month   = {October},
  url     = {https://arxiv.org/abs/2410.14673}
}
```