NeurIPS Paper Reviews 2024 #7

7 February 2025
  • News
  • Quantitative Research

Cedric, Quantitative Researcher

In this paper review series, our team of researchers and machine learning practitioners discuss the papers they found most interesting at NeurIPS 2024.

Here, discover the perspectives of Quantitative Researcher, Cedric.

Preference Alignment with Flow Matching

Minu Kim, Yongsik Lee, Sehyeok Kang, Jihwan Oh, Song Chong, Se-Young Yun

A big trend at NeurIPS this year was generative modelling, in particular diffusion and flow matching methods.

This paper applies some of the advances in flow matching to reinforcement learning from human feedback, a form of preference alignment where the aim is to align the behaviour of a given model with human (or AI-proxy) preferences.

Whereas previous techniques in this field typically require access to the model weights (and possibly significant compute) for fine-tuning, or learn a reward model that can be prone to overfitting, Preference Flow Matching (PFM) only requires black-box access to the inference model, together with a way of determining which of two model samples is preferred for a given conditioning input; no reward model is learned.

Given these and a distribution of inputs, one can define the distributions of less preferred and of more preferred data in the sample space by comparing outputs pairwise. PFM then learns a time-dependent flow from the former to the latter. At inference time, given a sample from the base model, one simply flows it towards the more preferred distribution to obtain a better sample.
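To make this concrete, below is a minimal PyTorch sketch of the training objective and the inference-time flow. It is our illustration rather than the authors' code: samples are assumed to live in R^d, the conditioning input is omitted, and the names (VelocityField, pfm_loss, improve) are hypothetical.

```python
import torch
import torch.nn as nn

# Hypothetical sketch of the PFM recipe (our notation, not the authors' code):
# learn a velocity field v(t, y) that transports less-preferred samples to
# more-preferred ones via the standard conditional flow-matching objective.

class VelocityField(nn.Module):
    def __init__(self, dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + 1, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, dim),
        )

    def forward(self, t: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        # t has shape (batch, 1) and is fed in as an extra input feature
        return self.net(torch.cat([y, t], dim=-1))

def pfm_loss(v: VelocityField, y_less: torch.Tensor, y_more: torch.Tensor) -> torch.Tensor:
    """Flow-matching loss on (less preferred, more preferred) sample pairs."""
    t = torch.rand(y_less.shape[0], 1)
    y_t = (1 - t) * y_less + t * y_more   # linear path between the pair
    target = y_more - y_less              # the path's (constant) velocity
    return ((v(t, y_t) - target) ** 2).mean()

@torch.no_grad()
def improve(v: VelocityField, y0: torch.Tensor, steps: int = 50) -> torch.Tensor:
    """Flow a black-box model sample towards the preferred distribution (Euler)."""
    y, dt = y0.clone(), 1.0 / steps
    for i in range(steps):
        t = torch.full((y.shape[0], 1), i * dt)
        y = y + dt * v(t, y)
    return y
```

Note that only samples and pairwise preferences enter the loss; the base model itself is never modified, which is what makes the black-box setting possible.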

The authors apply their technique to several datasets, notably MNIST and IMDB (where the preference is given by the logits of a CNN or of a sentiment classifier, respectively) and to various offline reinforcement learning tasks from D4RL, demonstrating that the preference objective is attained.

They also include several theoretical results, showing that PFM indeed “narrows” the base model distribution towards the points where the preference is increasing. Finally, they note that an iterative application of PFM is possible and can be beneficial.


A Generative Model of Symmetry Transformations

James Urquhart Allingham, Bruno Kacper Mlodozeniec, Shreyas Padhy, Javier Antoran, David Krueger, Richard E. Turner, Eric Nalisnick, José Miguel Hernández-Lobato

This paper proposes the Symmetry-aware Generative Model (SGM), a method for modelling data distributions that potentially exhibit symmetries, by learning the data distribution along each orbit of a prescribed symmetry group.

More specifically, given a group acting on a space and a set of observations from that space, the SGM learns a function mapping arbitrary data points to a choice of representative for their orbit, and the distribution of the data along the orbits (as a normalizing flow on the symmetry group).

These two networks are trained using maximum likelihood. The authors also introduce an invertibility loss to account for the fact that image transformations are usually not invertible due to boundary effects and interpolation onto a discrete grid, as well as an invariance loss to ensure the choice of orbit representative is consistent over the orbit.
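As a rough illustration of how these three losses fit together, consider the following sketch for the simple case of planar rotations of images. Here rep_net (the orbit-representative network) and group_flow (a conditional normalizing flow over rotation angles with a log_prob method, e.g. in the style of the nflows library) are hypothetical stand-ins of ours, not the authors' code.

```python
import torch
import torch.nn.functional as F

# Hedged sketch of the SGM training objective for a single rotation angle.
# rep_net : image -> predicted angle of the image along its orbit (hypothetical)
# group_flow : conditional normalizing flow over angles with a .log_prob method
#              (hypothetical interface)

def rotate(x: torch.Tensor, theta: torch.Tensor) -> torch.Tensor:
    """Differentiably rotate a batch of images (B, C, H, W) by angles theta (B,)."""
    cos, sin = torch.cos(theta), torch.sin(theta)
    zeros = torch.zeros_like(theta)
    mat = torch.stack([cos, -sin, zeros, sin, cos, zeros], dim=-1).view(-1, 2, 3)
    grid = F.affine_grid(mat, x.shape, align_corners=False)
    return F.grid_sample(x, grid, align_corners=False)

def sgm_losses(rep_net, group_flow, x):
    theta = rep_net(x)                     # where x sits along its orbit
    x_hat = rotate(x, -theta)              # orbit representative of x
    # maximum likelihood of the observed pose under the flow on the group
    nll = -group_flow.log_prob(theta, context=x_hat).mean()
    # invertibility loss: undoing the canonicalisation should recover x,
    # despite boundary effects and grid interpolation
    inv_loss = F.mse_loss(rotate(x_hat, theta), x)
    # invariance loss: any other point on the orbit should map to the
    # same representative
    theta_rand = torch.rand_like(theta) * 2 * torch.pi
    x_moved = rotate(x, theta_rand)
    cons_loss = F.mse_loss(rotate(x_moved, -rep_net(x_moved)), x_hat)
    return nll + inv_loss + cons_loss
```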

This representation of the dataset allows one to inspect the data distribution along the orbit of a given element, and thereby to find symmetries (or their absence), by simply querying the distribution associated with that orbit.

As an experiment, the authors investigate the MNIST, galaxy-MNIST and dSprites datasets under affine transformations and colour rotations; their model convincingly recovers the symmetries introduced in these datasets. Their method also facilitates the creation of data augmentations which are aware of the symmetry already present in the data, and of models which are invariant to these symmetries.
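Concretely, such a symmetry-aware augmentation could amount to resampling a pose from the learned orbit distribution; the sketch below reuses the hypothetical rotate, rep_net and group_flow from the previous snippet, and additionally assumes a sample method on the flow.

```python
@torch.no_grad()
def symmetry_aware_augment(rep_net, group_flow, x):
    """Replace each image's pose with one drawn from the learned orbit
    distribution, so only transformations present in the data are applied."""
    x_hat = rotate(x, -rep_net(x))                  # canonicalise
    theta_new = group_flow.sample(context=x_hat)    # assumed sampling interface
    return rotate(x_hat, theta_new)
```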

They show on the MNIST datasets that both VAEs working at the level of the orbit representatives (as computed by the SGM) and VAEs augmented by the SGM outperform plain and classically augmented VAEs, especially in the low-data regime.


Read more paper reviews

NeurIPS 2024: Paper Review #1

Discover the perspectives of Casey, one of our Machine Learning Engineers, on the following papers:

  • Towards scalable and stable parallelization of nonlinear RNNs
  • Logarithmic math in accurate and efficient AI inference accelerators

NeurIPS 2024: Paper Review #2

Discover the perspectives of Trenton, one of our Software Engineers, on the following papers:

  • FlashAttention-3: Fast and Accurate Attention with Asynchrony and Low-precision
  • Parallelizing Linear Transformers with the Delta Rule over Sequence Length
  • RL-GPT: Integrating Reinforcement Learning and Code-as-policy

NeurIPS 2024: Paper Review #3

Discover the perspectives of Mark, one of our Senior Quantitative Researchers, on the following papers:

  • Why Transformers Need Adam: A Hessian Perspective
  • Poisson Variational Autoencoder
  • Noether’s Razor: Learning Conserved Quantities

NeurIPS 2024: Paper Review #4

Discover the perspectives of Angus, one of our Machine Learning Engineers, on the following papers:

  • einspace: Searching for Neural Architectures from Fundamental Operations
  • SequentialAttention++ for Block Sparsification: Differentiable Pruning Meets Combinatorial Optimization

NeurIPS 2024: Paper Review #5

Discover the perspectives of Dustin, one of our Scientific Directors, on the following papers:

  • QuaRot: Outlier-Free 4-Bit Inference in Rotated LLMs
  • An Image is Worth 32 Tokens for Reconstruction and Generation
  • Dimension-free deterministic equivalents and scaling laws for random feature regression

NeurIPS 2024: Paper Review #6

Discover the perspectives of Georg, one of our Quant Research Managers, on the following papers:

  • Optimal Parallelization of Boosting
  • Learning Formal Mathematics From Intrinsic Motivation
  • Learning on Large Graphs using Intersecting Communities

NeurIPS 2024: Paper Review #8

Discover the perspectives of Hugh, one of our Scientific Directors, on the following papers:

  • Better by default: Strong pre-tuned MLPs and boosted trees on tabular data
  • Drift-Resilient TabPFN: In-Context Learning Temporal Distribution Shifts on Tabular Data

NeurIPS 2024: Paper Review #9

Discover the perspectives of Andrew, one of our Quant Research Managers, on the following papers:

  • Algorithmic Capabilities of Random Transformers
  • The Road Less Scheduled
  • Time Series in the Age of Large Models

NeurIPS 2024: Paper Review #10

Discover the perspectives of Julian, one of our Quantitative Researchers, on the following papers:

  • Transformers Learn to Achieve Second-Order Convergence Rates for In-Context Linear Regression
  • Unveil Benign Overfitting for Transformer in Vision: Training Dynamics, Convergence, and Generalization
  • Amortized Planning with Large-Scale Transformers: A Case Study on Chess
