ICML 2024: Research Paper Reviews

Machine learning is a rapidly evolving field. To stay ahead of the curve, we actively encourage our quantitative researchers and machine learning engineers to attend conferences like ICML and engage with cutting-edge research.

In this ICML paper review series, our team shares its insights on the most interesting research presented at the conference. The reviews discuss recent advances in ML, offering a broad overview of the field and where it is heading.

Follow the links to read each set of ICML 2024 paper reviews.

Paper review #1
  • Arrows of Time for Large Language Models
  • Transformers are SSMs: Generalized Models and Efficient Algorithms Through Structured State Space Duality

Yousuf, Machine Learning Engineer

Read now
Paper review #2
  • Compute Better Spent: Replacing Dense Layers with Structured Matrices
  • Emergent Equivariance in Deep Ensembles

Danny, Machine Learning Engineer

Read now
Paper review #3
  • A Universal Class of Sharpness-Aware Minimization Algorithms
  • Rotational Equilibrium: How Weight Decay Balances Learning Across Neural Networks

Jonathan, Software Engineer

Read now
Paper review #4
  • Trained Random Forests Completely Reveal your Dataset
  • Test-of-time Award: DeCAF: A Deep Convolutional Activation Feature for Generic Visual Recognition

Evgeni, Senior Quantitative Researcher

Read now
Paper review #5
  • Stop Regressing: Training Value Functions via Classification for Scalable Deep RL
  • Physics of Language Models: Part 3.1, Knowledge Storage and Extraction

Michael, Scientific Director

Read now
Paper review #6
  • I/O Complexity of Attention, or How Optimal is Flash Attention?
  • Simple Linear Attention Language Models Balance the Recall-Throughput Tradeoff

Fabian, Senior Quantitative Researcher

Read now
Paper review #7
  • Offline Actor-Critic Reinforcement Learning Scales to Large Models
  • Information-Directed Pessimism for Offline Reinforcement Learning

Ingmar, Quantitative Researcher

Read now
Paper review #8
  • Better & Faster Large Language Models via Multi-token Prediction
  • Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations

Oliver, Quantitative Researcher

Read now
