ICML 2024: Research Paper Reviews

Machine learning is a rapidly evolving field. To stay ahead of the curve, we actively encourage our quantitative researchers and machine learning engineers to attend conferences like ICML, where they can engage with cutting-edge research.

In this ICML paper review series, our team share their insights on the most interesting papers presented at the conference, discussing recent advances in ML and offering an overview of where the field is heading.

Follow the links to read each set of ICML 2024 paper reviews.

Paper review #1
  • Arrows of Time for Large Language Models
  • Transformers are SSMs: Generalized Models and Efficient Algorithms Through Structured State Space Duality

Yousuf, Machine Learning Engineer

Read now
Paper review #2
  • Compute Better Spent: Replacing Dense Layers with Structured Matrices
  • Emergent Equivariance in Deep Ensembles

Danny, Machine Learning Engineer

Read now
Paper review #3
  • A Universal Class of Sharpness-Aware Minimization Algorithms
  • Rotational Equilibrium: How Weight Decay Balances Learning Across Neural Networks

Jonathan, Software Engineer

Read now
Paper review #4
  • Trained Random Forests Completely Reveal your Dataset
  • Test-of-time Award: DeCAF: A Deep Convolutional Activation Feature for Generic Visual Recognition

Evgeni, Senior Quantitative Researcher

Read now
Paper review #5
  • Stop Regressing: Training Value Functions via Classification for Scalable Deep RL
  • Physics of Language Models: Part 3.1, Knowledge Storage and Extraction

Michael, Scientific Director

Read now
Paper review #6
  • I/O Complexity of Attention, or How Optimal is Flash Attention?
  • Simple Linear Attention Language Models Balance the Recall-Throughput Tradeoff

Fabian, Senior Quantitative Researcher

Read now
Paper review #7
  • Offline Actor-Critic Reinforcement Learning Scales to Large Models
  • Information-Directed Pessimism for Offline Reinforcement Learning

Ingmar, Quantitative Researcher

Read now
Paper review #8
  • Better & Faster Large Language Models via Multi-token Prediction
  • Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations

Oliver, Quantitative Researcher

Read now

Our Networking Party

Latest News

G-Research March 2025 Grant Winners
  • 22 Apr 2025

Each month, we provide up to £2,000 in grant money to early-career researchers in quantitative disciplines. Hear from our March grant winners.

Read article
Invisible Work of OpenStack: Eventlet Migration
  • 25 Mar 2025

Hear from Jay, an Open Source Software Engineer, on tackling technical debt in OpenStack. As technology evolves, outdated code becomes inefficient and harder to maintain. Jay highlights the importance of refactoring legacy systems to keep open-source projects sustainable and future-proof.

Read article
SXSW 2025: Key takeaways from our Engineers
  • 24 Mar 2025

At G-Research, we stay at the cutting edge by prioritising learning and development. That’s why we encourage our people to attend events like SXSW, where they can engage with industry experts and explore new ideas. Hear from two Dallas-based Engineers as they share their key takeaways from SXSW 2025.

Read article

Latest Events

SIAM Conference on Financial Mathematics and Engineering
  • Quantitative Engineering
  • Quantitative Research

15 Jul 2025 - 18 Jul 2025
Hyatt Regency Miami, 400 SE 2nd St, Miami, FL 33131, United States

MPP/MPQ Career Day
  • Quantitative Engineering
  • Quantitative Research

30 Apr 2025
Max Planck Institute for Physics, Boltzmannstraße 8, 85748 Garching bei München, Germany

Imperial PhD Careers Fair
  • Quantitative Engineering
  • Quantitative Research

10 Jun 2025
Queen's Tower Rooms, Sherfield Building, South Kensington Campus, Imperial College London, London, SW7 2AZ
