Apple at ICML 2025: Driving Innovation in AI and ML Research
Apple has long been at the forefront of technological innovation, and advancing artificial intelligence (AI) and machine learning (ML) is central to that work. This year, Apple is excited to participate in the International Conference on Machine Learning (ICML) in Vancouver, Canada, where it will once again be an industry sponsor. As part of its commitment to fostering innovation across the broader research community, Apple shares insights and findings through publications and active engagement at top conferences like ICML.
Engaging Presentations and Cutting-Edge Research
At ICML, Apple researchers will present new research across AI and ML. Attendees will have the opportunity to learn about advances in areas such as computer vision, language models, diffusion models, and reinforcement learning. Each paper highlights an innovative approach to a pressing challenge in the field.
Improving Simulation-Based Inference
In fields like science and engineering, complex computer simulations are essential for understanding real-world phenomena. Simulation-based inference (SBI) is a particularly promising technique for inferring the parameters of such simulators, but it often struggles when the simulator is misspecified relative to reality. Apple researchers will discuss their paper, Addressing Misspecification in Simulation-based Inference through Data-driven Calibration, which introduces the robust posterior estimation (RoPE) framework. RoPE uses a small calibration set of real-world observations with ground-truth parameter measurements to counter model misspecification, providing a new way to extract well-calibrated confidence intervals over the parameters of interest.
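To make the role of a calibration set concrete, here is a minimal sketch in Python. It uses a generic split-conformal correction rather than the RoPE algorithm itself, and every function and variable name is illustrative: given a handful of real observations with measured parameters, posterior credible intervals are widened until they achieve the desired coverage.

```python
import numpy as np

def credible_interval(posterior_samples, alpha=0.05):
    """Equal-tailed credible interval from posterior samples for one observation."""
    lo = np.quantile(posterior_samples, alpha / 2)
    hi = np.quantile(posterior_samples, 1 - alpha / 2)
    return lo, hi

def calibration_margin(cal_posteriors, cal_params, alpha=0.05):
    """Additive interval correction from calibration data (split conformal,
    shown here as a stand-in for RoPE's data-driven calibration).

    cal_posteriors: list of posterior sample arrays, one per real observation
    cal_params:     ground-truth parameter measured for each observation
    """
    scores = []
    for samples, theta in zip(cal_posteriors, cal_params):
        lo, hi = credible_interval(samples, alpha)
        # Nonconformity score: how far the true parameter falls outside the interval
        scores.append(max(lo - theta, theta - hi, 0.0))
    # Finite-sample-adjusted quantile of the calibration scores
    n = len(scores)
    q = np.quantile(scores, min(1.0, np.ceil((n + 1) * (1 - alpha)) / n))
    return q  # at test time, report (lo - q, hi + q) for calibrated coverage
```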
Normalizing Flows for Image Generation
While diffusion models have dominated recent discussions of image generation, Apple researchers will present findings on normalizing flows (NFs) in their paper, Normalizing Flows are Capable Generative Models. This research demonstrates that NFs are more powerful than previously believed: the newly introduced TarFlow architecture achieves state-of-the-art results in likelihood estimation and produces samples comparable in quality and diversity to those of diffusion models.
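As background for why flows admit exact likelihood training, here is a toy sketch of the change-of-variables objective using a single element-wise affine flow. This illustrates only the general principle that TarFlow builds on (the paper's architecture is a Transformer-based autoregressive flow); all names here are illustrative.

```python
import numpy as np

def log_standard_normal(z):
    # Log-density of a standard Gaussian, summed over dimensions
    return -0.5 * (z ** 2 + np.log(2 * np.pi)).sum(axis=-1)

def flow_log_likelihood(x, mu, log_scale):
    """Exact log p(x) for the invertible map x = z * exp(log_scale) + mu."""
    z = (x - mu) * np.exp(-log_scale)        # inverse transform
    log_det = -log_scale.sum(axis=-1)        # log |det dz/dx|
    return log_standard_normal(z) + log_det  # change-of-variables formula

x = np.random.randn(4, 8)                    # batch of 4 toy "images" of dim 8
mu, log_scale = np.zeros(8), np.zeros(8)     # learnable parameters in practice
print(flow_log_likelihood(x, mu, log_scale))
```

Training maximizes this exact log-likelihood directly, which is what makes flows attractive for likelihood estimation.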
Advancing the Theoretical Understanding of Diffusion Model Composition
Composing the outputs of different pretrained models opens new avenues for creativity and innovation in generation tasks. Apple’s paper, Mechanisms of Projective Composition of Diffusion Models, examines the theoretical foundations of this idea, with a particular focus on out-of-distribution extrapolation. The researchers offer insights into when and why linear combinations of scores from different models yield desirable outcomes, laying a foundation for future explorations in generative modeling.
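Below is a rough sketch of what a linear score combination looks like in practice, assuming two hypothetical pretrained score models and a simple Langevin-style sampler. This illustrates the object of the paper’s analysis, not the paper’s method; the models, weights, and step sizes are placeholders.

```python
import numpy as np

def combined_score(model_a, model_b, x_t, t, w_a=0.5, w_b=0.5):
    """Linear combination of two models' score estimates at noise level t."""
    return w_a * model_a(x_t, t) + w_b * model_b(x_t, t)

def denoise_step(model_a, model_b, x_t, t, step_size=0.01):
    """One Langevin-style update driven by the combined score."""
    score = combined_score(model_a, model_b, x_t, t)
    noise = np.random.randn(*x_t.shape)
    return x_t + step_size * score + np.sqrt(2 * step_size) * noise

# Toy usage with stand-in "models" (real score networks would be neural nets):
model_a = lambda x, t: -x          # score of a standard Gaussian
model_b = lambda x, t: -(x - 2.0)  # score of a Gaussian centered at 2
x = np.zeros(3)
for t in range(1000):
    x = denoise_step(model_a, model_b, x, t)
print(x)  # samples settle around the composition of the two models' modes
```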
Scaling Laws for LLM Fine-Tuning
Fine-tuning large language models (LLMs) is a common challenge, particularly when target-domain data is limited. Apple researchers will highlight their findings in the paper, Scaling Laws for Forgetting During Finetuning with Pretraining Data Injection, quantifying the overfitting and forgetting incurred during fine-tuning. Their research provides practical guidance on mixing pre-training and target data, revealing that injecting as little as 1% of pre-training data can significantly mitigate forgetting while enhancing LLM performance in specific domains.
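The mixing strategy itself is simple to sketch. Below is a hypothetical batch sampler that injects a small fraction of pre-training examples into each fine-tuning batch; the dataset objects and names are placeholders, and 1% is the ratio the paper highlights.

```python
import random

def mixed_batches(finetune_data, pretrain_data, batch_size=32, mix_rate=0.01):
    """Yield batches where roughly mix_rate of examples are replayed from
    pre-training data, to mitigate forgetting during fine-tuning."""
    while True:
        batch = []
        for _ in range(batch_size):
            if random.random() < mix_rate:
                batch.append(random.choice(pretrain_data))  # replayed example
            else:
                batch.append(random.choice(finetune_data))  # target-domain example
        yield batch

# Usage with toy stand-in datasets:
gen = mixed_batches(list(range(1000)), ["pretrain_doc"] * 1000)
batch = next(gen)
```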
Pre-training Specialist Language Models
Efficiently training specialist language models poses unique challenges, especially when specialist data is scarce. In their paper, Soup-of-Experts: Pretraining Specialist Models via Parameters Averaging, Apple researchers introduce a novel architecture that maintains shared expert parameters from which specialist models can be instantiated by averaging, without extensive retraining. This approach is particularly effective when many specialized models are needed, significantly reducing computational costs.
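To illustrate the parameter-averaging idea, here is a minimal sketch in which a specialist is formed as a weighted average of expert parameter sets. The weighting scheme and names are illustrative, not the exact Soup-of-Experts recipe.

```python
import numpy as np

def soup(expert_params, weights):
    """Weighted average of expert parameter dictionaries."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()  # normalize domain weights
    keys = expert_params[0].keys()
    return {
        k: sum(w * p[k] for w, p in zip(weights, expert_params))
        for k in keys
    }

# Usage: three experts, specialist biased toward domain 0
experts = [{"w": np.random.randn(4, 4)} for _ in range(3)]
specialist = soup(experts, weights=[0.7, 0.2, 0.1])
```

Because the average is computed over existing parameters, a new specialist costs only an averaging pass rather than a training run.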
Learning Autonomous Driving via Self-Play Reinforcement Learning
Reinforcement learning has found many applications, from games to robotics, and its use in autonomous driving is particularly promising. Apple’s research paper, Robust Autonomy Emerges from Self-Play, shows how the GIGAFLOW simulator enables state-of-the-art performance in simulated autonomous driving. By training a policy against copies of itself at immense scale, the researchers achieved unprecedented robustness, with the learned policy remaining resilient and reliable across challenging simulated driving scenarios.
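Self-play is easy to illustrate in miniature: a single policy plays a symmetric game against a copy of itself and improves via a simple policy gradient. The toy below uses rock-paper-scissors and is only a conceptual illustration at toy scale, not the GIGAFLOW implementation.

```python
import numpy as np

PAYOFF = np.array([[0, -1, 1], [1, 0, -1], [-1, 1, 0]])  # row player vs. column

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

logits = np.zeros(3)
rng = np.random.default_rng(0)
for step in range(5000):
    p = softmax(logits)
    a = rng.choice(3, p=p)             # our move
    b = rng.choice(3, p=p)             # opponent: the same policy (self-play)
    reward = PAYOFF[a, b]
    grad = -p.copy()
    grad[a] += 1.0                     # gradient of log p[a] w.r.t. logits
    logits += 0.05 * reward * grad     # REINFORCE update
print(softmax(logits))                 # drifts toward the uniform equilibrium
```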
Interactive Experiences at the Apple Booth
Visitors to the Apple booth (#307) during ICML can engage with live demonstrations of Apple’s latest ML research. One key highlight will be MLX, a flexible array framework optimized for Apple silicon. Demonstrations include fine-tuning a 7B-parameter LLM on an iPhone, generating images with a large diffusion model on an iPad, and generating text on an M2 Ultra Mac Studio.
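For readers who want a feel for MLX ahead of the booth demos, here is a minimal example using MLX’s Python API (mlx.core). It shows MLX’s NumPy-like operations, lazy evaluation, and function transformations; the specific computation is arbitrary.

```python
import mlx.core as mx

# MLX is lazy: operations build a compute graph that runs (on Apple silicon's
# unified memory) only when results are needed or mx.eval is called.
a = mx.random.normal(shape=(4, 4))
b = mx.random.normal(shape=(4, 4))
c = (a @ b).sum()   # nothing has executed yet
mx.eval(c)          # materialize the result
print(c.item())

# Function transformations, e.g. automatic differentiation:
grad_fn = mx.grad(lambda x: (x ** 2).sum())
print(grad_fn(mx.array([1.0, 2.0, 3.0])))  # gradient is 2x
```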
Supporting the ML Research Community
Apple remains dedicated to supporting underrepresented groups in the ML community. This year, the company is proud to sponsor events hosted by affinity groups at ICML, including LatinX in AI and Women in Machine Learning (WiML), underscoring its commitment to diversity and inclusion.
Learn More about Apple’s ML Research at ICML 2025
As an influential gathering of researchers and innovators in AI and ML, ICML serves as a platform for sharing knowledge and cultivating new ideas. This article showcases just a selection of Apple ML researchers’ contributions to ICML 2025. For a comprehensive overview of Apple’s participation and a detailed event schedule, visit Apple’s dedicated ICML 2025 page.