Timezone: America/Vancouver
SUN 13 JUL
2 p.m.
Expo Talk Panel (ends 3:00 PM)
Expo Talk Panel (ends 3:00 PM)
4 p.m.
Expo Talk Panel (ends 5:00 PM)
Expo Talk Panel (ends 5:00 PM)
5 p.m.
Expo Talk Panel (ends 6:00 PM)
Expo Talk Panel (ends 6:00 PM)
MON 14 JUL
9:30 a.m.
Tutorial (ends 12:00 PM)
Tutorial (ends 12:00 PM)
Tutorial (ends 12:00 PM)
1:30 p.m.
Tutorial (ends 4:00 PM)
4 p.m.
Expo Demonstration (ends 8:00 PM)
Expo Demonstration (ends 8:00 PM)
Expo Demonstration (ends 8:00 PM)
TUE 15 JUL
Orals 10:00-11:00
[10:00] Multi-agent Architecture Search via Agentic Supernet
[10:15] Training a Generally Curious Agent
[10:30] Emergent Misalignment: Narrow finetuning can produce broadly misaligned LLMs
[10:45] CollabLLM: From Passive Responders to Active Collaborators

Orals 10:00-11:00
[10:00] Position: The AI Conference Peer Review Crisis Demands Author Feedback and Reviewer Rewards
[10:15] Position: A Critical Perspective on The Value in Studying Deep Learning Phenomena
[10:30] Position: Certified Robustness Does Not (Yet) Imply Model Security
[10:45] Position: Probabilistic Modelling is Sufficient for Causal Inference

Orals 10:00-11:00
[10:00] VideoRoPE: What Makes for Good Video Rotary Position Embedding?
[10:15] ReferSplat: Referring Segmentation in 3D Gaussian Splatting
[10:30] Orthogonal Subspace Decomposition for Generalizable AI-Generated Image Detection
[10:45] VideoJAM: Joint Appearance-Motion Representations for Enhanced Motion Generation in Video Models

Orals 10:00-11:00
[10:00] Algorithm Development in Neural Networks: Insights from the Streaming Parity Task
[10:15] Learning Dynamics in Continual Pre-Training for Large Language Models
[10:30] Strategy Coopetition Explains the Emergence and Transience of Emergent In-Context Learning
[10:45] Transformative or Conservative? Conservation laws for ResNets and Transformers

Orals 10:00-11:00
[10:00] An analytic theory of creativity in convolutional diffusion models
[10:15] Layer by Layer: Uncovering Hidden Representations in Language Models
[10:30] Scaling Collapse Reveals Universal Dynamics in Compute-Optimally Trained Neural Networks
[10:45] Emergence in non-neural models: grokking modular arithmetic via average gradient outer product
Posters 11:00-1:30
Convergence of Mean-Field Langevin Stochastic Descent-Ascent for Distributional Minimax Optimization
Double-Filter: Efficient Fine-tuning of Pre-trained Vision-Language Models via Patch&Layer Filtering
GHOST: Generalizable One-Shot Federated Graph Learning with Proxy-Based Topology Knowledge Retention
PEINR: A Physics-enhanced Implicit Neural Representation for High-Fidelity Flow Field Reconstruction
Orals 3:30-4:30
[3:30] DeFoG: Discrete Flow Matching for Graph Generation
[3:45] MGD$^3$ : Mode-Guided Dataset Distillation using Diffusion Models
[4:00] Inductive Moment Matching
[4:15] Train for the Worst, Plan for the Best: Understanding Token Ordering in Masked Diffusions

Orals 3:30-4:30
[3:30] Position: Generative AI Regulation Can Learn from Social Media Regulation
[3:45] Position: Current Model Licensing Practices are Dragging Us into a Quagmire of Legal Noncompliance
[4:00] Position: AI Agents Need Authenticated Delegation
[4:15] Position: AI Safety should prioritize the Future of Work

Orals 3:30-4:30
[3:30] Controlling Underestimation Bias in Constrained Reinforcement Learning for Safe Exploration
[3:45] Temporal Difference Flows
[4:00] Network Sparsity Unlocks the Scaling Potential of Deep Reinforcement Learning
[4:15] Cross-environment Cooperation Enables Zero-shot Multi-agent Coordination

Orals 3:30-4:30
[3:30] AdaSplash: Adaptive Sparse Flash Attention
[3:45] Accelerating LLM Inference with Lossless Speculative Decoding for Heterogeneous Vocabularies
[4:00] ConceptAttention: Diffusion Transformers Learn Highly Interpretable Features
[4:15] Mixture of Lookup Experts

Orals 3:30-4:30
[3:30] Hierarchical Refinement: Optimal Transport to Infinity and Beyond
[3:45] Fully Dynamic Euclidean Bi-Chromatic Matching in Sublinear Update Time
[4:00] Flowing Datasets with Wasserstein over Wasserstein Gradient Flows
[4:15] Addressing Misspecification in Simulation-based Inference through Data-driven Calibration
Posters 4:30-7:00
A Unified Comparative Study with Generalized Conformity Scores for Multi-Output Conformal Regression
Automatically Identify and Rectify: Robust Deep Contrastive Multi-view Clustering in Noisy Scenarios
WED 16 JUL
Orals 10:00-11:00
[10:00] Roll the dice & look before you leap: Going beyond the creative limits of next-token prediction
[10:15] Can MLLMs Reason in Multimodality? EMMA: An Enhanced MultiModal ReAsoning Benchmark
[10:30] rStar-Math: Small LLMs Can Master Math Reasoning with Self-Evolved Deep Thinking
[10:45] VersaPRM: Multi-Domain Process Reward Model via Synthetic Reasoning Data

Orals 10:00-11:00
[10:00] A Generalization Theory for Zero-Shot Prediction
[10:15] Statistical Test for Feature Selection Pipelines by Selective Inference
[10:30] Learning with Expected Signatures: Theory and Applications
[10:45] Blink of an eye: a simple theory for feature localization in generative models

Orals 10:00-11:00
[10:00] Outlier Gradient Analysis: Efficiently Identifying Detrimental Training Samples for Deep Learning Models
[10:15] Foundation Model Insights and a Multi-Model Approach for Superior Fine-Grained One-shot Subset Selection
[10:30] SK-VQA: Synthetic Knowledge Generation at Scale for Training Context-Augmented Multimodal LLMs
[10:45] Improving the Scaling Laws of Synthetic Data with Deliberate Practice

Orals 10:00-11:00
[10:00] Nonlinearly Preconditioned Gradient Methods under Generalized Smoothness
[10:15] An Online Adaptive Sampling Algorithm for Stochastic Difference-of-convex Optimization with Time-varying Distributions
[10:30] Fundamental Bias in Inverting Random Sampling Matrices with Application to Sub-sampled Newton
[10:45] General framework for online-to-nonconvex conversion: Schedule-free SGD is also effective for nonconvex optimization

Orals 10:00-11:00
[10:00] One-Step Generalization Ratio Guided Optimization for Domain Generalization
[10:15] An Improved Clique-Picking Algorithm for Counting Markov Equivalent DAGs via Super Cliques Transfer
[10:30] Polynomial-Delay MAG Listing with Novel Locally Complete Orientation Rules
[10:45] Sanity Checking Causal Representation Learning on a Simple Real-World System
Posters 11:00-1:30
An Improved Clique-Picking Algorithm for Counting Markov Equivalent DAGs via Super Cliques Transfers
Enhancing Ligand Validity and Affinity in Structure-Based Drug Design with Multi-Reward Optimization
Explicit Exploration for High-Welfare Equilibria in Game-Theoretic Multiagent Reinforcement Learning
Learning State-Based Node Representations from a Class Hierarchy for Fine-Grained Open-Set Detection
MENTOR: Mixture-of-Experts Network with Task-Oriented Perturbation for Visual Reinforcement Learning
Pointwise Information Measures as Confidence Estimators in Deep Neural Networks: A Comparative Study
Orals 3:30-4:30
[3:30] Sundial: A Family of Highly Capable Time Series Foundation Models
[3:45] Beyond Matryoshka: Revisiting Sparse Coding for Adaptive Representation
[4:00] Partition First, Embed Later: Laplacian-Based Feature Partitioning for Refined Embedding and Visualization of High-Dimensional Data
[4:15] Equivalence is All: A Unified View for Self-supervised Graph Learning

Orals 3:30-4:30
[3:30] Position: AI Competitions Provide the Gold Standard for Empirical Rigor in GenAI Evaluation
[3:45] Position: Medical Large Language Model Benchmarks Should Prioritize Construct Validity
[4:00] Position: Principles of Animal Cognition to Improve LLM Evaluations
[4:15] Position: Political Neutrality in AI Is Impossible — But Here Is How to Approximate It

Orals 3:30-4:30
[3:30] On Differential Privacy for Adaptively Solving Search Problems via Sketching
[3:45] Going Deeper into Locally Differentially Private Graph Neural Networks
[4:00] Auditing $f$-differential privacy in one run
[4:15] Conformal Prediction as Bayesian Quadrature

Orals 3:30-4:30
[3:30] AffectGPT: A New Dataset, Model, and Benchmark for Emotion Understanding with Multimodal Large Language Models
[3:45] Long-Form Speech Generation with Spoken Language Models
[4:00] Learning Time-Varying Multi-Region Brain Communications via Scalable Markovian Gaussian Processes
[4:15] Learning Smooth and Expressive Interatomic Potentials for Physical Property Prediction

Orals 3:30-4:30
[3:30] Improved Regret Analysis in Gaussian Process Bandits: Optimality for Noiseless Reward, RKHS norm, and Non-Stationary Variance
[3:45] High-Dimensional Prediction for Sequential Decision Making
[4:00] Near-Optimal Decision Trees in a SPLIT Second
[4:15] Expected Variational Inequalities
Posters 4:30-7:00
CogReact: A Reinforced Framework to Model Human Cognitive Reaction Modulated by Dynamic Intervention
GS-Bias: Global-Spatial Bias Learner for Single-Image Test-Time Adaptation of Vision-Language Models
THU 17 JUL
Orals 10:00-11:00
[10:00] STAIR: Improving Safety Alignment with Introspective Reasoning
[10:15] AutoAdvExBench: Benchmarking Autonomous Exploitation of Adversarial Example Defenses
[10:30] Exploring and Mitigating Adversarial Manipulation of Voting-Based Leaderboards
[10:45] Model Immunization from a Condition Number Perspective

Orals 10:00-11:00
[10:00] DistiLLM-2: A Contrastive Approach Boosts the Distillation of LLMs
[10:15] ABKD: Pursuing a Proper Allocation of the Probability Mass in Knowledge Distillation via $\alpha$-$\beta$-Divergence
[10:30] Navigating Semantic Drift in Task-Agnostic Class-Incremental Learning
[10:45] From Weight-Based to State-Based Fine-Tuning: Further Memory Reduction on LoRA with Parallel Control

Orals 10:00-11:00
[10:00] Rényi Neural Processes
[10:15] A Unified Framework for Entropy Search and Expected Improvement in Bayesian Optimization
[10:30] Score Matching with Missing Data
[10:45] Beyond Self-Repellent Kernels: History-Driven Target Towards Efficient Nonlinear MCMC on General Graphs

Orals 10:00-11:00
[10:00] The dark side of the forces: assessing non-conservative force models for atomistic machine learning
[10:15] LLM-SRBench: A New Benchmark for Scientific Equation Discovery with Large Language Models
[10:30] Neural Discovery in Mathematics: Do Machines Dream of Colored Planes?
[10:45] Machine Learning meets Algebraic Combinatorics: A Suite of Datasets Capturing Research-level Conjecturing Ability in Pure Mathematics

Orals 10:00-11:00
[10:00] Statistical Query Hardness of Multiclass Linear Classification with Random Classification Noise
[10:15] All-Purpose Mean Estimation over R: Optimal Sub-Gaussianity with Outlier Robustness and Low Moments Performance
[10:30] A Generalization Result for Convergence in Learning-to-Optimize
[10:45] Theoretical Limitations of Ensembles in the Age of Overparameterization
Posters 11:00-1:30
From Weight-Based to State-Based Fine-Tuning: Further Memory Reduction on LoRA with Parallel Control
Gradient Descent Converges Arbitrarily Fast for Logistic Regression via Large and Adaptive Stepsizes
Mixture of Experts Provably Detect and Learn the Latent Cluster Structure in Gradient-Based Learning
Orals 3:30-4:30
[3:30] EmbodiedBench: Comprehensive Benchmarking Multi-modal Large Language Models for Vision-Driven Embodied Agents
[3:45] SWE-Lancer: Can Frontier LLMs Earn $1 Million from Real-World Freelance Software Engineering?
[4:00] CodeIO: Condensing Reasoning Patterns via Code Input-Output Prediction
[4:15] ITBench: Evaluating AI Agents across Diverse Real-World IT Automation Tasks

Orals 3:30-4:30
[3:30] Retrieval-Augmented Perception: High-resolution Image Perception Meets Visual RAG
[3:45] AutoGFM: Automated Graph Foundation Model with Adaptive Architecture Customization
[4:00] Normalizing Flows are Capable Generative Models
[4:15] In-Context Denoising with One-Layer Transformers: Connections between Attention and Associative Memory Retrieval

Orals 3:30-4:30
[3:30] Learning dynamics in linear recurrent neural networks
[3:45] LoRA Training Provably Converges to a Low-Rank Global Minimum Or It Fails Loudly (But it Probably Won't Fail)
[4:00] LoRA-One: One-Step Full Gradient Could Suffice for Fine-Tuning Large Language Models, Provably and Efficiently
[4:15] Implicit Regularization for Tubal Tensor Factorizations via Gradient Descent

Orals 3:30-4:30
[3:30] On Path to Multimodal Generalist: General-Level and General-Bench
[3:45] What Limits Virtual Agent Application? OmniBench: A Scalable Multi-Dimensional Benchmark of Essential Virtual Agent Capabilities
[4:00] How Do Large Language Monkeys Get Their Power (Laws)?
[4:15] Suitability Filter: A Statistical Framework for Classifier Evaluation in Real-World Deployment Settings

Orals 3:30-4:30
[3:30] The Value of Prediction in Identifying the Worst-Off
[3:45] Generative Social Choice: The Next Generation
[4:00] Statistical Collusion by Collectives on Learning Platforms
[4:15] Prices, Bids, Values: One ML-Powered Combinatorial Auction to Rule Them All
Posters 4:30-7:00
C-3PO: Compact Plug-and-Play Proxy Optimization to Achieve Human-like Retrieval-Augmented Generation
GradPS: Resolving Futile Neurons in Parameter Sharing Network for Multi-Agent Reinforcement Learning
Guided Zeroth-Order Methods for Stochastic Non-convex Problems with Decision-Dependent Distributions
SageAttention2: Efficient Attention with Thorough Outlier Smoothing and Per-thread INT4 Quantization
FRI 18 JUL
8:30 a.m.
Workshop (ends 6:00 PM)
Workshop (ends 6:00 PM)
SAT 19 JUL
8:30 a.m.
Workshop (ends 6:00 PM)
Workshop (ends 6:00 PM)