
feat(aggregation): Add custom batched QP solver#596

Draft
PierreQuinton wants to merge 4 commits into main from new-qp-solver

Conversation

@PierreQuinton
Contributor

There is a compromise here, and it seems to be necessary. The algorithm used by qpsolvers has great precision, but it cannot be run in parallel and offers no way to trade precision for time.

This algorithm has less precision and is slower for some range of m (medium, on CPU), but it is most probably much faster for large m on GPU. We could probably improve it further by implementing it as a CUDA kernel.

If we demand improvement with no trade-off here, we will never be able to change anything: we are probably at the Pareto front, maximizing precision.

PierreQuinton and others added 3 commits February 24, 2026 10:31
Replaces the previous QP solver with a pure-PyTorch ADMM solver for the
projection-onto-dual-cone subproblem. Three techniques are combined:

- **Ruiz equilibration** (10 iterations): symmetrically scales G so that
  every row/column has infinity-norm ≈ 1, reducing the effective condition
  number before factorization.
- **ADMM**: splits the constrained QP into a cheap V-update (Cholesky solve
  of G_s + ρI) and a trivial Z-update (componentwise clamp onto the feasible
  set), following the OSQP formulation.
- **Adaptive ρ**: every √m iterations, scales ρ up or down by 10× when
  primal and dual residuals are severely imbalanced, triggering a cheap
  re-factorization to keep convergence well-behaved.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
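The three techniques described above can be sketched as a single routine. This is a minimal illustration, not the actual solver in this PR: the function and parameter names are hypothetical, it handles one (unbatched) nonnegativity-constrained QP of the form min ½xᵀGx + qᵀx s.t. x ≥ 0, and it omits convergence checks and batching.

```python
import torch


def ruiz_equilibrate(G, num_iters=10):
    # Symmetric Ruiz equilibration: find a diagonal scaling d such that
    # G_s = diag(d) @ G @ diag(d) has rows/columns with inf-norm ~ 1.
    d = torch.ones(G.shape[-1], dtype=G.dtype)
    G_s = G.clone()
    for _ in range(num_iters):
        norms = G_s.abs().amax(dim=-1).clamp_min(1e-12)
        delta = norms.rsqrt()
        G_s = delta.unsqueeze(-1) * G_s * delta.unsqueeze(-2)
        d = d * delta
    return G_s, d


def admm_nonneg_qp(G, q, rho=1.0, num_iters=200):
    # Solve min 0.5 x^T G x + q^T x  s.t. x >= 0  with OSQP-style ADMM.
    m = G.shape[-1]
    G_s, d = ruiz_equilibrate(G)
    q_s = d * q  # the substitution x = d * y rescales the linear term too
    I = torch.eye(m, dtype=G.dtype)
    L = torch.linalg.cholesky(G_s + rho * I)
    x = torch.zeros(m, dtype=G.dtype)
    z = torch.zeros(m, dtype=G.dtype)
    u = torch.zeros(m, dtype=G.dtype)  # scaled dual variable
    check = max(int(m ** 0.5), 1)  # adapt rho every ~sqrt(m) iterations
    for k in range(num_iters):
        # V-update: Cholesky solve of (G_s + rho I) x = rho (z - u) - q_s.
        rhs = (rho * (z - u) - q_s).unsqueeze(-1)
        x = torch.cholesky_solve(rhs, L).squeeze(-1)
        # Z-update: componentwise clamp onto the feasible set {z >= 0}.
        z_prev = z
        z = (x + u).clamp_min(0.0)
        u = u + x - z
        if (k + 1) % check == 0:
            r_prim = (x - z).norm()
            r_dual = (rho * (z - z_prev)).norm()
            if max(r_prim, r_dual) > 1e-10:
                # Scale rho by 10x when residuals are severely imbalanced,
                # rescale the scaled dual, and re-factorize cheaply.
                if r_prim > 10 * r_dual:
                    rho, u = rho * 10.0, u / 10.0
                    L = torch.linalg.cholesky(G_s + rho * I)
                elif r_dual > 10 * r_prim:
                    rho, u = rho / 10.0, u * 10.0
                    L = torch.linalg.cholesky(G_s + rho * I)
    return d * z  # undo the equilibration scaling
```

Since the constraint x ≥ 0 is invariant under the positive diagonal scaling, solving the equilibrated problem in y and returning d·y recovers the original solution.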
The ADMM solver in dual_cone.py always factorizes G_s + rho*I where
rho > 0, so it handles positive semi-definite Gramians without needing an
external regularization term. The 1e-4*I padding previously added in the
forward methods is now redundant.

Side-effect: the 1e-4*I pad was also acting as a strong preconditioner
that tightened ADMM convergence across row permutations. Without it,
permutation-invariance errors reflect the solver's actual accuracy on
ill-conditioned inputs (~2e-5 for DualProj, ~1e-5 for UPGrad). Tolerances
in the corresponding tests are updated accordingly.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
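A small illustration of why the explicit padding becomes redundant, assuming PyTorch: a merely positive semi-definite Gramian has no Cholesky factorization, but shifting by any ρ > 0 makes it positive definite, which is exactly the matrix the ADMM V-update factorizes.

```python
import torch

# G is PSD but singular (rank 1): plain Cholesky fails on it, but
# G + rho*I is positive definite for any rho > 0, so the factorization
# needed by the ADMM V-update always exists without an extra 1e-4*I pad.
G = torch.tensor([[1.0, 1.0], [1.0, 1.0]])
rho = 0.1

try:
    torch.linalg.cholesky(G)  # raises: G is not positive definite
    factorizable = True
except Exception:
    factorizable = False

L = torch.linalg.cholesky(G + rho * torch.eye(2))  # succeeds
```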
@PierreQuinton PierreQuinton added the cc: feat (Conventional commit type for new features) and package: aggregation labels Feb 24, 2026
@PierreQuinton PierreQuinton changed the title from "Add custom batched QP solver compatible with GPU" to "Add custom batched QP solver" Feb 24, 2026
@github-actions github-actions bot changed the title from "Add custom batched QP solver" to "feat(aggregation): Add custom batched QP solver" Feb 24, 2026
@ValerianRey
Contributor

Could we support both solvers so that we have one for small m and one for large m?

@PierreQuinton
Contributor Author

Of course we can. Ideally we would explore the optimal choices and provide an automatic selection, while also keeping it customizable if needed.
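Such an automatic selection could look like the following sketch. All names and the threshold value are hypothetical, not the torchjd API, and the two placeholder functions merely stand in for the real solvers to keep the example runnable.

```python
import torch


def _sequential_solve(G, q):
    # Placeholder for the high-precision qpsolvers-based path.
    return torch.linalg.solve(G, -q)


def _batched_admm_solve(G, q):
    # Placeholder for the batched, GPU-friendly ADMM path.
    return torch.linalg.solve(G, -q)


def solve_dual_qp(G, q, m_threshold=64):
    # Hypothetical dispatcher: small CPU problems go to the precise
    # sequential solver; large or GPU-resident problems go to the
    # batched ADMM solver. The threshold would be tuned empirically.
    m = G.shape[-1]
    if G.is_cuda or m > m_threshold:
        return _batched_admm_solve(G, q)
    return _sequential_solve(G, q)
```

A user-facing option overriding the automatic choice would cover the customizable case.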
