WANCO: Weak Adversarial Networks for Constrained Optimization Problems

30 Pages
Posted: 10 May 2025

Gang Bao

Zhejiang University

Dong Wang

affiliation not provided to SSRN

Boyi Zou

affiliation not provided to SSRN

Abstract

This paper integrates neural networks and adversarial training into an algorithmic framework for constrained optimization problems. Such problems are first transformed into minimax problems using the augmented Lagrangian method; two (or several) deep neural networks then represent the primal and dual variables, respectively, and their parameters are trained by an adversarial process. Compared to penalty-based deep learning methods, the proposed architecture is less sensitive to the scale of the constraint values and enforces constraints more effectively through the Lagrange multipliers. Extensive examples with scalar constraints, nonlinear vector constraints, partial differential equation constraints, and inequality constraints demonstrate the capability and robustness of the proposed method, with applications ranging from Ginzburg–Landau energy minimization and partition problems to fluid-solid topology optimization and obstacle problems.
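
As a rough illustration of the minimax formulation described above (a minimal sketch only, not the authors' implementation: the toy problem, network size, penalty weight, and learning rates are all assumptions), the following PyTorch snippet minimizes a simple quadratic energy subject to a scalar integral constraint, representing the primal variable by a small network and the Lagrange multiplier by a trainable scalar, with gradient descent on the primal parameters and ascent on the multiplier.

    # Minimal sketch (assumed toy problem and hyperparameters, not the paper's code):
    # minimize E(u) = \int_0^1 u(x)^2 dx  subject to  \int_0^1 u(x) dx = 1,
    # via the augmented Lagrangian L = E(u) + lam * g(u) + (beta/2) * g(u)^2.
    import torch

    torch.manual_seed(0)

    # Primal variable u(x) represented by a small fully connected network.
    primal = torch.nn.Sequential(
        torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))
    # Scalar Lagrange multiplier; for functional constraints a second network would play this role.
    lam = torch.nn.Parameter(torch.tensor(0.0))
    beta = 10.0  # augmented-Lagrangian penalty weight (assumed value)

    opt_primal = torch.optim.Adam(primal.parameters(), lr=1e-3)
    opt_dual = torch.optim.Adam([lam], lr=1e-2)

    for step in range(2000):
        x = torch.rand(256, 1)              # Monte Carlo collocation points in [0, 1]
        u = primal(x)
        energy = (u ** 2).mean()            # E(u) ~ \int u^2 dx
        g = u.mean() - 1.0                  # constraint residual g(u) ~ \int u dx - 1
        L = energy + lam * g + 0.5 * beta * g ** 2

        opt_primal.zero_grad()
        opt_dual.zero_grad()
        L.backward()
        opt_primal.step()                   # descent step on the primal network
        lam.grad.neg_()                     # flip sign so the multiplier ascends the Lagrangian
        opt_dual.step()

In the setting described in the abstract, the multiplier for PDE or other function-valued constraints would itself be a network evaluated at the collocation points, so the inner maximization runs over network parameters rather than a single scalar.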

Keywords: Deep neural networks, Constrained optimization, Augmented Lagrangian method, Adversarial neural networks

Suggested Citation

Bao, Gang and Wang, Dong and Zou, Boyi, WANCO: Weak Adversarial Networks for Constrained Optimization Problems. Available at SSRN: https://ssrn.com/abstract=5249964 or http://dx.doi.org/10.2139/ssrn.5249964

Gang Bao

Zhejiang University ( email )

38 Zheda Road
Hangzhou, 310058
China

Dong Wang (Contact Author)

affiliation not provided to SSRN ( email )

Boyi Zou

affiliation not provided to SSRN ( email )
