A Proximal Dual Consensus Method for Linearly Coupled Multi-Agent Non-Convex Optimization

Abstract

Motivated by large-scale signal processing and machine learning applications, this paper considers distributed multi-agent optimization for a linearly constrained non-convex problem. Each agent owns a local cost function and a local variable, and the agents are coupled with one another through the linear constraint. Most existing methods are either applicable to convex problems only or are developed for the non-convex setting under a specific type of linear constraint; a distributed method for the general linearly constrained non-convex setting is still lacking. In this paper, we propose such a method, called the proximal dual consensus (PDC) method, which combines a proximal technique with the dual consensus method. Theoretical analysis shows that the proposed PDC method yields a Karush-Kuhn-Tucker solution of the linearly constrained non-convex problem and has an $O(1/\epsilon)$ iteration complexity, where $\epsilon$ is the solution accuracy. The practical behavior of the proposed method is examined through numerical results.
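
A plausible formal statement of the linearly coupled problem described above, where $f_i$ denotes agent $i$'s (possibly non-convex) local cost, $x_i$ its local variable, and $A_i$, $b$ the coupling data (these symbols are illustrative and may differ from the paper's notation), is

$$\min_{x_1,\dots,x_N} \; \sum_{i=1}^{N} f_i(x_i) \quad \text{subject to} \quad \sum_{i=1}^{N} A_i x_i = b,$$

where the linear constraint is what couples the otherwise separable local problems of the $N$ agents.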

Publication
IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2020)