arXiv:2208.08681

Communication-Efficient Decentralized Online Continuous DR-Submodular Maximization

Published on Aug 18, 2022

AI-generated summary

Two communication-efficient decentralized online algorithms, Mono-DMFW and DOBGA, are introduced for monotone continuous DR-submodular maximization; Mono-DMFW achieves a (1-1/e)-regret of O(T^{4/5}) and DOBGA attains the optimal O(√T), while both reduce per-function gradient evaluations and per-round communication from T^{3/2} to 1.

Abstract

Maximizing a monotone submodular function is a fundamental task in machine learning, economics, and statistics. In this paper, we present two communication-efficient decentralized online algorithms for the monotone continuous DR-submodular maximization problem, both of which reduce the number of per-function gradient evaluations and per-round communication complexity from T^{3/2} to 1. The first one, One-shot Decentralized Meta-Frank-Wolfe (Mono-DMFW), achieves a (1-1/e)-regret bound of O(T^{4/5}). As far as we know, this is the first one-shot and projection-free decentralized online algorithm for monotone continuous DR-submodular maximization. Next, inspired by the non-oblivious boosting function of Zhang et al. (2022), we propose the Decentralized Online Boosting Gradient Ascent (DOBGA) algorithm, which attains a (1-1/e)-regret of O(√T). To the best of our knowledge, this is the first result to obtain the optimal O(√T) regret against a (1-1/e)-approximation with only one gradient query for each local objective function per step. Finally, various experimental results confirm the effectiveness of the proposed methods.
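
For context, the (1-1/e)-regret used above compares the learner's cumulative value to (1-1/e) times the best fixed decision in hindsight: (1-1/e) max_{x in K} sum_{t=1}^T f_t(x) - sum_{t=1}^T f_t(x_t). The sketch below is a minimal, hypothetical reconstruction of one DOBGA-style round built from the boosting estimator of Zhang et al. (2022), not the authors' pseudocode: each agent makes a single gradient query at a randomly rescaled point, averages its iterate with its neighbors through a gossip matrix, and projects back onto the constraint set. The box constraint [0,1]^d (so projection is a clip), the gossip matrix W, the step size eta, and all function names are illustrative assumptions.

# Hypothetical sketch (not the paper's pseudocode) of one DOBGA-style
# round, assuming a box constraint [0,1]^d so that projection is a clip.
import numpy as np

def sample_z(rng):
    # Sample z in (0,1] with density e^{z-1} / (1 - 1/e) by inverting
    # its CDF, F(z) = (e^{z-1} - e^{-1}) / (1 - e^{-1}).
    u = rng.random()
    return 1.0 + np.log(u * (1.0 - np.exp(-1.0)) + np.exp(-1.0))

def dobga_style_round(x, grad_f, W, eta, rng):
    # x      : (n, d) array, one iterate per agent
    # grad_f : grad_f(i, y) -> gradient of agent i's current local
    #          objective at y (the single gradient query per round)
    # W      : (n, n) doubly stochastic gossip matrix of the network
    # eta    : step size
    n = x.shape[0]
    g = np.empty_like(x)
    for i in range(n):
        z = sample_z(rng)
        # (1 - 1/e) * grad f(z x) is an unbiased estimate of the
        # gradient of the boosted objective of Zhang et al. (2022).
        g[i] = (1.0 - np.exp(-1.0)) * grad_f(i, z * x[i])
    # Consensus averaging with neighbors, ascent step, projection.
    return np.clip(W @ x + eta * g, 0.0, 1.0)

# Toy run: 4 agents on a ring, linear (hence monotone DR-submodular)
# local objectives f_i(x) = <a_i, x> with nonnegative a_i.
rng = np.random.default_rng(0)
n, d = 4, 3
W = 0.5 * np.eye(n) + 0.25 * (np.roll(np.eye(n), 1, 0) + np.roll(np.eye(n), -1, 0))
a = rng.random((n, d))
x = np.full((n, d), 0.5)
for t in range(200):
    x = dobga_style_round(x, lambda i, y: a[i], W, eta=0.05, rng=rng)

Sampling z with density e^{z-1}/(1 - 1/e) makes (1 - 1/e)·grad f(zx) an unbiased estimate of the gradient of the boosted surrogate F(x) = ∫_0^1 (e^{z-1}/z) f(zx) dz, whose stationary points are (1-1/e)-approximate maximizers of f; this is what lets a single-query gradient-ascent step target the (1-1/e) approximation rather than the 1/2 guarantee of plain gradient ascent for monotone DR-submodular objectives.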
