Paper Title

Online Stochastic Optimization with Wasserstein Based Non-stationarity

Authors

Jiashuo Jiang, Xiaocheng Li, Jiawei Zhang

Abstract

We consider a general online stochastic optimization problem with multiple budget constraints over a horizon of finite time periods. In each time period, a reward function and multiple cost functions are revealed, and the decision maker needs to specify an action from a convex and compact action set to collect the reward and consume the budget. Each cost function corresponds to the consumption of one budget. In each period, the reward and cost functions are drawn from an unknown distribution, which is non-stationary across time. The objective of the decision maker is to maximize the cumulative reward subject to the budget constraints. This formulation captures a wide range of applications including online linear programming and network revenue management, among others. In this paper, we consider two settings: (i) a data-driven setting where the true distribution is unknown but a prior estimate (possibly inaccurate) is available; (ii) an uninformative setting where the true distribution is completely unknown. We propose a unified Wasserstein-distance based measure to quantify the inaccuracy of the prior estimate in setting (i) and the non-stationarity of the system in setting (ii). We show that the proposed measure leads to a necessary and sufficient condition for the attainability of a sublinear regret in both settings. For setting (i), we propose a new algorithm, which takes a primal-dual perspective and integrates the prior information of the underlying distributions into an online gradient descent procedure in the dual space. The algorithm also naturally extends to the uninformative setting (ii). Under both settings, we show the corresponding algorithm achieves a regret of optimal order. In numerical experiments, we demonstrate how the proposed algorithms can be naturally integrated with the re-solving technique to further boost the empirical performance.
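The abstract's primal-dual approach can be illustrated on its online linear programming special case: at each period the decision maker accepts or rejects a request based on the dual-adjusted reward, then updates the dual prices by projected online gradient descent toward the per-period budget rate. The sketch below is a minimal illustration of this generic primal-dual scheme, not the paper's exact algorithm; the function name, step size choice, and synthetic data are all assumptions for demonstration.

```python
import numpy as np

def dual_gradient_online_lp(rewards, cost_vectors, budget, eta=None):
    """Minimal primal-dual sketch for the online LP special case.

    At each period t we observe a reward r_t and a cost vector a_t,
    accept (x_t = 1) iff the dual-adjusted reward r_t - mu^T a_t is
    positive and enough budget remains, then update the dual prices mu
    by a projected gradient step toward the average budget rate B / T.
    (Illustrative only; the paper's algorithm additionally integrates
    prior distributional information into the dual update.)
    """
    T = len(rewards)
    m = cost_vectors.shape[1]
    rho = budget / T                       # per-period budget rate
    mu = np.zeros(m)                       # dual prices
    if eta is None:
        eta = 1.0 / np.sqrt(T)             # standard OGD step size
    remaining = budget.astype(float).copy()
    total_reward = 0.0
    for t in range(T):
        r_t, a_t = rewards[t], cost_vectors[t]
        # primal step: accept iff the dual-adjusted reward is positive
        # and the action is still budget-feasible
        x_t = 1.0 if (r_t - mu @ a_t > 0) and np.all(a_t <= remaining) else 0.0
        total_reward += r_t * x_t
        remaining -= a_t * x_t
        # dual step: projected gradient update on consumption minus rho
        mu = np.maximum(0.0, mu + eta * (a_t * x_t - rho))
    return total_reward, remaining

# synthetic instance: 1000 periods, 2 budget constraints
rng = np.random.default_rng(0)
T, m = 1000, 2
rewards = rng.uniform(0, 1, T)
cost_vectors = rng.uniform(0, 1, (T, m))
budget = np.array([0.25 * T, 0.25 * T])
reward, remaining = dual_gradient_online_lp(rewards, cost_vectors, budget)
```

Because feasibility is checked before each acceptance, the budget constraints hold pathwise; the dual update only steers the long-run consumption rate toward the target, which is what drives the regret analysis in this line of work.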
