Paper Title
Adversarial Planning
Paper Authors
Paper Abstract
Planning algorithms are used in computational systems to direct autonomous behavior. In a canonical application, for example, planning for autonomous vehicles is used to automate static or continuous planning toward performance, resource management, or functional goals (e.g., arriving at the destination, managing fuel consumption). Existing planning algorithms assume non-adversarial settings; a least-cost plan is developed based on available environmental information (i.e., the input instance). Yet, it is unclear how such algorithms will perform in the face of adversaries attempting to thwart the planner. In this paper, we explore the security of planning algorithms used in cyber- and cyber-physical systems. We present two $\textit{adversarial planning}$ algorithms, one static and one adaptive, that perturb input planning instances to maximize cost (often substantially so). We evaluate the performance of the algorithms against two dominant planning algorithms used in commercial applications (D* Lite and Fast Downward) and show that both are vulnerable to extremely limited adversarial action. Here, experiments show that an adversary is able to increase plan costs in 66.9% of instances by removing only a single action from the action space (D* Lite) and render 70% of instances from an international planning competition unsolvable by removing only three actions (Fast Forward). Finally, we show that finding an optimal perturbation in any search-based planning system is NP-hard.
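To make the threat model concrete, below is a minimal, hypothetical sketch that is not taken from the paper: it assumes a planning instance modeled as a weighted graph whose edges are the available actions, uses a plain Dijkstra search as a stand-in for an optimal planner such as D* Lite, and implements a brute-force static adversary that deletes the single action whose removal most increases the least-cost plan. All function names and the toy instance are illustrative assumptions.

# Illustrative sketch only: actions are (from_state, to_state, cost) triples;
# the planner computes a least-cost plan, and the static adversary searches
# over single-action removals for the one that maximizes the resulting cost.
import heapq
from math import inf

def plan_cost(actions, start, goal):
    """Dijkstra over the action set; returns the least plan cost (inf if unsolvable)."""
    graph = {}
    for u, v, w in actions:
        graph.setdefault(u, []).append((v, w))
    dist = {start: 0}
    pq = [(0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            return d
        if d > dist.get(u, inf):
            continue
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, inf):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return inf

def best_single_removal(actions, start, goal):
    """Static adversary: remove the one action that maximizes the planner's cost."""
    baseline = plan_cost(actions, start, goal)
    best_action, best_cost = None, baseline
    for i in range(len(actions)):
        perturbed = actions[:i] + actions[i + 1:]
        cost = plan_cost(perturbed, start, goal)
        if cost > best_cost:
            best_action, best_cost = actions[i], cost
    return best_action, baseline, best_cost

if __name__ == "__main__":
    # Toy instance: two routes from s to g; removing one cheap action forces the costly route.
    actions = [("s", "a", 1), ("a", "g", 1), ("s", "b", 2), ("b", "g", 3)]
    removed, before, after = best_single_removal(actions, "s", "g")
    print(f"remove {removed}: plan cost {before} -> {after}")

On this toy instance the adversary removes one of the two cheap actions and raises the plan cost from 2 to 5; the paper's adaptive algorithm and the NP-hardness result concern finding such perturbations efficiently at scale, which this exhaustive sketch does not attempt.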