Paper Title

Codex Hacks HackerRank: Memorization Issues and a Framework for Code Synthesis Evaluation

Paper Authors

Anjan Karmakar, Julian Aron Prenner, Marco D'Ambros, Romain Robbes

Abstract

The Codex model has demonstrated extraordinary competence in synthesizing code from natural language problem descriptions. However, in order to reveal unknown failure modes and hidden biases, such large-scale models must be systematically subjected to multiple and diverse evaluation studies. In this work, we evaluate the code synthesis capabilities of the Codex model based on a set of 115 Python problem statements from a popular competitive programming portal: HackerRank. Our evaluation shows that Codex is indeed proficient in Python, solving 96% of the problems in a zero-shot setting, and 100% of the problems in a few-shot setting. However, Codex exhibits clear signs of generating memorized code based on our evaluation. This is alarming, especially since the adoption and use of such models could directly impact how code is written and produced in the foreseeable future. With this in mind, we further discuss and highlight some of the prominent risks associated with large-scale models of source code. Finally, we propose a framework for code-synthesis evaluation using variations of problem statements based on mutations.
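The proposed evaluation framework relies on applying mutations to problem statements: if a model has genuinely learned to solve a task rather than memorized a solution seen during training, its solve rate should remain stable when the statement's surface form changes but its semantics do not. The sketch below illustrates this idea with two hypothetical mutation operators (identifier renaming and term paraphrasing); the operator names and word lists are illustrative assumptions, not the paper's actual implementation.

```python
import re

def rename_identifiers(statement: str, mapping: dict) -> str:
    """Rename identifiers mentioned in the statement (e.g. 'n' -> 'm').

    Uses word boundaries so that 'n' inside 'integer' is untouched.
    """
    for old, new in mapping.items():
        statement = re.sub(rf"\b{re.escape(old)}\b", new, statement)
    return statement

def paraphrase_terms(statement: str, synonyms: dict) -> str:
    """Swap surface wording while preserving the task semantics."""
    for term, synonym in synonyms.items():
        statement = statement.replace(term, synonym)
    return statement

def generate_variants(statement: str) -> list:
    """Produce semantics-preserving variants of one problem statement.

    A model that truly understands the task should solve the variants
    at roughly the same rate as the original; a sharp drop suggests
    memorization of the original wording.
    """
    variants = [
        rename_identifiers(statement, {"n": "m", "arr": "values"}),
        paraphrase_terms(statement, {"Print": "Output"}),
    ]
    # Keep only variants that actually differ from the original.
    return [v for v in variants if v != statement]

if __name__ == "__main__":
    original = "Given an integer n and a list arr, Print the sum of arr."
    for variant in generate_variants(original):
        print(variant)
```

Each variant would then be submitted to the model under evaluation, and the per-variant pass rates compared against the pass rate on the unmutated statement.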
