Training code generation models to debug their own outputs

Using large language models to generate training data and updating models through both fine tuning and reinforcement learning improves the success rate of code generation by up to 39%.

Code generation — automatically translating natural-language specifications into computer code — is one of the most promising applications of large language models (LLMs). But the more complex the programming task, the more likely the LLM is to make errors.

Of course, the more complex the task, the more likely human coders are to make errors, too. That’s why debugging is an essential component of the software development pipeline. In a paper we presented at the 2024 Conference on Neural Information Processing Systems (NeurIPS), we describe a new way to train LLMs to be better debuggers while simultaneously improving code generation ability.

Previous attempts to debug code with LLMs have primarily used few-shot learning, where a few examples of successful debugs are provided, and the LLM infers the rest. In our work, by contrast, we use both supervised fine tuning (SFT) and reinforcement learning (RL) to specialize an LLM for debugging. Since debugging training data is scarce, we leveraged LLMs to create high-quality synthetic training data.

[Figure: LeDex overview. In our work, we use both supervised fine tuning (SFT) and reinforcement learning (RL) to specialize an LLM for debugging.]

We conducted a series of experiments in which LLMs were given one attempt to generate code in response to a natural-language prompt and one further attempt to debug that code. Because our models had been fine-tuned on debugging data, their initial generations were more successful than those of an LLM relying solely on prompt engineering. But with both our models and the prompt-engineering baseline, debugging always resulted in better code performance.

[Figure: The self-debugging pipeline. In our experiments, we gave an LLM one attempt to generate code in response to a natural-language prompt and one further attempt to debug that code.]
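Schematically, the evaluation loop is simple: sample code once, run the unit tests, and, if any test fails, give the model a single debugging turn. The sketch below assumes hypothetical generate, debug, and run_unit_tests wrappers around the fine-tuned model and a sandboxed test runner; it illustrates the setup described above rather than our exact harness.

```python
# Illustrative one-shot generation + one-shot self-debugging loop.
# `generate`, `debug`, and `run_unit_tests` are hypothetical wrappers
# around the code LLM and a sandboxed test executor.

def generate_then_debug(prompt: str, unit_tests: list[str]) -> str:
    code = generate(prompt)                      # first attempt
    result = run_unit_tests(code, unit_tests)
    if result.passed:
        return code
    # Single debugging attempt, conditioned on the failing tests' feedback
    return debug(prompt, code, result.errors)
```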

To evaluate model performance, we used the pass@k metric, in which a model generates k implementations of a natural-language specification and is counted as successful if at least one of those implementations passes a set of prespecified tests. In experiments with different code LLMs — including StarCoder-15B, CodeLlama-7B, and CodeLlama-13B — our approach improved pass@k scores by up to 39% on standard benchmark datasets such as MBPP.
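A common way to compute pass@k is the unbiased estimator popularized alongside the HumanEval benchmark: given n samples per problem, of which c pass the unit tests, it estimates the probability that at least one of k drawn samples is correct. The minimal sketch below assumes that estimator; our exact evaluation scripts may differ in detail.

```python
import math

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimate: 1 - C(n - c, k) / C(n, k), i.e. the
    probability that at least one of k samples (out of n, with c correct)
    passes all unit tests."""
    if n - c < k:
        return 1.0
    return 1.0 - math.prod((n - c - i) / (n - i) for i in range(k))

# Example: 20 samples per problem, 7 of which pass the tests
print(pass_at_k(n=20, c=7, k=1))   # 0.35
print(pass_at_k(n=20, c=7, k=10))  # ~0.998
```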

Data synthesis

There are several widely used public datasets for training code generation models, which include natural-language prompts; canonical implementations of the prompts in code; and unit tests, specific sequences of inputs that can be used to test the full functional range of the generated code. But training data for debugging models is comparatively sparse.

To create our debugging dataset, we begin with several of the existing code generation datasets. We repeatedly feed each natural-language prompt in those datasets to a code generation model, resulting in a number of different generations — say, 20 — for the same prompt. Then we run the relevant unit tests on those generations, keeping only the ones that fail the tests — that is, the buggy code.
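A minimal sketch of this collection step is below; sample_completions and run_unit_tests are hypothetical helpers standing in for the code LLM and a sandboxed test executor.

```python
# Collect failing generations (buggy code) for one prompt.
# `sample_completions` and `run_unit_tests` are hypothetical helpers.

def collect_buggy_code(prompt: str, unit_tests: list[str], n_samples: int = 20):
    buggy = []
    for code in sample_completions(prompt, n=n_samples, temperature=0.8):
        result = run_unit_tests(code, unit_tests)
        if not result.passed:               # keep only the failing generations
            buggy.append({
                "prompt": prompt,
                "buggy_code": code,
                "error_messages": result.errors,
            })
    return buggy
```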

Next, we feed the buggy code to an LLM, together with the error messages it generated on the unit tests, and we prompt the LLM to explain where and why errors occurred. Finally, we feed the LLM’s diagnosis, the buggy code, and the error messages back to the LLM, together with instructions to repair the bug. This is a version of chain-of-thought reasoning: prior work has shown that asking an LLM to explain the action it intends to take before it takes that action often improves performance.

[Figure: Prompt used to generate training data for code explanation and refinement.]
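Schematically, the two LLM calls chain together as in the sketch below. The prompt wording is illustrative rather than the exact template shown in the figure, and query_llm is a hypothetical wrapper around the LLM.

```python
# Two-stage synthesis: first ask for an explanation of the bug, then ask
# for a repair conditioned on that explanation (chain-of-thought style).
# Prompt wording is illustrative; `query_llm` is a hypothetical helper.

EXPLAIN_TEMPLATE = (
    "Task:\n{prompt}\n\nCode:\n{code}\n\nUnit test errors:\n{errors}\n\n"
    "Explain where the code is wrong and why."
)

REFINE_TEMPLATE = (
    "Task:\n{prompt}\n\nBuggy code:\n{code}\n\nUnit test errors:\n{errors}\n\n"
    "Explanation of the bug:\n{explanation}\n\n"
    "Rewrite the code so that it fixes the bug and passes the tests."
)

def synthesize_repair(example: dict) -> tuple[str, str]:
    explanation = query_llm(EXPLAIN_TEMPLATE.format(
        prompt=example["prompt"],
        code=example["buggy_code"],
        errors=example["error_messages"],
    ))
    revised_code = query_llm(REFINE_TEMPLATE.format(
        prompt=example["prompt"],
        code=example["buggy_code"],
        errors=example["error_messages"],
        explanation=explanation,
    ))
    return explanation, revised_code
```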

We next execute the unit tests on the revised code, this time keeping only those revisions that pass all the tests. We now have a new dataset consisting of natural-language prompts; buggy implementations of those prompts; diagnoses of the bugs; debugged code; and unit tests.
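The final filtering and record-assembly step, again using the hypothetical run_unit_tests helper from the earlier sketch, might look like this:

```python
# Keep a training record only if the revised code now passes every test.

def build_debugging_record(example: dict, explanation: str,
                           revised_code: str, unit_tests: list[str]):
    result = run_unit_tests(revised_code, unit_tests)
    if not result.passed:
        return None                          # discard revisions that still fail
    return {
        "prompt": example["prompt"],
        "buggy_code": example["buggy_code"],
        "error_messages": example["error_messages"],
        "explanation": explanation,          # chain-of-thought diagnosis
        "revised_code": revised_code,        # verified repair
        "unit_tests": unit_tests,
    }
```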

Model updates

Armed with this dataset, we’re ready to update our debugging model, using both SFT and RL. With both update methods, we experimented with training regimens in which we asked for chain-of-thought explanations before asking for code revisions and those in which we simply asked for revisions.

With SFT, we prompted the model with the natural-language instructions, the buggy code, and the error messages from the unit tests. Model outputs were evaluated according to their performance on the unit tests.
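One plausible way to serialize each record into an (input, target) pair for SFT is sketched below; the exact templates may differ, and the chain_of_thought flag switches between the two training regimens described above.

```python
# Turn a debugging record into an SFT training pair. With chain-of-thought,
# the target includes the bug explanation before the corrected code.

def to_sft_pair(record: dict, chain_of_thought: bool = True) -> tuple[str, str]:
    source = (
        f"Instruction:\n{record['prompt']}\n\n"
        f"Buggy code:\n{record['buggy_code']}\n\n"
        f"Unit test errors:\n{record['error_messages']}\n\n"
        "Explain the bug, then provide the corrected code."
    )
    if chain_of_thought:
        target = f"{record['explanation']}\n\n{record['revised_code']}"
    else:
        target = record["revised_code"]
    return source, target
```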

With RL, the model interacts iteratively with the training data, attempting to learn a policy that maximizes a reward function. Classic RL algorithms require a continuous reward function to enable exploration of the optimization landscape.

Unit test feedback, however, is binary and hence discrete. To overcome this limitation, our RL reward function combines the success rate on the unit tests with the revised code’s CodeBLEU score, which measures its similarity to the code of the canonical examples and thus provides a continuous reward signal.
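A minimal sketch of such a combined reward is below. The weighting alpha and the codebleu helper are illustrative assumptions (an open-source CodeBLEU implementation could fill that role), not our exact reward shaping.

```python
# Combined reward: discrete unit-test signal + continuous CodeBLEU similarity.
# `codebleu` is a hypothetical helper returning a score in [0, 1];
# `alpha` is an illustrative weighting, not the paper's exact value.

def reward(revised_code: str, canonical_code: str,
           tests_passed_fraction: float, alpha: float = 0.5) -> float:
    similarity = codebleu(revised_code, canonical_code, lang="python")
    return alpha * tests_passed_fraction + (1.0 - alpha) * similarity
```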

Unit tests are time and resource intensive to apply, so training on CodeBLEU scores also opens the possibility of training directly on the canonical examples, a much more computationally efficient process. Our experiments indicate that this approach does improve debugging performance — though not as much as training on unit test results as well.

Evaluation

In our experiments, we used three types of models: one was a vanilla LLM that relied entirely on prompt engineering; one was an LLM updated on our dataset using SFT only; and one was an LLM updated on our dataset using both SFT and RL.

We implemented each type of model using three different LLM architectures, and for each class of model, we measured three sets of outputs: an initial generation; a direct revision of the initial generation; and a revision involving chain-of-thought reasoning. Finally, we also investigated two different generation paradigms: in one, a model was given one chance to generate correct code; in the other, it was given 10 chances. This gave us a total of 24 different comparisons.

Across the board, our updated models outperformed the prompt-engineering baselines. In all but one case, the version of our model updated through both SFT and RL outperformed the version updated through SFT only. Overall, we demonstrate a scalable way to use execution feedback and canonical examples to better debug code models and improve their generation performance.
