- To view the current standings of LLM-based automated buggy and patched code generation and reviewing, visit the StackCodeGen Leaderboard.
- The benchmark dataset used for evaluation can be found at StackCodeGenBench.
- To replicate the results and use StackCodeGen, follow this link or go to the reproducibility-package directory.
- The prompts used to evaluate the LLMs can be found at this link or in the prompts directory.
- To view the results of StackCodeGen, follow this link or go to the Results directory.