Manual code reviews and static code analyzers are the traditional mechanisms for verifying whether source code complies with coding policies. However, they are hard to scale. We formulate code compliance assessment as a machine learning (ML) problem: given a natural-language policy and a code snippet, predict whether the code is compliant, non-compliant, or irrelevant to the policy. Our goal for ML-based automation is to scale the development of Amazon CodeGuru, a commercial code analyzer. We explore key research questions on model formulation, training data, and evaluation setup. We obtain a joint code-text representation space (embeddings) that preserves compliance relationships through the vector distance between code and policy embeddings. Since no task-specific data exist, we re-interpret and filter commonly available software datasets, adding pretraining and pre-finetuning tasks that reduce the semantic gap. We benchmark our approach on two listings of coding policies (CWE and CBP). This is a zero-shot evaluation, as none of the policies occur in the training set. On CWE and CBP respectively, our tool Policy2Code achieves classification accuracies of (59%, 71%) and search MRR of (0.05, 0.21), compared to CodeBERT with classification accuracies of (37%, 54%) and MRR of (0.02, 0.02). In a user study, 24% of Policy2Code detections were accepted, compared to 7% for CodeBERT. Policy2Code is considered a useful ML-based aid to supplement manual efforts.
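To illustrate the distance-based prediction idea, the sketch below classifies a code snippet against a natural-language policy by embedding similarity. It is a minimal, hypothetical example: the toy hashed-token encoder, the threshold values, and the single-score decision rule are illustrative assumptions, not Policy2Code's actual encoder or training setup, where a pretrained joint code-text model would take the encoder's place.

```python
import hashlib
import numpy as np

def toy_encode(text: str, dim: int = 256) -> np.ndarray:
    """Stand-in encoder: hashed bag of tokens, L2-normalized.
    In practice a pretrained joint code-text encoder would be used instead."""
    vec = np.zeros(dim)
    for token in text.lower().split():
        idx = int(hashlib.md5(token.encode()).hexdigest(), 16) % dim
        vec[idx] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec

def assess_compliance(policy: str, code: str,
                      encode=toy_encode,
                      t_relevant: float = 0.2,
                      t_compliant: float = 0.6) -> str:
    """Classify code against a policy by embedding similarity.

    Assumes the joint space is trained so that similarity to the policy grows
    with relevance and compliant code scores highest; both thresholds are
    illustrative placeholders, not values from the paper.
    """
    sim = float(np.dot(encode(policy), encode(code)))  # cosine (unit vectors)
    if sim < t_relevant:
        return "irrelevant"
    return "compliant" if sim >= t_compliant else "non-compliant"

# Usage: score candidate snippets against one policy (classification view);
# ranking many snippets by the same similarity gives the search/MRR view.
policy = "Do not log sensitive user credentials in plain text."
for code in ["logger.info('user password: %s', password)", "return a + b"]:
    print(code, "->", assess_compliance(policy, code))
```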