Amazon Nova AI Challenge teams

Learn more about the competing university teams.
  • Virginia Tech
    We are committed to advancing AI by developing code-generating LLMs that balance innovation, security, and reliability. Our goal is to create a system that adapts to emerging challenges while maintaining strong functionality, demonstrating that safety and utility can work together to drive responsible technology development for the benefit of society.
  • Purdue University
    We are committed to enhancing the safety and security of AI models in modern software development by systematically uncovering vulnerabilities through progressive scanning. This challenge provides a competitive platform for developing state-of-the-art techniques that improve model alignment in coding tasks and beyond, driving advancements in responsible AI.
  • Czech Technical University in Prague
    We aim to develop an agentic LLM-based system that generates clean, reliable Python code while prioritizing safety and user understanding. Our focus is on preventing harmful code generation and enhancing user engagement through clear, explanatory interactions. By leveraging high-quality datasets, we ensure our model is both effective and responsible.
  • University of California, Davis
    We envision a future where AI-powered tools are both highly capable and resilient against emerging threats. By advancing techniques like automated prompt optimization, in-context adversarial attacks, and multi-agent frameworks, we aim to strengthen AI defenses and enhance security. Our mission is to drive the development of trustworthy AI by proactively identifying vulnerabilities and fostering a culture of responsible innovation.
  • Carnegie Mellon University
    Our team integrates expertise across software engineering, human-computer interaction, and machine learning, with deep specialization in ML. This unique combination enables us to advance post-training and inference-time control, driving innovation in secure, adaptive AI systems that enhance code generation while ensuring safety and reliability.
  • NOVA School of Science and Technology
    We envision advancing conversational red teaming by developing generative AI models that adaptively plan and execute sophisticated multi-turn attacks. By leveraging the dynamics of natural language communication, our approach enables interactive and iterative testing against LLM defenses, driving stronger and more resilient AI safety measures.
  • Columbia University
    We strive to build trustworthy AI systems by aligning LLMs with secure coding practices, ensuring they generate safe code while actively rejecting harmful or malicious requests. Our goal is to enhance security and resilience against evolving cyber threats, fostering responsible AI development.
  • University of Illinois at Urbana-Champaign
    We are committed to developing advanced code models that prioritize safety, security, and trustworthiness in modern software development. By integrating interdisciplinary expertise, we strive to create AI systems that generate reliable code while proactively identifying and mitigating potential risks.
  • University of Wisconsin-Madison
    We strive to develop a unified framework that integrates established red-teaming strategies while continuously evolving to generate innovative and diverse safety approaches. By advancing the reliability and security of coding LLMs, we aim to set new industry benchmarks and drive the next generation of AI safety standards.
  • University of Texas at Dallas
    We are committed to advancing research in AI security by systematically exploring adversarial capabilities in code generation models. Our goal is to develop novel techniques and establish new benchmarks to uncover vulnerabilities, ensuring the safety and reliability of LLM-assisted software development systems.