Search results

16,580 results found
  • Data Scientist
  • Applied Scientist
  • Research Scientist
  • Virginia Tech
    We are committed to advancing AI by developing code-generating LLMs that balance innovation, security, and reliability. Our goal is to create a system that adapts to emerging challenges while maintaining strong functionality, demonstrating that safety and utility can work together to drive responsible technology development for the benefit of society.
  • Purdue University
    We are committed to enhancing the safety and security of AI models in modern software development by systematically uncovering vulnerabilities through progressive scanning. This challenge provides a competitive platform for developing state-of-the-art techniques that improve model alignment in coding tasks and beyond, driving advancements in responsible AI.
  • Czech Technical University in Prague
    We aim to develop an agentic LLM-based system that generates clean, reliable Python code while prioritizing safety and user understanding. Our focus is on preventing harmful code generation and enhancing user engagement through clear, explanatory interactions. By leveraging high-quality datasets, we ensure our model is both effective and responsible.
  • University of California, Davis
    We envision a future where AI-powered tools are both highly capable and resilient against emerging threats. By advancing techniques like automated prompt optimization, in-context adversarial attacks, and multi-agent frameworks, we aim to strengthen AI defenses and enhance security. Our mission is to drive the development of trustworthy AI by proactively identifying vulnerabilities and fostering a culture of responsible innovation.
  • Carnegie Mellon University
    Our team integrates expertise across software engineering, human-computer interaction, and machine learning, with deep specialization in ML. This unique combination enables us to advance post-training and inference-time control, driving innovation in secure, adaptive AI systems that enhance code generation while ensuring safety and reliability.
  • University of Texas at Dallas
    We are committed to advancing research in AI security by systematically exploring adversarial capabilities in code generation models. Our goal is to develop novel techniques and establish new benchmarks to uncover vulnerabilities, ensuring the safety and reliability of LLM-assisted software development systems.
  • NOVA School of Science and Technology
    We envision advancing conversational red teaming by developing generative AI models that adaptively plan and execute sophisticated multi-turn attacks. By leveraging the dynamics of natural language communication, our approach enables interactive and iterative testing against LLM defenses, driving stronger and more resilient AI safety measures.
  • University of Wisconsin-Madison
    We strive to develop a unified framework that integrates established red-teaming strategies while continuously evolving to generate innovative and diverse safety approaches. By advancing the reliability and security of coding LLMs, we aim to set new industry benchmarks and drive the next generation of AI safety standards.
  • Columbia University
    We strive to build trustworthy AI systems by aligning LLMs with secure coding practices, ensuring they generate safe code while actively rejecting harmful or malicious requests. Our goal is to enhance security and resilience against evolving cyber threats, fostering responsible AI development.
  • The Amazon Nova AI Challenge is a global university competition to drive secure innovation in generative AI (GenAI) technology, focusing on responsible AI and large language model (LLM) coding security.
  • This year’s Amazon Nova AI Challenge pits student teams against each other with the goal of making AI safer for all, focusing on preventing AI from assisting with writing malicious code or producing code with security vulnerabilities.
  • Find answers to frequently asked questions (FAQs) about the Amazon Nova AI Challenge.
  • Staff writer
    March 10, 2025
    Inaugural global university competition focused on advancing secure, trusted AI-assisted software development.
GB, MLN, Edinburgh
We’re looking for a Machine Learning Scientist in the Personalization team for our Edinburgh office, experienced in generative AI and large models. You will be responsible for developing and disseminating customer-facing personalized recommendation models. This is a hands-on role with global impact, working with a team of world-class engineers and scientists across the Edinburgh offices and the wider organization. You will lead the design of machine learning models that scale to very large quantities of data and serve high-scale, low-latency recommendations to customers worldwide. You will embody scientific rigor, designing and executing experiments to demonstrate the technical efficacy and business value of your methods. You will work alongside a…