Customer-obsessed science


Research areas
-
February 20, 2025
Using large language models to generate training data, and updating models through both fine-tuning and reinforcement learning, improves the success rate of code generation by 39%.
Featured news
-
2025
Constrained decoding with lookahead heuristics (CDLH) is a highly effective method for aligning LLM generations to human preferences. However, the extensive lookahead rollout operations for each generated token make CDLH prohibitively expensive, resulting in low adoption in practice. In contrast, common decoding strategies such as greedy decoding are extremely efficient, but achieve very low constraint…
-
2025
Ensuring that large language models (LLMs) do not generate harmful text is critical for their safe deployment. A common failure mode involves producing toxic responses to otherwise innocuous prompts. While various detoxification methods have been proposed, the underlying mechanisms that drive toxic generation in LLMs are not yet fully understood. Our work aims to provide a mechanistic understanding of toxic…
-
ICSE 2025
Software developers increasingly rely on AI code generation utilities. To ensure that “good” code is accepted into the code base and “bad” code is rejected, developers must know when to trust an AI suggestion. Understanding how developers build this intuition is crucial to enhancing developer-AI collaborative programming. In this paper, we seek to understand how developers (1) define and (2) evaluate the…
-
AAAI 2025 Workshop on AI for Social Impact
To the best of our knowledge, this work introduces the first framework for clustering longitudinal data by leveraging time-dependent causal representation learning. Clustering longitudinal data has gained significant attention across various fields, yet traditional methods often overlook the causal structures underlying observed patterns. Understanding how covariates influence outcomes is critical for policymakers…
-
2025
Entity matching (EM), which identifies whether two data records refer to the same real-world entity, is crucial for knowledge base construction and enhancing data-driven AI systems. Recent advances in language models (LMs) have shown great potential in resolving entities with rich textual attributes. However, their performance heavily depends on how structured entities are "talked" through serialized text…
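The entity-matching entry above hinges on how a structured record is serialized into text before a language model sees it. As a purely illustrative sketch (the record fields, the serialize helper, and the yes/no prompt below are assumptions for exposition, not the serialization scheme studied in the paper), one common approach is to linearize each record's attribute/value pairs and ask an LM whether the two descriptions refer to the same entity:

```python
# Hypothetical illustration of LM-based entity matching via text serialization.
# The records, serialization format, and prompt are assumptions, not the paper's method.

def serialize(record: dict) -> str:
    """Linearize a structured record as 'attribute: value' pairs."""
    return "; ".join(f"{attr}: {val}" for attr, val in record.items() if val)

def build_prompt(left: dict, right: dict) -> str:
    """Frame entity matching as a yes/no question over the two serializations."""
    return (
        "Do the following two records refer to the same real-world entity?\n"
        f"Record A: {serialize(left)}\n"
        f"Record B: {serialize(right)}\n"
        "Answer yes or no."
    )

if __name__ == "__main__":
    a = {"title": "Canon EOS R6 Mark II body", "brand": "Canon", "price": "2499.00"}
    b = {"name": "EOS R6 II Mirrorless Camera (Body Only)", "manufacturer": "Canon"}
    print(build_prompt(a, b))  # This prompt would then be sent to an LM of choice.
```

Choices such as which attributes to include, how they are named, and in what order they appear are exactly the "how entities are talked about" question the abstract raises.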
Academia
Whether you're a faculty member or a student, there are a number of ways you can engage with Amazon.
View all