Customer-obsessed science
Research areas
-
April 7, 2026 | 13 min read
How automated reasoning reconciles the demands of security, performance, and maintainability.
Featured news
-
ISACE 2026
Agentic AI systems can access vast data but struggle to apply domain expertise, namely the contextual understanding of how to use specialized information. This paper presents a practical framework for encoding such expertise, demonstrated with the National Football League (NFL) through NFL Fantasy AI, a production system delivering analyst-grade fantasy football advice, as assessed by NFL Pro analysts.
-
CVPR 2026 EarthVision Workshop
Building outline extraction from remote sensing imagery traditionally relies on segmentation or detection followed by post-processing to derive polygonal geometries. Despite advances in sequential prediction methods [2, 20], end-to-end extraction remains challenging, often missing buildings or requiring additional refinement steps. In this work, we reformulate building outline extraction as next-coordinate …
(A hedged code sketch of next-coordinate polygon decoding appears after this list.)
-
ICLR 2026 Workshop on AI with Recursive Self-Improvement
Foundation-model upgrades frequently break deployed prompt-based systems: target models differ in chat-template conventions, multimodal interfaces, context limits, and structured-output reliability. We study cross-model prompt adaptation: given a prompt program validated on a source model, produce a target-model prompt that preserves a semantic contract and an interface contract under bounded regression …
(A hedged sketch of a bounded-regression acceptance check appears after this list.)
-
2026
We present a systematic method for pruning edges from causal graphs by leveraging tiered knowledge. We characterize conditions under which edges can be removed from a causal graph while preserving the identifiability of (conditional) causal effects. This result enables causal identification on simplified graphs that are substantially smaller than the original graphs. The approach is particularly valuable …
(A hedged sketch of tier-based edge pruning appears after this list.)
-
2026
Gradient orthogonalization is a simple strategy that shows great utility in speeding up gradient descent. The Muon optimizer (Jordan et al., 2024b) combines gradient orthogonalization with first-order momentum and achieves significant improvement in data efficiency over Adam/AdamW (Loshchilov & Hutter, 2019a) for language model training. However, when using model parallelism, gradient orthogonalization …
(A hedged sketch of an orthogonalized, Muon-style update appears after this list.)
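The EarthVision abstract above frames building outline extraction as next-coordinate prediction. As a minimal illustration of what decoding a polygon one vertex at a time looks like, here is a hedged Python sketch; the `predict_next_vertex` interface and the toy predictor are hypothetical stand-ins, not the paper's model.

```python
# Hedged sketch: autoregressive "next-coordinate" decoding of a building
# outline as a sequence of polygon vertices. The predictor interface is a
# hypothetical stand-in, not the paper's architecture.
from typing import Callable, List, Optional, Tuple

Vertex = Tuple[int, int]  # pixel coordinates on a quantized grid

def decode_outline(
    predict_next_vertex: Callable[[List[Vertex]], Optional[Vertex]],
    max_vertices: int = 64,
) -> List[Vertex]:
    """Greedily emit vertices until the model signals end-of-polygon (None)."""
    outline: List[Vertex] = []
    for _ in range(max_vertices):
        nxt = predict_next_vertex(outline)  # conditioned on the vertices so far
        if nxt is None:                     # end-of-sequence token
            break
        outline.append(nxt)
    return outline

# Toy stand-in predictor: traces a fixed rectangle, then stops.
def toy_predictor(prefix: List[Vertex]) -> Optional[Vertex]:
    square = [(10, 10), (10, 50), (50, 50), (50, 10)]
    return square[len(prefix)] if len(prefix) < len(square) else None

print(decode_outline(toy_predictor))  # [(10, 10), (10, 50), (50, 50), (50, 10)]
```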
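The cross-model prompt adaptation abstract describes preserving a semantic contract and an interface contract under bounded regression. The sketch below shows one plausible acceptance check built from that description; `run_model`, `parses_ok`, and `semantic_score` are hypothetical helpers, and the candidate loop is not the paper's algorithm.

```python
# Hedged sketch of the acceptance check implied by "bounded regression":
# a candidate target-model prompt is kept only if every output satisfies the
# interface contract (e.g., parses as the required structured format) and the
# semantic score drops by at most `epsilon` versus the source-model baseline.
from typing import Callable, List, Optional

def adapt_prompt(
    candidates: List[str],                        # rewritten prompts for the target model
    eval_inputs: List[str],                       # validation cases from the deployed system
    run_model: Callable[[str, str], str],         # (prompt, input) -> target-model output
    parses_ok: Callable[[str], bool],             # interface contract check
    semantic_score: Callable[[str, str], float],  # (input, output) -> quality in [0, 1]
    baseline: float,                              # source-model score on eval_inputs
    epsilon: float = 0.02,                        # allowed regression
) -> Optional[str]:
    for prompt in candidates:
        outputs = [run_model(prompt, x) for x in eval_inputs]
        if not all(parses_ok(o) for o in outputs):
            continue  # interface contract violated; reject this candidate
        score = sum(semantic_score(x, o) for x, o in zip(eval_inputs, outputs)) / len(eval_inputs)
        if score >= baseline - epsilon:
            return prompt  # semantic contract preserved within the regression bound
    return None  # no candidate met both contracts
```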
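The causal-graph abstract leverages tiered background knowledge to prune edges. The following sketch only illustrates the tier-based pruning idea (edges pointing from a later tier back into an earlier one are dropped); the paper's conditions for preserving identifiability of (conditional) causal effects are not reproduced here.

```python
# Hedged sketch: pruning edges of a causal graph using "tiered" background
# knowledge, i.e. an ordered grouping of variables such that causes cannot
# come after their effects.
import networkx as nx

def prune_with_tiers(g: nx.DiGraph, tier: dict) -> nx.DiGraph:
    """Remove edges that point from a later tier back into an earlier tier."""
    pruned = g.copy()
    for u, v in list(pruned.edges()):
        if tier[u] > tier[v]:          # edge contradicts the tier ordering
            pruned.remove_edge(u, v)
    return pruned

g = nx.DiGraph([("A", "B"), ("B", "C"), ("C", "A")])  # toy graph with one "backward" edge
tiers = {"A": 0, "B": 1, "C": 2}                       # A precedes B precedes C
print(list(prune_with_tiers(g, tiers).edges()))        # [('A', 'B'), ('B', 'C')]
```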
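The last abstract concerns gradient orthogonalization as used by Muon. As background, here is a hedged NumPy sketch of an orthogonalized update: the momentum matrix is replaced by its orthogonal factor UVᵀ from an SVD before the step. Muon itself approximates this with Newton-Schulz iterations, and this sketch ignores the model-parallel setting the paper addresses; the learning-rate and momentum handling is simplified.

```python
# Hedged sketch of a Muon-style update for a 2-D weight matrix:
# accumulate momentum, orthogonalize it, then take a step in that direction.
import numpy as np

def orthogonalize(mat: np.ndarray) -> np.ndarray:
    """Return the nearest (semi-)orthogonal matrix to `mat` via SVD (U V^T)."""
    u, _, vt = np.linalg.svd(mat, full_matrices=False)
    return u @ vt

def muon_like_step(weight: np.ndarray, grad: np.ndarray, momentum: np.ndarray,
                   lr: float = 0.02, beta: float = 0.95):
    """One simplified update: momentum accumulation, then an orthogonalized step."""
    momentum = beta * momentum + grad
    update = orthogonalize(momentum)
    return weight - lr * update, momentum

# Toy usage with random data.
w = np.random.randn(64, 32)
m = np.zeros_like(w)
g = np.random.randn(*w.shape)
w, m = muon_like_step(w, g, m)
```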
Collaborations
Whether you're a faculty member or student, there are a number of ways you can engage with Amazon.
View all