Applications of large-scale knowledge graphs in e-commerce platforms can improve customers' shopping experiences. While existing e-commerce knowledge graphs (KGs) integrate a large volume of concepts or product attributes, they fail to discover user intentions, leaving out important information about how people think, behave, and interact with the surrounding world.
In this work, we present COSMO, a scalable system to mine user-centric commonsense knowledge from behavior data and construct industry-scale knowledge graphs to empower diverse online services. In particular, we describe a pipeline for collecting high-quality seed knowledge assertions that are distilled from large language models (LLMs) and further refined by critic classifiers trained over human-in-the-loop annotated data.
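The distill-then-filter idea can be sketched in a few lines. The functions below are toy stand-ins, not COSMO's components: `propose_assertions` plays the role of LLM distillation and `critic_score` stands in for the trained critic classifier, with an invented heuristic in place of a real model.

```python
# Hypothetical sketch of a distill-then-filter pipeline for seed knowledge
# assertions: an "LLM" proposes candidates, a "critic" keeps confident ones.

def propose_assertions(behavior_pair):
    """Stand-in for LLM distillation over a (query, purchased_item) record."""
    query, item = behavior_pair
    return [
        f"A user searching '{query}' may intend to buy {item}.",
        f"'{query}' is a synonym of {item}.",  # a noisy candidate
    ]

def critic_score(assertion):
    """Stand-in for a critic classifier; returns a plausibility score in [0, 1].
    Here a toy keyword heuristic, not a trained model."""
    return 0.9 if "intend" in assertion else 0.2

def mine_seed_knowledge(behavior_data, threshold=0.5):
    kept = []
    for pair in behavior_data:
        for cand in propose_assertions(pair):
            if critic_score(cand) >= threshold:
                kept.append(cand)
    return kept

seeds = mine_seed_knowledge([("camping mug", "enamel cup")])
print(seeds)  # only the high-scoring assertion survives the critic
```

In the real system the critic is trained on human-in-the-loop annotations, so the threshold and scores come from a learned model rather than a rule.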
Amazon MemoryDB for Redis is a database service designed for 11 9s of durability with in-memory performance. In this paper, we describe the architecture of MemoryDB and how we leverage open-source Redis, a popular data structure store, to build an enterprise-grade cloud database. MemoryDB offloads durability concerns to a separate low-latency, durable transaction log service, allowing us to scale performance, availability, and durability independently from the in-memory execution engine. We describe how, using this architecture, we are able to remain fully compatible with Redis, while providing single-digit millisecond write and microsecond-scale read latencies, strong consistency, and high availability. MemoryDB launched in 2021.
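The core architectural move, acknowledging writes only after they reach a separate durable log and recovering by replaying that log, can be illustrated with a minimal key-value store. This is a sketch of the idea only, not MemoryDB's implementation; the log here is a Python list standing in for the low-latency transaction log service.

```python
# Minimal sketch: durability is offloaded to an append-only log, and a fresh
# node rebuilds its in-memory state by replaying that log.

class DurableKV:
    def __init__(self, log=None):
        self.log = log if log is not None else []  # stand-in for the log service
        self.mem = {}                              # in-memory execution engine
        for op, key, value in self.log:            # recovery: replay the log
            if op == "SET":
                self.mem[key] = value

    def set(self, key, value):
        self.log.append(("SET", key, value))  # durable append happens first...
        self.mem[key] = value                 # ...then the in-memory apply

    def get(self, key):
        return self.mem.get(key)

node = DurableKV()
node.set("user:1", "alice")
replica = DurableKV(log=list(node.log))  # a new node recovers from the log
print(replica.get("user:1"))
```

Because durability lives entirely in the log, the in-memory engine can be scaled or replaced independently, which is the separation of concerns the abstract describes.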
We introduce a text-to-speech (TTS) model called BASE TTS, which stands for Big Adaptive Streamable TTS with Emergent abilities. BASE TTS is the largest TTS model to date, trained on 100K hours of public-domain speech data, achieving a new state of the art in speech naturalness. It deploys a one-billion-parameter autoregressive transformer that converts raw texts into discrete codes ("speechcodes"), followed by a convolution-based decoder that converts these speechcodes into waveforms in an incremental, streamable manner. Further, our speechcodes are built using a novel speech tokenization technique that features speaker ID disentanglement and compression with byte-pair encoding. Echoing the widely reported "emergent abilities" of large language models when trained on increasing volumes of data, we show that BASE TTS variants built with 10K+ hours and 500M+ parameters begin to demonstrate natural prosody on textually complex sentences.
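Byte-pair encoding over discrete speechcodes can be illustrated with one merge step: the most frequent adjacent pair of codes becomes a new code, shortening the sequence. The codes and the merge rule below are illustrative only and are not BASE TTS's actual tokenizer.

```python
from collections import Counter

# Toy BPE step on integer speechcodes: merge the most frequent adjacent pair.

def most_frequent_pair(codes):
    pairs = Counter(zip(codes, codes[1:]))
    return pairs.most_common(1)[0][0]

def merge_pair(codes, pair, new_code):
    out, i = [], 0
    while i < len(codes):
        if i + 1 < len(codes) and (codes[i], codes[i + 1]) == pair:
            out.append(new_code)  # replace the pair with the merged code
            i += 2
        else:
            out.append(codes[i])
            i += 1
    return out

codes = [7, 3, 7, 3, 5, 7, 3]
pair = most_frequent_pair(codes)                 # (7, 3) occurs three times
compressed = merge_pair(codes, pair, new_code=100)
print(compressed)                                # [100, 100, 5, 100]
```

Repeating this step builds a vocabulary of merged codes, compressing the code stream the autoregressive transformer has to model.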
Amazon Aurora Serverless is an on-demand, autoscaling configuration for Amazon Aurora with full MySQL and PostgreSQL compatibility. It automatically offers capacity scale-up/-down (i.e., vertical scaling) based on a customer database application’s needs. In this manner, it relieves customers of the need to explicitly manage database capacity; they need only specify minimum and maximum bounds using a simple-to-understand multi-resource capacity abstraction called the Aurora Capacity Unit (ACU). For customers with time-varying workloads, it offers cost savings compared to provisioned Aurora or other alternatives due to its agile and granular scaling and its usage-based charging model.
This paper describes the key ideas underlying Aurora Serverless’s resource management. To help meet its goals, Aurora Serverless adapts and fine-tunes well-established ideas related to resource oversubscription; reactive control informed by recent measurements; distributed and hierarchical decision-making; and innovations in the DB engine, OS, and hypervisor for efficiency. Perhaps the most challenging goal is to offer a consistent resource elasticity experience while operating hosts at high degrees of utilization. Aurora Serverless implements several novel ideas for striking a balance between these opposing needs.
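The reactive-control idea, scaling toward recently measured demand while respecting the customer's ACU bounds, can be sketched as a one-line controller. The headroom factor and the policy itself are invented for illustration; Aurora Serverless's actual decision-making is distributed and considerably more sophisticated.

```python
# Illustrative reactive scaling rule: follow recent demand plus headroom,
# clamped to the customer-specified [min_acu, max_acu] bounds.

def next_capacity(recent_demand_acu, min_acu, max_acu, headroom=1.25):
    """Return the next capacity setting in ACUs (constants are made up)."""
    target = recent_demand_acu * headroom
    return max(min_acu, min(max_acu, target))

print(next_capacity(10.0, min_acu=0.5, max_acu=8.0))  # demand exceeds bounds: 8.0
print(next_capacity(1.0, min_acu=0.5, max_acu=8.0))   # scales down to 1.25
```

The clamping step is where the tension described above shows up: the controller wants to track demand closely, but the bounds and host-level utilization targets constrain how far it may move.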
Debugging a performance issue in databases is notoriously hard. Wouldn’t it be convenient if there were an oracle or a copilot for every database system, which users could query in natural language — "What’s wrong?", or even better, "How do we fix it?" Large language models (LLMs) would seem to be a natural surrogate for such an oracle given their ability to answer a wide range of questions by efficiently encoding a vast amount of knowledge from, e.g., a major chunk of the internet. However, prompting LLMs with database performance queries often results in "technically correct" but highly "vague" or "generic" recommendations that experienced database engineers (DBEs) typically find useless or untrustworthy.
In this work we propose Panda, a framework to provide context grounding to pretrained LLMs in order to generate more "useful" and "in-context" troubleshooting recommendations. Panda draws inspiration from the way experienced DBEs perform debugging and puts a system in place with the components necessary to robustly deploy pretrained LLMs in production for debugging.
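The grounding step can be sketched as prompt assembly: rather than sending a bare question to the LLM, the framework first gathers the telemetry a DBE would consult and folds it into the prompt. The metric names and template below are invented for illustration and are not Panda's actual interface.

```python
# Hypothetical sketch of context grounding: augment the user's question with
# recent telemetry before it reaches a pretrained LLM.

def build_grounded_prompt(question, telemetry):
    context = "\n".join(f"- {name}: {value}" for name, value in telemetry.items())
    return (
        "You are assisting with a database performance issue.\n"
        f"Recent telemetry:\n{context}\n"
        f"Question: {question}\n"
        "Ground your recommendation in the telemetry above."
    )

prompt = build_grounded_prompt(
    "Why did p99 latency spike at 14:00?",
    {"cpu_util": "94%", "buffer_cache_hit": "61%", "active_sessions": 312},
)
print(prompt)
```

With the telemetry in context, the model can point at the saturated CPU or cold cache rather than emitting the generic advice the abstract criticizes.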
We present Amazon Nova, a new generation of state-of-the-art foundation models that deliver frontier intelligence and industry-leading price performance. Amazon Nova Pro is a highly capable multimodal model with the best combination of accuracy, speed, and cost for a wide range of tasks. Amazon Nova Lite is a low-cost multimodal model that is lightning fast for processing images, video, documents and text. Amazon Nova Micro is a text-only model that delivers our lowest-latency responses at very low cost. Amazon Nova Canvas is an image generation model that creates professional-grade images with rich customization controls. Amazon Nova Reel is a video generation model offering high-quality outputs, customization, and motion control. Our models were built responsibly and with a commitment to customer trust, security, and reliability. We report benchmarking results for core capabilities, agentic performance, long context, functional adaptation, runtime performance, and human evaluation.
Database research and development is heavily influenced by benchmarks, such as the industry-standard TPC-H and TPC-DS for analytical systems. However, these 20-year-old benchmarks capture neither how databases are deployed nor what workloads modern cloud data warehouse systems face. In this paper, we summarize well-known, confirm suspected, and unearth novel discrepancies between TPC-H/DS and actual workloads using empirical data. We base our analysis on telemetry from Amazon Redshift, one of the largest cloud data warehouse deployments. Among other insights, we show how write-heavy data pipelines are prominent, workloads vary over time (in both load and type), queries are repetitive, and most properties of queries or workloads experience very long-tailed distributions. We conclude that data warehouse benchmarks, just like database systems, need to become more holistic and stop focusing solely on query engine performance. Finally, we publish a dataset containing query statistics for 200 randomly selected Redshift serverless and provisioned instances (each) over a three-month period, as a basis for building more-realistic benchmarks.
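Two of the reported properties, query repetitiveness and long-tailed frequency distributions, are easy to quantify from a query log. The log below is synthetic, invented for illustration; it is not Redshift telemetry.

```python
from collections import Counter

# Toy measurement of repetitiveness: a few query templates dominate the log,
# giving the long-tailed distribution the paper reports.

log = (["SELECT * FROM sales WHERE day = ?"] * 90
       + ["SELECT * FROM returns WHERE id = ?"] * 9
       + ["SELECT count(*) FROM audit"])

counts = Counter(log)
top_share = counts.most_common(1)[0][1] / len(log)  # share of hottest template
print(f"distinct templates: {len(counts)}, top template share: {top_share:.0%}")
```

A synthetic benchmark that samples templates uniformly would look nothing like this distribution, which is the kind of discrepancy the paper documents.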
We propose a simple yet robust framework to nowcast recession risk at a monthly frequency in both the United States and the Euro Area. Our nowcast leverages both macroeconomic and financial conditions and is available the first business day after the reference month closes. In particular, we argue that financial conditions are not only useful for predicting future downturns — as is emphasized in the existing literature — but also for distinguishing between expansions and downturns as they unfold. We then connect our recession risk nowcast with growth at risk by drawing on the literature on distributional regressions and quantile regressions. Finally, we benchmark our nowcast against the Survey of Professional Forecasters (SPF) and show that, while both have a similar ability to identify downturns, our nowcast is more accurate in identifying periods of expansion.
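Quantile regression, the tool used here to connect recession risk to growth at risk, fits a conditional quantile by minimizing the pinball loss rather than squared error. The worked example below uses made-up numbers purely to show the loss's asymmetry.

```python
# Pinball (quantile) loss: the objective whose minimizer is the tau-th
# conditional quantile. For a lower-tail quantile, over-prediction is
# penalized far more heavily than under-prediction.

def pinball_loss(y, y_hat, tau):
    diff = y - y_hat
    return tau * diff if diff >= 0 else (tau - 1) * diff

# For the 5th percentile (tau = 0.05), missing low costs 19x less than
# missing high, which pulls the fitted line toward the lower tail of growth.
print(pinball_loss(y=2.0, y_hat=1.0, tau=0.05))  # under-prediction: 0.05
print(pinball_loss(y=0.0, y_hat=1.0, tau=0.05))  # over-prediction: 0.95
```

This asymmetry is why a tau near zero yields a "growth at risk" estimate: the fit tracks the worst plausible outcomes rather than the average.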
Anomaly detection on graphs focuses on identifying irregular patterns or nodes within graph-structured data that deviate significantly from the norm. The technique is important due to its wide applicability in fields such as spam detection, anti-money-laundering, and network security. Two challenges to the application of anomaly detection on graphs are label imbalance and data insufficiency. The recent proliferation of generative models, especially diffusion models, suggests a solution. In this paper, we introduce a graph diffusion model in latent space, designed to alleviate the label imbalance problem. The proposed model is capable of multitask generation of graph structures and node features and demonstrates conditional generative capabilities, mitigating label imbalance by producing only positive examples. We apply the diffusion model to both homogeneous graphs and heterogeneous graphs. Through extensive experiments, we demonstrate that our proposed method offers notable improvements over conventional techniques.
Recent breakthroughs in large language modeling have facilitated rigorous exploration of their application in diverse tasks related to tabular-data modeling, such as prediction, tabular-data synthesis, question answering, and table understanding. Each task presents unique challenges and opportunities. However, there has been no comprehensive review that summarizes and compares the key techniques, metrics, datasets, models, and optimization approaches in this research domain. This survey aims to close this gap by consolidating recent progress in these areas, offering a thorough taxonomy of the datasets, metrics, and methodologies utilized.