New contrastive-learning methods for better data representation

New loss functions enable better approximation of the optimal loss and more-useful representations of multimodal data.

Many recent advances in artificial intelligence are the result of representation learning: a machine learning model learns to represent data items as vectors in a multidimensional space, where geometric relationships between vectors correspond to semantic relationships between items.

The M5 team at Amazon strives to construct general-purpose semantic representations of data related to the Amazon Store — product descriptions, queries, reviews, and more — that can be employed by machine learning (ML) systems throughout Amazon. Our approach involves leveraging all accessible data for each entity, often spanning multiple modalities.

One of the most successful ways to produce general-purpose representations is through contrastive learning, in which a model is trained on pairs of inputs, which are either positive (similar inputs/products) or negative (dissimilar inputs/products). The model learns to pull positive examples together and push negative examples apart.


In a pair of recent papers, M5 researchers have made substantial contributions to the theory and practice of contrastive learning. In “Why do we need large batch sizes in contrastive learning? A gradient-bias perspective”, presented at the 2022 Neural Information Processing Systems (NeurIPS) conference, we propose a new contrastive-learning loss function that enables models to converge on useful representations with lower memory cost and less training data.

And in “Understanding and constructing latent modality structures in multi-modal representation learning”, presented at the 2023 Computer Vision and Pattern Recognition conference (CVPR), we propose geometric constraints on the representations of different modes of the same data item — say, image and text — that are more useful for downstream tasks than simply trying to resolve both representations to the same point in the representational space.

Do we need large batch sizes in contrastive learning?

In contrast with standard ML methods, contrastive learning typically requires very large batch sizes to achieve good performance: several popular models, for instance, are trained with batches of tens of thousands of examples, which significantly increases the memory overhead, and reducing the batch size can impair performance. In our NeurIPS paper, we attempt to understand this phenomenon and to propose techniques for mitigating it.


Part of the appeal of contrastive learning is that it’s unsupervised, meaning it doesn’t require data annotation. Positive pairs can be generated by mathematically transforming an “anchor sample” and pairing the transformed version with the original; negative pairs can be generated by pairing an anchor sample with transformed versions of other anchor samples. With image data, a transformation might involve re-cropping, reversing, or distorting the colors of the anchor sample; with textual data, a transformation might involve substituting synonyms for the words in a sentence.
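
For image data, such a transformation pipeline is straightforward to express with standard libraries. The sketch below uses torchvision; the particular transformations and parameters are illustrative choices, not the ones used in our papers.

```python
import torchvision.transforms as T

# Illustrative augmentation pipeline: two random "views" of the same
# anchor image form a positive pair; views of different anchors form
# negative pairs. Parameters here are arbitrary examples.
augment = T.Compose([
    T.RandomResizedCrop(224),           # re-cropping
    T.RandomHorizontalFlip(p=0.5),      # reversing (mirroring)
    T.ColorJitter(0.4, 0.4, 0.4, 0.1),  # color distortion
    T.ToTensor(),
])

# Given a PIL image `anchor`, (augment(anchor), augment(anchor)) is a
# positive pair; pairing augment(anchor) with augmented views of other
# anchor images yields negative pairs.
```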

Given a measure of similarity between vectors in the representational space, the standard loss function for contrastive learning involves a ratio whose numerator includes the similarity between an anchor sample and one of its transformations; the denominator includes the sum of the similarities between the anchor sample and all possible negative samples. The goal of training is to maximize that ratio.
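
In its widely used InfoNCE form (a representative formulation, not necessarily the exact loss in either paper), the objective for an anchor representation $z$ with positive $z^{+}$, negative set $\mathcal{N}$, similarity measure $\mathrm{sim}(\cdot,\cdot)$, and temperature $\tau$ is

$$\mathcal{L}(z) = -\log \frac{\exp\big(\mathrm{sim}(z, z^{+})/\tau\big)}{\sum_{z^{-} \in \mathcal{N}} \exp\big(\mathrm{sim}(z, z^{-})/\tau\big)},$$

so minimizing this negative log maximizes the ratio.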

In principle, given the possibility of applying transformations to negative samples, “all possible negative samples” could describe an infinite set. In practice, contrastive learning typically just relies on the negative examples available in the training batch. Hence the need for large batch sizes — to approximate an infinite sum.
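
A minimal PyTorch-style sketch of this in-batch approximation (illustrative, not the implementation from either paper):

```python
import torch
import torch.nn.functional as F

def in_batch_infonce(anchors, positives, temperature=0.1):
    """InfoNCE with in-batch negatives (illustrative sketch).

    anchors, positives: (batch, dim) embeddings; row i of `positives`
    is the positive for row i of `anchors`, and every other row in the
    batch serves as a negative -- a finite stand-in for the infinite
    set of possible negatives.
    """
    a = F.normalize(anchors, dim=-1)
    p = F.normalize(positives, dim=-1)
    logits = a @ p.t() / temperature   # (batch, batch) similarity scores
    targets = torch.arange(a.size(0), device=a.device)
    # Cross-entropy with the diagonal as the target maximizes each
    # anchor's similarity to its own positive relative to the batch.
    return F.cross_entropy(logits, targets)
```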

Figure: The contrastive-learning framework. Approximating an infinite sum with the samples in a finite minibatch of training data can introduce gradient bias.

If the distribution of minibatch samples differs from the distribution of possible negatives, however, this approximation can bias the model. One difficulty in correcting the bias is that, because the loss function contrasts each positive pair with all possible negatives at once, in a ratio, it cannot be decomposed into a sum of sub-losses.

We address the decomposability problem using Bayesian augmentation. The general approach is that, for each anchor sample, we create a random auxiliary variable, which can be thought of as a weight applied to the anchor sample’s similarity scores. Using an identity based on the gamma function, we can show that the auxiliary variable follows a gamma distribution, which is easy to sample. As a consequence, we can rewrite the loss in an exponential rather than a fractional form, making it decomposable.
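
To make the mechanism concrete, here is a hedged sketch of how such an identity can decompose a ratio; it conveys the general idea rather than the paper's exact derivation. If the denominator of the contrastive ratio is $B = \sum_j b_j$, the sum over negative-pair terms, then for $B > 0$,

$$\frac{1}{B} = \int_0^\infty e^{-tB}\,dt,$$

which is the normalization of a gamma (here, exponential) distribution with rate $B$. Introducing $t$ as a random auxiliary variable and conditioning on a sampled value turns the $1/B$ factor into $e^{-tB} = \prod_j e^{-t b_j}$, whose logarithm is a sum of per-negative terms: an exponential-form loss that decomposes.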

During training, we begin by sampling the auxiliary variables for the current batch of data from a gamma distribution, giving us the weights on the similarity scores for all the anchor samples. Conditioned on the sampled values, we then apply maximum-likelihood estimation to optimize the parameters of the model, taking into account the sampled weights on the similarity scores from the first step. We then repeat this process for the entire dataset, summing a sequence of (weighted) sub-losses to produce a cumulative loss. In our paper, we show that this procedure converges toward the expected loss for the original contrastive-loss function, with its infinite sum in the denominator.
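
In PyTorch-style pseudocode, one alternating step might look like the following. This follows the exponential form sketched above rather than the paper's precise estimator; `model.similarity` and the exact weighted sub-loss are our illustrative assumptions.

```python
import torch

def train_step(model, anchors, positives, negatives, optimizer, tau=0.1):
    """One alternating step: sample auxiliary weights, then take a
    maximum-likelihood (gradient) step on the decomposable weighted loss.
    Illustrative sketch; `model.similarity` is a hypothetical API."""
    pos = model.similarity(anchors, positives) / tau   # (batch,)
    neg = model.similarity(anchors, negatives) / tau   # (batch, num_neg)
    denom = neg.exp().sum(dim=-1)                      # per-anchor sum B
    # Step 1: sample per-anchor auxiliary weights t ~ Gamma(1, B).
    t = torch.distributions.Gamma(
        torch.ones_like(denom), denom.detach()).sample()
    # Step 2: conditioned on t, the loss takes the decomposable
    # exponential form t * B - log(numerator), averaged over the batch.
    loss = (t * denom - pos).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```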

Figure: Results of 10 training runs on synthetic data with added noise, comparing a model trained with our decomposable loss function (red) to one trained with the conventional loss function (blue). With our loss, the model consistently converged to the optimum (1.0), while with the conventional loss, it never did.

We evaluate our approach through a number of experiments. In one, we used synthetic data, into which we injected noise to simulate bias. Then we used both our loss and the conventional loss function to train a model 10 times, with different initialization values. At heavy noise levels, the model trained with the conventional loss failed to converge, while ours consistently converged to the optimum.

We also evaluated the models on a variety of downstream tasks, including zero-/few-shot image classification and image/text retrieval. Our approach showed significant performance improvement over state-of-the-art baseline methods.

What geometries work best for multimodal representation matching?

At M5, we are building scalable models that can handle multimodal data — for instance, multilingual models that translate between product descriptions in different languages or multi-entity models that jointly model different images of the same product. Contrastive learning is a promising method for building such models: data in different modalities that are associated with the same products can be treated as positive pairs, and contrastive learning pulls them together in the representational space.


We theoretically investigated whether the standard contrastive-learning framework is optimal in terms of the prediction error rate on downstream tasks, and the surprising answer is no. In our CVPR paper, we prove that if the information gap between two modalities is large — that is, if you can’t infer much about one modality from the other — then the best prediction error we can hope to achieve using standard contrastive-learning representations is larger than the error we can achieve by simply training a machine learning model directly on data in a single modality.

This makes some intuitive sense. Ideally, contrastive learning would pull the different modalities so tightly together that they would essentially resolve to a single point in the representational space. But of course, the reason to use multimodal representations for downstream tasks is that each modality may capture useful information that the other does not. Collapsing the different modalities’ representations together neutralizes this advantage.

Consequently, in our CVPR paper, we explore different geometrical relationships in the representational space that can establish correlations between multimodal data without sacrificing information specific to each mode. We propose three general approaches to constructing modality structures in the representational space, suited to intramodal representation, intermodal representation, and a combination of the two:

  1. a deep feature separation loss for intramodality regularization, which uses two types of neural network components to separate different modality information: one component captures information that’s shared between modalities (tuned according to the standard contrastive-learning loss), and the other, which is orthogonal to the first, captures information unique to the modality (sketched in code below the figure);
  2. a “Brownian-bridge” loss for intermodality regularization, which uses Brownian motion to model trajectories between the representation of one modality (say, text) and that of the other (say, an image) and constrains representations of augmented data to lie along one of those paths; and
  3. a geometric-consistency loss for both intra- and intermodality regularization, which enforces symmetry in the geometric relationships between representations in one modality and the corresponding representations in the other modality, while simultaneously enforcing symmetries in cross-modal geometric relationships.
Figure: Three types of modality structures that can improve modality representation learning for downstream tasks. (1) With deep feature separation, a model produces two orthogonal vectors for each modality, one that encodes information shared across modalities and one that encodes mode-specific information. (2) Brownian bridges use Brownian motion to generate trajectories between representations of data in different modes, defining a subspace in which the representations of augmented data are induced to lie. (3) Geometric consistency enforces symmetries in the relationships between data representations, both within modes (orange-orange and blue-blue) and across modes (blue-orange).
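
As a concrete illustration of the first idea, the intramodality feature-separation regularizer can be sketched as an orthogonality penalty between the shared and modality-specific components. This is our illustrative rendering, not the implementation from the paper.

```python
import torch
import torch.nn.functional as F

def feature_separation_penalty(shared, specific):
    """Illustrative deep-feature-separation regularizer.

    shared:   (batch, dim) embeddings trained with the contrastive loss
              to capture cross-modality information.
    specific: (batch, dim) embeddings meant to capture modality-unique
              information; the penalty pushes them to be orthogonal
              to the shared component.
    """
    s = F.normalize(shared, dim=-1)
    u = F.normalize(specific, dim=-1)
    # Squared cosine similarity between the two components; zero when
    # the modality-specific vector is orthogonal to the shared one.
    return (s * u).sum(dim=-1).pow(2).mean()
```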

We conducted extensive experiments on two popular multimodal representation-learning frameworks: the CLIP-based two-tower model and the ALBEF-based fusion model. We tested our method on a variety of tasks, including zero-/few-shot image classification, image-text retrieval, visual question answering, visual reasoning, and visual entailment. It achieved consistent improvements over existing methods, demonstrating the effectiveness and generalizability of our approach to multimodal representation learning.

Going forward

Our NeurIPS and CVPR papers represent just two of the interesting projects under way in the M5 team. M5 is pursuing much more research on multimodal learning, including generative models for images, videos, and text (e.g., Stable Diffusion, DreamBooth) that enable data synthesis and representation learning, as well as the training and application of large language models to enhance customers’ shopping experiences. We expect to report on more research highlights in the near future.
