June 17 - 21, 2024
Seattle, Washington
CVPR 2024

Overview

The IEEE/CVF Computer Vision and Pattern Recognition Conference (CVPR) is the premier annual computer vision event, comprising the main conference and several co-located workshops and short courses. On June 19th, Swami Sivasubramanian, AWS VP of AI and Data, will deliver an expo track keynote, 'Computer vision at scale: Driving customer innovation and industry adoption'. Learn more about Amazon's accepted publications in our paper guide.

Sponsorship Details

Organizing committee

Accepted publications

Workshops and events

CVPR 2024 Event: Diversity and Inclusion for Everyone
June 19, 7:00 PM - 9:00 PM PDT
Amazon is proud to be a sponsor of the CVPR 2024 Social Event “Diversity and Inclusion for Everyone”, hosted by the organizers of the Women in Computer Vision (WiCV) and LatinX in Computer Vision workshops.
CVPR 2024 Workshop on Urban Scene Modeling: Where Vision Meets Photogrammetry and Graphics
June 17
Rapid urbanization poses social and environmental challenges. Addressing these issues effectively requires access to accurate and up-to-date 3D building models, obtained promptly and cost-effectively. Urban modeling is an interdisciplinary topic among computer vision, graphics, and photogrammetry. The demand for automated interpretation of scene geometry and semantics has surged due to various applications, including autonomous navigation, augmented reality, smart cities, and digital twins. As a result, substantial research effort has been dedicated to urban scene modeling within the computer vision and graphics communities, with a particular focus on photogrammetry, which has coped with urban modeling challenges for decades. This workshop is intended to bring researchers from these communities together. Through invited talks, spotlight presentations, a workshop challenge, and a poster session, it will increase interdisciplinary interaction and collaboration among photogrammetry, computer vision and graphics. We also solicit original contributions in the areas related to urban scene modeling.

Website: https://usm3d.github.io/
CVPR 2024 Workshop on Virtual Try-On
June 17
Featured Amazon keynote speakers: Ming Lin, Amazon Scholar; Sunil Hadap, Principal Applied Scientist

Website: https://vto-cvpr24.github.io/
CVPR 2024 Workshop on the Evaluation of Generative Foundation Models
June 18
The landscape of artificial intelligence is being transformed by the advent of Generative Foundation Models (GenFMs), such as Large Language Models (LLMs) and diffusion models. GenFMs offer unprecedented opportunities to enrich human lives and transform industries. However, they also pose significant challenges, including the generation of factually incorrect or biased information, which might be potentially harmful or misleading. With the emergence of multimodal GenFMs, which leverage and generate content in an increasing number of modalities, these challenges are set to become even more complex. This emphasizes the urgent need for rigorous and effective evaluation methodologies.

The 1st Workshop on Evaluation for Generative Foundation Models at CVPR 2024 aims to build a forum to discuss ongoing efforts in industry and academia, share best practices, and engage the community in working towards more reliable and scalable approaches for GenFMs evaluation.

Website: https://evgenfm.github.io/
CVPR 2024 Workshop on Fine-Grained Visual Categorization
June 18
CVPR 2024 Workshop on Generative Models for Computer Vision
June 18
CVPR 2024 Workshop on the GroceryVision Dataset @ RetailVision
June 18
CVPR 2024 Workshop on Learning with Limited Labelled Data for Image and Video Understanding
June 18
CVPR 2024 Workshop on Prompting in Vision
June 17
This workshop aims to provide a platform for pioneers in prompting for vision to share recent advancements, showcase novel techniques and applications, and discuss open research questions about how the strategic use of prompts can unlock new levels of adaptability and performance in computer vision.

Website: https://prompting-in-vision.github.io/index_cvpr24.html
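To make the prompting idea concrete (this is an illustrative sketch, not code from the workshop; the embeddings below are hypothetical stand-ins for a real vision-language encoder such as CLIP), zero-shot classification scores a single image embedding against a set of text-prompt embeddings and picks the best-matching prompt:

```python
import math

def cosine(u, v):
    # Cosine similarity between two equal-length vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def classify(image_emb, prompt_embs):
    # Pick the prompt whose embedding is most similar to the image embedding.
    return max(prompt_embs, key=lambda p: cosine(image_emb, prompt_embs[p]))

# Hypothetical 3-d embeddings; a real encoder would produce these vectors.
prompts = {
    "a photo of a cat": [0.9, 0.1, 0.0],
    "a photo of a dog": [0.1, 0.9, 0.0],
}
print(classify([0.8, 0.2, 0.1], prompts))  # cat-like image embedding
```

Changing the prompt wording (for example, "a sketch of a cat") changes the text embeddings and therefore the decision boundary, which is the adaptability lever the workshop description refers to.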
CVPR 2024 Workshop on Open-Vocabulary 3D Scene Understanding
June 18
CVPR 2024 Workshop on Multimodal Learning and Applications
June 18
CVPR 2024 Workshop on RetailVision
June 18
The rapid development of computer vision and machine learning has caused a major disruption in the retail industry in recent years. In addition to the rise of online shopping, traditional retailers have also quickly embraced AI-related technology solutions at the physical-store level. Following the introduction of computer vision to the world of retail, a new set of challenges emerged. These challenges have been further expanded by the introduction of image and video generation capabilities.

The physical domain exhibits challenges such as detecting shopper and product interactions, fine-grained recognition of visually similar products, and new products introduced on a daily basis. The online domain presents similar challenges, with its own twist: product search and recognition is performed over more than 100,000 classes, each including images, textual captions, and free text entered by users during search. In addition to discriminative machine learning, image generation is now also being used to generate product images and power virtual try-on.

All of these challenges are shared by different companies in the field, and are also at the heart of the computer vision community. This workshop aims to present the progress in these challenges and encourage the forming of a community for retail computer vision.

Website: https://retailvisionworkshop.github.io/
CVPR 2024 Workshop on Responsible Generative AI
June 18
The Responsible Generative AI (ReGenAI) workshop aims to bring together researchers, practitioners, and industry leaders working at the intersection of generative AI, data, ethics, privacy, and regulation, with the goal of discussing existing concerns and brainstorming possible avenues forward to ensure the responsible progress of generative AI. We hope that the topics addressed in this workshop will constitute a crucial step towards ensuring a positive experience with generative AI for everyone.

Website: https://sites.google.com/view/cvpr-responsible-genai/home
CVPR 2024 Workshop on Visual Odometry and Computer Vision
June 18
Visual odometry and localization have attracted increasing interest in recent years, especially given their extensive applications in autonomous driving, augmented reality, and mobile computing. With the location information obtained through odometry, location-based services are also rapidly emerging. In this workshop, we focus particularly on mobile platform applications.

Website: https://sites.google.com/view/vocvalc2024
CVPR 2024 Workshop on What is Next in Multimodal Foundation Models?
June 18
CVPR 2024 Demo: Amazon Lens & View in Your Room
June 20 - June 21
June 20-21, 11-11:30am

Amazon Lens is a feature that allows customers to search for products using photos or a live camera view.

View in Your Room allows customers to preview how products like furniture would look in their home using augmented reality.
Both features are available in the Amazon Mobile Shopping App today for anyone to use. We have videos showcasing these features available to show on conference displays, and team members can guide conference attendees to try the features out on their own devices.
CVPR 2024 Demo: Amazon Dash Cart and Amazon One
June 19 - June 21
June 19: 11:30am-12:00pm, 2:30-3pm
June 20: 11:30am-12:00pm, 1-1:30pm, 2:30-3pm
June 21: 11:30am-12:00pm, 1-1:30pm

Learn how Amazon Dash Cart and Amazon One are helping customers save money, time, and effort when shopping for everyday groceries at scale, through computer vision and artificial intelligence! The Dash Cart is a smart cart that makes grocery trips faster and more personalized than ever. Find items quickly and easily. Add, remove, and weigh items right in your Dash Cart. When you're done shopping, skip the checkout line and roll out to your car. For more information, visit: https://aws.amazon.com/dash-cart/
CVPR 2024 Demo: Proteus
June 19 - June 21
June 19 12-12:30pm
June 21 12:30-1pm

Proteus is Amazon's first fully autonomous mobile robot. Historically, it’s been difficult to safely incorporate robotics where people are working in the same physical space as the robot. We believe Proteus will change that while remaining smart, safe, and collaborative.
CVPR 2024 Demo: Analyze data from AWS Databases with zero-ETL integrations
June 19 - June 20
June 19-20, 12:30-1:00pm

Making the most of your data often means using multiple AWS services. In this demo, learn about the zero-ETL integrations available between AWS Databases and AWS Analytics services, and how they remove the need for you to build and manage complex data pipelines. Then deep-dive with a demo on how you can build your own pipeline with the Amazon DynamoDB zero-ETL integration with Amazon OpenSearch Service.
CVPR 2024 Demo: Get started with GraphRAG on Amazon Neptune
June 19 - June 20
June 19, 11-11:30am
June 20, 1:30-2:00pm

Retrieval Augmented Generation (RAG) helps improve the accuracy of outputs from Large Language Models (LLMs) by retrieving information from authoritative, predetermined knowledge sources. However, baseline RAG may flounder when a query requires connecting disparate information or a higher-level understanding of large data sets. GraphRAG combines the power of knowledge graphs and RAG technology to improve your generative AI application’s ability to answer questions across data sets, summarize concepts across a broad corpus, and provide human-readable explanations of the results, thereby improving accuracy and reducing hallucinations. In this flash talk, learn how to use Amazon Neptune, our high-performance graph analytics and serverless database, to get started with GraphRAG and improve the accuracy of your generative AI applications.
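To illustrate why a graph helps where flat retrieval struggles (a minimal in-memory sketch only; the triples are made up and this is not the Amazon Neptune API), graph-based retrieval can follow edges outward from an entity mentioned in a query, collecting multi-hop facts that a single similarity lookup would miss:

```python
# Toy knowledge graph as (subject, relation, object) triples.
TRIPLES = [
    ("Acme", "acquired", "Widgets Inc"),
    ("Widgets Inc", "headquartered_in", "Seattle"),
    ("Acme", "founded_in", "1999"),
]

def neighbors(entity):
    # All facts that mention the entity, as subject or object.
    return [t for t in TRIPLES if entity in (t[0], t[2])]

def graph_retrieve(entity, hops=2):
    # Breadth-first expansion: collect facts within `hops` edges of the entity.
    seen, frontier, facts = {entity}, [entity], []
    for _ in range(hops):
        nxt = []
        for e in frontier:
            for s, r, o in neighbors(e):
                if (s, r, o) not in facts:
                    facts.append((s, r, o))
                for n in (s, o):
                    if n not in seen:
                        seen.add(n)
                        nxt.append(n)
        frontier = nxt
    return facts

# Two hops from "Acme" also surfaces where its acquisition is headquartered,
# a connection that requires joining two separate facts.
facts = graph_retrieve("Acme")
```

The retrieved facts would then be serialized into the LLM prompt, exactly as document chunks are in baseline RAG; the difference is that the retrieval step has already done the joining.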
CVPR 2024 Demo: How to use Amazon Aurora as a Knowledge Base for Amazon Bedrock
June 19 - June 20
June 19-20, 2-2:30pm

Generative AI and Foundation Models (FMs) are powerful technologies for building richer, personalized applications. With pgvector on Amazon Aurora PostgreSQL-Compatible Edition, you can access vector database capabilities to store, search, index, and query ML embeddings. Aurora is available as a Knowledge Base for Amazon Bedrock to securely connect your organization’s private data sources to FMs and enable Retrieval Augmented Generation (RAG) workflows on them. With Amazon Aurora Optimized Reads, you can boost vector search performance by up to 9x for memory-intensive workloads. In this demo, learn to integrate Aurora with Bedrock and how to utilize Optimized Reads to improve generative AI application performance.
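Conceptually, the vector search that pgvector's `<->` operator performs is a nearest-neighbor query over stored embeddings by Euclidean distance. The in-memory sketch below illustrates the semantics with toy three-dimensional embeddings; it is an assumption-laden stand-in, not the Aurora or pgvector API:

```python
import math

# Toy "table" of (id, embedding) rows, standing in for a vector(3) column.
ROWS = [
    ("doc-a", [1.0, 0.0, 0.0]),
    ("doc-b", [0.0, 1.0, 0.0]),
    ("doc-c", [0.9, 0.1, 0.0]),
]

def l2(u, v):
    # Euclidean distance, the metric behind pgvector's `<->` operator.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def knn(query, k=2):
    # Equivalent in spirit to:
    #   SELECT id FROM rows ORDER BY embedding <-> query LIMIT k;
    return [rid for rid, emb in sorted(ROWS, key=lambda r: l2(r[1], query))[:k]]

print(knn([1.0, 0.05, 0.0]))
```

In a RAG workflow the `query` vector is the embedding of the user's question, and the returned rows are the passages handed to the foundation model as context.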
CVPR 2024 Demo: Getting started with Amazon ElastiCache Serverless
June 19 - June 20
June 19-20, 3-3:30pm

Serverless databases free you from capacity management while providing you with the economics of pay-per-use pricing. With AWS, customers have a broad choice of serverless databases to choose from, such as Amazon Aurora, Amazon DynamoDB, Amazon Neptune, and most recently Amazon ElastiCache. In this demo, learn how you can begin to instantly scale your own databases with Amazon ElastiCache Serverless and how to utilize the feature with the new open source project Valkey.
CVPR 2024 Demo: AR-ID
June 19
June 19, 3:30-4:00pm

Feedback from employees led us to create Amazon Robotics Identification (AR ID), an AI-powered scanning capability with innovative computer vision and machine learning technology to enable easier scanning of packages in our facilities. Currently, all packages in our facilities are scanned at each destination on their journey. In fulfillment centers, this scanning is manual—an item arrives at a workstation, the package is picked from a bin by an employee, and, using a hand scanner, the employee finds the bar code and hand-scans the item.

AR ID removes the manual scanning process by using a unique camera system that runs at 120 frames per second, giving employees greater mobility and helping reduce the risk of injury. Employees can handle packages freely with both hands, rather than holding a scanner in one hand or maneuvering the package to scan it by hand. This creates a natural movement, and the technology does its job in the background.
Job openings

GB, MLN, Edinburgh
We’re looking for a Machine Learning Scientist experienced in generative AI and large models to join the Personalization team in our Edinburgh office. You will be responsible for developing and disseminating customer-facing personalized recommendation models. This is a hands-on role with global impact, working with a team of world-class engineers and scientists across the Edinburgh offices and wider organization. You will lead the design of machine learning models that scale to very large quantities of data, and serve high-scale, low-latency recommendations to all customers worldwide. You will embody scientific rigor, designing and executing experiments to demonstrate the technical efficacy and business value of your methods. You will work alongside a science team to delight customers by improving recommendation relevancy, and raise the profile of Amazon as a global leader in machine learning and personalization. Successful candidates will have strong technical ability, focus on customers by applying a customer-first approach, excellent teamwork and communication skills, and a motivation to achieve results in a fast-paced environment. Our position offers exceptional opportunities for every candidate to grow their technical and non-technical skills. If you are selected, you have the opportunity to make a difference to our business by designing and building state-of-the-art machine learning systems on big data, leveraging Amazon’s vast computing resources (AWS), working on exciting and challenging projects, and delivering meaningful results to customers worldwide.

Key job responsibilities
Develop machine learning algorithms for high-scale recommendation problems.
Rapidly design, prototype, and test many possible hypotheses in a high-ambiguity environment, making use of both quantitative analysis and business judgement.
Collaborate with software engineers to integrate successful experimental results into large-scale, highly complex Amazon production systems capable of handling 100,000s of transactions per second at low latency. Report results in a manner which is both statistically rigorous and compellingly relevant, exemplifying good scientific practice in a business environment.
IN, TS, Hyderabad
Welcome to the Worldwide Returns & ReCommerce team (WWR&R) at Amazon.com. WWR&R is an agile, innovative organization dedicated to ‘making zero happen’ to benefit our customers, our company, and the environment. Our goal is to achieve the three zeroes: zero cost of returns, zero waste, and zero defects. We do this by developing products and driving truly innovative operational excellence to help customers keep what they buy, recover returned and damaged product value, keep thousands of tons of waste from landfills, and create the best customer returns experience in the world. We have an eye to the future – we create long-term value at Amazon by focusing not just on the bottom line, but on the planet. We are building the most sustainable re-use channel we can by driving multiple aspects of the Circular Economy for Amazon – Returns & ReCommerce. Amazon WWR&R is comprised of business, product, operational, program, software engineering and data teams that manage the life of a returned or damaged product from a customer to the warehouse and on to its next best use. Our work is broad and deep: we train machine learning models to automate routing and find signals to optimize re-use; we invent new channels to give products a second life; we develop highly respected product support to help customers love what they buy; we pilot smarter product evaluations; we work from the customer backward to find ways to make the return experience remarkably delightful and easy; and we do it all while scrutinizing our business with laser focus. You will help create everything from customer-facing and vendor-facing websites to the internal software and tools behind the reverse-logistics process. You can develop scalable, high-availability solutions to solve complex and broad business problems. We are a group that has fun at work while driving incredible customer, business, and environmental impact. 
We are backed by a strong leadership group dedicated to operational excellence that empowers a reasonable work-life balance. As an established, experienced team, we offer the scope and support needed for substantial career growth. Amazon is earth’s most customer-centric company and through WWR&R, the earth is our customer too. Come join us and innovate with the Amazon Worldwide Returns & ReCommerce team!
US, WA, Seattle
Prime Video is a first-stop entertainment destination offering customers a vast collection of premium programming in one app available across thousands of devices. Prime members can customize their viewing experience and find their favorite movies, series, documentaries, and live sports – including Amazon MGM Studios-produced series and movies; licensed fan favorites; and programming from Prime Video add-on subscriptions such as Apple TV+, Max, Crunchyroll and MGM+. All customers, regardless of whether they have a Prime membership or not, can rent or buy titles via the Prime Video Store, and can enjoy even more content for free with ads. Are you interested in shaping the future of entertainment? Prime Video's technology teams are creating best-in-class digital video experiences. As a Prime Video technologist, you’ll have end-to-end ownership of the product, user experience, design, and technology required to deliver state-of-the-art experiences for our customers. You’ll get to work on projects that are fast-paced, challenging, and varied. You’ll also be able to experiment with new possibilities, take risks, and collaborate with remarkable people. We’ll look for you to bring your diverse perspectives, ideas, and skill-sets to make Prime Video even better for our customers. With global opportunities for talented technologists, you can decide where a career in Prime Video Tech takes you!

In Prime Video READI, our mission is to automate infrastructure scaling and operational readiness. We are growing a team specialized in time series modeling, forecasting, and release safety. This team will invent and develop algorithms for forecasting multi-dimensional related time series. The team will develop forecasts on key business dimensions with optimization recommendations related to performance and efficiency opportunities across our global software environment.
As a founding member of the core team, you will apply your deep coding, modeling, and statistical knowledge to concrete problems that have broad cross-organizational, global, and technology impact. Your work will focus on retrieving, cleansing, and preparing large-scale datasets, and on training and evaluating models and deploying them to production, where we continuously monitor and evaluate them. You will work on large engineering efforts that solve significantly complex problems facing global customers. You will be trusted to operate with complete independence and are often assigned to focus on areas where the business and/or architectural strategy has not yet been defined. You must be equally comfortable digging into business requirements as you are drilling into design with development teams and developing production-ready learning models. You consistently bring strong, data-driven business and technical judgment to decisions. You will work with internal and external stakeholders, cross-functional partners, and end-users around the world at all levels. Our team makes a big impact because nothing is more important to us than delivering for our customers, continually earning their trust, and thinking long term. You are empowered to bring new technologies to your solutions. If you crave a sense of ownership, this is the place to be.
US, WA, Seattle
Amazon Advertising operates at the intersection of eCommerce and advertising, and is investing heavily in building a world-class advertising business. We are defining and delivering a collection of self-service performance advertising products that drive discovery and sales. Our products are strategically important to our Retail and Marketplace businesses, driving long-term growth. We deliver billions of ad impressions and millions of clicks daily and are breaking fresh ground to create world-class products that improve both the shopper and advertiser experience. With a broad mandate to experiment and innovate, we grow at an unprecedented rate with a seemingly endless range of new opportunities. The Ad Response Prediction team in the Sponsored Products organization builds advanced deep-learning models, large-scale machine-learning pipelines, and real-time serving infrastructure to match shoppers’ intent to relevant ads on all devices, for all contexts, and in all marketplaces. Through precise estimation of shoppers’ interaction with ads and their long-term value, we aim to drive optimal ads allocation and pricing, and help deliver a relevant, engaging, and delightful ads experience to Amazon shoppers. As the business and the complexity of the new initiatives we take on continue to grow, we are looking for talented Applied Scientists to join the team.
Key job responsibilities
As an Applied Scientist II, you will:
* Conduct hands-on data analysis, build large-scale machine-learning models and pipelines
* Work closely with software engineers on detailed requirements, technical designs and implementation of end-to-end solutions in production
* Run regular A/B experiments, gather data, perform statistical analysis, and communicate the impact to senior management
* Establish scalable, efficient, automated processes for large-scale data analysis, machine-learning model development, model validation and serving
* Provide technical leadership, research new machine learning approaches to drive continued scientific innovation
* Be a member of the Amazon-wide Machine Learning Community, participating in internal and external MeetUps, Hackathons and Conferences
IL, Tel Aviv
Come join the AWS Agentic AI science team in building the next generation of models for intelligent automation. AWS, the world-leading provider of cloud services, has fostered the creation and growth of countless new businesses, and is a positive force for good. Our customers bring problems that will give Applied Scientists like you endless opportunities to see your research have a positive and immediate impact in the world. You will have the opportunity to partner with technology and business teams to solve real-world problems, with access to virtually endless data and computational resources, and to world-class engineers and developers who can help bring your ideas into the world. As part of the team, we expect that you will develop innovative solutions to hard problems and publish your findings at peer-reviewed conferences and workshops. We are looking for world-class researchers with experience in one or more of the following areas: autonomous agents, API orchestration, planning, large multimodal models (especially vision-language models), reinforcement learning (RL), and sequential decision making.

Key job responsibilities
PhD, or Master's degree and 4+ years of CS, CE, ML or related field experience
3+ years of building models for business application experience
Experience in patents or publications at top-tier peer-reviewed conferences or journals
Experience programming in Java, C++, Python or related language
Experience in any of the following areas: algorithms and data structures, parsing, numerical optimization, data mining, parallel and distributed computing, high-performance computing
US, WA, Seattle
Prime Video is a first-stop entertainment destination offering customers a vast collection of premium programming in one app available across thousands of devices. Prime members can customize their viewing experience and find their favorite movies, series, documentaries, and live sports – including Amazon MGM Studios-produced series and movies; licensed fan favorites; and programming from Prime Video add-on subscriptions such as Apple TV+, Max, Crunchyroll and MGM+. All customers, regardless of whether they have a Prime membership or not, can rent or buy titles via the Prime Video Store, and can enjoy even more content for free with ads. Are you interested in shaping the future of entertainment? Prime Video's technology teams are creating best-in-class digital video experiences. As a Prime Video team member, you’ll have end-to-end ownership of the product, user experience, design, and technology required to deliver state-of-the-art experiences for our customers. You’ll get to work on projects that are fast-paced, challenging, and varied. You’ll also be able to experiment with new possibilities, take risks, and collaborate with remarkable people. We’ll look for you to bring your diverse perspectives, ideas, and skill-sets to make Prime Video even better for our customers. With global opportunities for talented technologists, you can decide where a career in Prime Video Tech takes you!

Key job responsibilities
As an Applied Scientist in the Content Understanding Team, you will lead the end-to-end research and deployment of video and multi-modal models applied to a variety of downstream applications.
More specifically, you will:
- Work backwards from customer problems to research and design scientific approaches for solving them
- Work closely with other scientists, engineers and product managers to expand the depth of our product insights with data, and create a variety of experiments to determine the high-impact projects to include in planning roadmaps
- Stay up-to-date with advancements and the latest modeling techniques in the field
- Publish your research findings in top conferences and journals

About the team
Our Prime Video Content Understanding team builds holistic media representations (e.g. descriptions of scenes, semantic embeddings) and applies them to new customer experiences and supply chain problems. Our technology spans the entire Prime Video catalogue globally, and we enable instant recaps, skip-intro timing, ad placement, search, and content moderation.
IN, HR, Gurugram
We're on a journey to build something new: a green-field project! Come join our team and build new discovery and shopping products that connect customers with their vehicle of choice. We're looking for a talented Senior Applied Scientist to join our team of product managers, designers, and engineers to design and build innovative automotive-shopping experiences for our customers. This is a great opportunity for an experienced scientist to design and implement the technology for a new Amazon business. We are seeking a passionate, hands-on, experienced, and seasoned Senior Applied Scientist to design, implement, and deliver end-to-end solutions; someone who will be deep in code and algorithms, and who is technically strong in building scalable computer vision machine learning systems across item understanding, pose estimation, class-imbalanced classifiers, identification, and segmentation. You will drive ideas to products using paradigms such as deep learning, semi-supervised learning, and dynamic learning. As a Senior Applied Scientist, you will also help lead and mentor our team of applied scientists and engineers. You will take on complex customer problems, distill customer requirements, and then deliver solutions that either leverage existing academic and industrial research or utilize your own out-of-the-box but pragmatic thinking. In addition to coming up with novel solutions and prototypes, you will directly contribute to implementation while you lead. A successful candidate has excellent technical depth, scientific vision, project management skills, great communication skills, and a drive to achieve results in a unified team environment. You should enjoy the process of solving real-world problems that, quite frankly, haven’t been solved at scale anywhere before.
Along the way, we guarantee you’ll get opportunities to be a bold disruptor, prolific innovator, and a reputed problem solver—someone who truly enables AI and robotics to significantly impact the lives of millions of consumers.

Key job responsibilities
Architect, design, and implement machine learning models for vision systems on robotic platforms.
Optimize, deploy, and support ML models on the edge at scale.
Influence the team's strategy and contribute to long-term vision and roadmap.
Work with stakeholders across science and operations teams to iterate on design and implementation.
Maintain high standards by participating in reviews, designing for fault tolerance and operational excellence, and creating mechanisms for continuous improvement.
Prototype and test concepts or features, both through simulation and emulators and with live robotic equipment.
Work directly with customers and partners to test prototypes and incorporate feedback.
Mentor other engineers on the team.

A day in the life
- 6+ years of building machine learning models for retail application experience
- PhD, or Master's degree and 6+ years of applied research experience
- Experience programming in Java, C++, Python or related language
- Experience with neural deep learning methods and machine learning
- Demonstrated expertise in computer vision and machine learning techniques.
US, MA, Boston
The Artificial General Intelligence (AGI) team is looking for a passionate, talented, and inventive Applied Scientist with a strong deep learning background to build industry-leading technology with Large Language Models (LLMs) and multi-modal systems. You will support projects on technologies including multi-modal model alignment, moderation systems, and evaluation.

Key job responsibilities
As an Applied Scientist on the AGI team, you will support the development of novel algorithms and modeling techniques to advance the state of the art with LLMs. Your work will directly impact our customers in the form of products and services that make use of speech and language technology. You will leverage Amazon's heterogeneous data sources and large-scale computing resources to accelerate advances in generative artificial intelligence (GenAI). You are also expected to publish in top-tier conferences.

About the team
The AGI team's mission is to push the envelope in LLMs and multimodal systems. Specifically, we focus on model alignment, aiming to maintain safety without compromising utility, in order to provide the best possible experience for our customers.
IN, HR, Gurugram
Our customers have immense faith in our ability to deliver packages on time and as expected. A well-planned network scales seamlessly to handle millions of package movements a day. It has monitoring mechanisms that detect failures before they happen (for example, by predicting network congestion or operational breakdowns) and performs proactive corrective actions. When failures do occur, it has built-in redundancies to mitigate their impact (such as identifying other routes or service providers that can handle the extra load) and avoids relying on single points of failure (a service provider, node, or arc). Finally, it is cost-optimal, so that the benefits of an efficiently set-up network can be passed on to customers. Amazon Shipping is hiring Applied Scientists to help improve our ability to plan and execute package movements. As an Applied Scientist in Amazon Shipping, you will work on multiple challenging machine learning problems spread across a wide spectrum of business problems. You will build ML models to help our transportation cost-auditing platforms effectively audit off-manifest charges (discrepancies between planned and actual shipping costs). You will build models to improve the quality of financial and planning data by accurately predicting ship cost at the package level. Your models will help forecast the packages that must be picked from shipper warehouses, reducing First Mile shipping cost. Using signals from within the transportation network (such as network load and the velocity of movements derived from package scan events) and from outside it (such as weather signals), you will build models that predict delivery delay for every package. These models will help improve the buyer experience by triggering early corrective actions and generating proactive customer notifications. Your role will require you to demonstrate Think Big and Invent and Simplify by refining and translating Transportation-domain business problems into one or more machine learning problems.
You will use techniques from a wide array of machine learning paradigms, such as supervised, unsupervised, semi-supervised, and reinforcement learning. Your model choices will include, but not be limited to, linear/logistic models, tree-based models, deep learning models, ensemble models, and Q-learning models. You will use techniques such as LIME and SHAP to make your models interpretable for your customers. You will employ a family of reusable modeling solutions to ensure that your ML solution scales across multiple regions (such as North America, Europe, and Asia) and package movement types (such as small-parcel movements and truck movements). You will partner with Applied Scientists and Research Scientists from other teams in the US and India working on related business domains. Your models are expected to be of production quality and will be used directly in production services. You will work as part of a diverse data science and engineering team comprising Applied Scientists, Software Development Engineers, and Business Intelligence Engineers. You will participate in the Amazon ML community by authoring scientific papers and submitting them to machine learning conferences. You will mentor Applied Scientists and Software Development Engineers with a strong interest in ML. You will also be called upon to provide ML consultation for problem statements outside your team. If you are excited by this charter, come join us!
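The posting mentions LIME and SHAP for making models interpretable. As a loose illustration of the shared idea behind such tools (attributing a model's predictive skill to individual input features), here is a minimal permutation-importance sketch in pure Python. This is not the SHAP or LIME algorithm itself, and the model, data, and function names are all hypothetical stand-ins.

```python
import random

# Toy "model": a hand-coded linear predictor standing in for a trained
# ML model (feature 0 matters a lot, feature 1 barely at all).
def predict(row):
    return 3.0 * row[0] + 0.1 * row[1]

def mse(model, X, y):
    # Mean squared error of the model over a dataset.
    return sum((model(x) - t) ** 2 for x, t in zip(X, y)) / len(X)

def permutation_importance(model, X, y, n_features, seed=0):
    """Importance of each feature = increase in error after randomly
    shuffling that feature's values across rows (breaking its link
    to the target while keeping its marginal distribution)."""
    rng = random.Random(seed)
    baseline = mse(model, X, y)
    importances = []
    for j in range(n_features):
        shuffled_col = [x[j] for x in X]
        rng.shuffle(shuffled_col)
        X_perm = [list(x) for x in X]
        for i, v in enumerate(shuffled_col):
            X_perm[i][j] = v
        importances.append(mse(model, X_perm, y) - baseline)
    return importances

# Synthetic data generated by the same rule the model encodes.
rng = random.Random(42)
X = [[rng.uniform(-1, 1), rng.uniform(-1, 1)] for _ in range(200)]
y = [3.0 * a + 0.1 * b for a, b in X]

imp = permutation_importance(predict, X, y, n_features=2)
# Shuffling feature 0 should degrade accuracy far more than feature 1.
```

SHAP and LIME refine this basic attribution idea with game-theoretic weighting and local surrogate models, respectively; the sketch above only shows the simplest global variant.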