[Figure: The HVAC system on the rooftop of a skyscraper.]

Data-driven fault identification is key to more sustainable facilities management

How data-driven analysis can support fault detection and drive energy efficiency for facilities of all sizes.

In a previous article on sustainable buildings, we talked about the “sense, act, and scale” approach to driving efficiencies in buildings, drawing on scientific publications. In this article, we will explore how data-driven analysis can support fault detection and drive energy efficiency in facilities management by providing details on:

  • Key challenges for building management and operations;
  • Building system design fundamentals;
  • Key data points to investigate faults for facilities-level sustainability; and
  • Data-driven fault identification on AWS.

Global temperatures are on the rise, greenhouse gas (GHG) emissions are the primary contributor, and facilities are among the top sources of GHG emissions. As stipulated in the Paris Agreement, facilities need to be 30% more energy efficient and net carbon neutral by 2050. Many companies have set new targets to reduce their emissions in recent years. For example, Amazon has set out the mission to be net-zero carbon by 2040 and, in its recent sustainability report, has touched on how the company is using innovative design to build sustainability into physical Amazon campuses.

This article provides information on how companies of all sizes can operate and maintain their existing buildings more efficiently by identifying and fixing faults through data-driven mechanisms. In this vein, Amazon is sponsoring an AI challenge at NeurIPS this year focused on building energy management in a smart grid; the competition involves reinforcement learning, with the objective of minimizing both cost and CO2 emissions. Bottom line: energy optimization of facilities must be a key component of your organization’s plan to operate more sustainably.


Facility energy optimization provides an organization’s facilities team low-hanging-fruit opportunities for reducing costs and carbon. However, building systems come with many inherent complexities that must be addressed.

Some of the key facilities-management challenges are:

  • A building’s lifespan is 50+ years, and a facility’s system sensors are typically installed on day one. Many new cloud-native sensor options come to market every year, but building management systems (BMS) aren’t open, making it difficult to modernize data architectures for building infrastructure;
  • Across any large real estate portfolio there is a wide range of technology, standards, building types, and designs that are difficult to manage over their lifecycles; 
  • Building management and automation systems require a third party to own and modify production data, and licensing fees aren’t based on consumption pricing; and 
  • Facilities teams generally lack the cloud expertise required to design a bespoke management solution, and their IT teams often don’t have product-level experience to provide as an alternative for addressing building-management needs.

Facilities management and sustainability

Facilities management teams have limited options to modify most core BMS functions.

These systems are sometimes referred to as black boxes, in that they don’t offer the do-it-yourself features most cloud users have come to expect. There can be contractual challenges as well for building tenants who don’t have access to BMS information. This is by design, primarily due to a clear operational argument that safety and security control functions should be limited to key personnel. However, this lack of access to building-performance analytics, which is required for enterprise-level sustainability transformations, is increasingly considered a blocker by many of our sustainability customers.

Let’s begin our analysis by looking at a building’s biggest consumer of electricity and producer of emissions: the HVAC system.

HVAC units are central to a building and account for roughly 50% of its energy consumption. As a result, they are well instrumented and generally follow a rules-based approach. The downside: this approach can produce many false alarms, so building managers rely on manual inspection and occupant reports to surface the important faults that require attention. Building managers and engineers devote significant time and budget to HVAC systems, yet HVAC system faults can still account for 5% to 20% of energy waste.

The HVAC unit we are all most familiar with is the air conditioner. In a BMS, the HVAC system comprises sub-components that provide heating or cooling, ventilation (air-handling units, fans), air conditioning (rooftop units, variable-refrigerant systems), and more.

[Figure: HVAC units.]

A building’s data model, and the larger building-management schema, are established when the building first opens. Alerts, alarms, and performance data are issued through the BMS, and a manager will notify a building services team to take action as needed. However, as the building and its infrastructure age, many alarms become endemic and difficult to remedy. “Alarm fatigue” is a term often used to describe the resulting BMS operator experience.

Variable air volume (VAV) units are another important asset; they help maintain temperatures by managing local air flow. VAV units optimize the temperature by modifying air flow, as opposed to constant air volume (CAV) units, which provide a constant volume of air and only vary its temperature.

There are often hundreds of VAV units in a larger building and managing them is burdensome. Building engineers have limited time to configure each of them as building demands change and VAV unit configurations are typically left unchanged after the commissioning of the building. The result: many unseen or mysterious building faults, and the hidden loss of energy over the years.


Many modern buildings are designed to accommodate whatever the building planners know at the time of commissioning. As a result, HVAC system configuration isn’t a data-driven process because operational data doesn’t yet exist. The only real incentives for HVAC system optimization typically result from failures and occupant complaints. To meet future sustainability targets, buildings must be equipped with data-driven smart configurations that can be adjusted automatically.

To achieve this, we must understand the fundamentals of air flow, and we need to combine the expertise of building engineers, IoT engineers, and data engineers to resolve some of the complex air-flow challenges. This also requires an understanding of how facilities are generally managed today, which we’ll examine next.

Anatomy of facilities management

The image below shows how an air-handling unit (AHU) uses fans to distribute air through ducting. These ducts connect to VAV terminal units, which control the flow of air to specific rooms.

[Figure: Typical air distribution topology.]
BMS software provides tools to help operators define logical “zones” that virtually represent a given physical space. This zone approach is useful in helping operators analyze the effectiveness of a given cooling design relative to the operational requirements.

To change the temperature of a given zone (often representing a physical room), a sensor will send a notification through a building gateway and controller. This device serves as an intermediary between the BMS server and a given HVAC unit.

There is some automation built into these HVAC systems in the form of thermostats: a given cooling unit responds to the temperature reading calculated by the thermostat, relative to configured setpoints. These setpoints define a temperature range that, when followed, provides the best performance of the system.

Setpoint typically refers to the point at which a building system is set to activate or deactivate; e.g., a heating system might be set to switch on if the internal temperature falls below 20°C.
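The setpoint behavior described above amounts to a simple deadband controller. The sketch below is a minimal illustration, not any vendor’s BMS logic; the setpoint values and mode names are assumptions:

```python
def hvac_mode(room_temp_c, heating_setpoint_c=20.0, cooling_setpoint_c=24.0):
    """Return the action of a simple deadband controller.

    Between the two setpoints the system idles, which is where a
    well-configured zone should spend most of its time.
    """
    if room_temp_c < heating_setpoint_c:
        return "heat"
    if room_temp_c > cooling_setpoint_c:
        return "cool"
    return "idle"

# A heating system set to switch on below 20°C:
print(hvac_mode(19.5))  # -> heat
print(hvac_mode(22.0))  # -> idle
```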

[Figure: A VAV terminal.]
A controller in the VAV unit is attached to the room thermostat. The thermostat tells the VAV terminal whether the zone temperature is too hot, too cold, or just right. The VAV unit has several key components inside: a controller, an actuator, a damper, a shaft, and a reheat coil.

AHU and VAV unit control points are managed by BMS software. This software is vendor managed and the configuration of the control system is determined at building inception. The configurations can be established based on several factors: room capacity and occupancy, room location, room cooling requirement, zone requirement, and more.

To illustrate a data model that reflects the operation of the HVAC system, let’s look at the VAV units that help distribute the air, and at the fault-driven alerts apparent in most aging systems. These configurations are difficult to personalize because they are not data driven and do not update automatically. Let’s use the flow of air through a given building as a use case and assume its operation has a sizable impact on the building’s overall energy usage.

[Figure: Damper positions side by side.]
On the left, the damper is fully open because it is a summer day, it is hot outside, and the room is full of people. But on the right, the damper is partially open because it is a winter day and there are no people in the room, requiring minimum heat load.

There will often be multiple zone-specific faults: temperature or flow failures; issues with dampers or fans; software configuration errors that can lead to short-cycling of the unit(s); and communication or controller problems that make it difficult to even identify the problem remotely. These factors all result in a low-efficiency cooling system that wastes energy and money while increasing emissions.

What faults can tell you about sustainable building performance

Faults can be neglected for long periods of time, leaking invisible energy in the process.

Researchers from UC San Diego conducted a detailed data analysis (Bharathan was a co-author) of a 145,000-square-foot building. They identified 88 faults after building engineers had fixed all the issues they could find. The paper estimates that fixing these faults could save 410.3 megawatt-hours per year; at a typical electricity cost of 12 cents per kilowatt-hour, that is a savings of $49,236 in the first year.

According to the U.S. Environmental Protection Agency’s Greenhouse Gas Equivalencies Calculator, that’s the equivalent of 38,244 passenger car trips abated. Cisco offers another example. The company achieved a 28% reduction in electrical usage in their buildings worldwide by using an IP-Enabled Energy Management solution.

Traditional fault fixing focuses on centralized HVAC subsystems such as AHUs. Here we focus on the VAV units, which are often ignored. Some of the key issues in VAV units are air supply flow, temperature setpoints, thermostat adjustments, and inappropriate cooling or stuck dampers.


To identify these faults, you can perform data analysis with key data attributes including temperature, heating, and cooling setpoints; upper- and lower-limit changes based on day of week; re-heat coil (on or off); occupancy sensor and settings (occupied, standby, or unoccupied); damper sensor and damper settings; and pressure flow.

Using these parameters, we can define informative models. For example, we can create setpoints informed by seasonal weather data in addition to room thermostats, or analyze temperature data against known occupancy times.

Data analysis isn’t easy at first; the data is generally not in a state where it can be readily loaded into a graph store. Oftentimes a lot of data transformation and IoT work is required to get the data to a place where it can be analyzed by data scientists. To solve this challenge, you will need data experts, facilities-management domain experts, cloud engineers, and someone who can bring them together to drive the right focus.

To begin, the best approach is to set up a meeting between your facilities and IT teams to start examining your building data. Some teams may grant you read-only access to the system; otherwise, you can perform your analysis on a CSV export of the last two to three years of building data.
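As a sketch of what a first pass over such an export might look like, the code below loads a hypothetical CSV with pandas and screens for zones pushing high airflow while unoccupied. The column names and the 400 CFM threshold are assumptions; real exports vary by vendor:

```python
import io

import pandas as pd

# Hypothetical BMS export; real column names vary by vendor and BMS.
raw = io.StringIO(
    "timestamp,zone,room_temp_f,supply_flow_cfm,occupied\n"
    "2023-07-01 02:00,VAV-101,71.2,800,0\n"
    "2023-07-01 10:00,VAV-101,74.8,420,1\n"
    "2023-07-01 02:00,VAV-102,73.9,60,0\n"
    "2023-07-01 10:00,VAV-102,74.5,350,1\n"
)
df = pd.read_csv(raw, parse_dates=["timestamp"])

# Cheap first-pass screen: high supply flow in an unoccupied zone.
suspect = df[(df["occupied"] == 0) & (df["supply_flow_cfm"] > 400)]
print(suspect[["timestamp", "zone", "supply_flow_cfm"]])
```

In practice the same screen would run over years of data and hundreds of zones, which is why a systematic, clustering-based analysis pays off.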

For data-driven fault identification within your facilities data, you can get started with the Model, Cluster, and Compare (MCC) approach. The primary objective of MCC is to determine clusters of zones within a building, and then use these clusters to automatically identify misconfigured, anomalous, or faulty zone-controller configurations.

MCC approach to data-driven analysis

We will use a university building as an example to explain the benefits of the MCC approach. The building comprised private offices, shared offices, kitchens, and restrooms.

In a typical room, the HVAC system provides cold air during the summer. The supplied air flow is modulated to maintain the required temperature during the daytime and falls back to a minimum during the night.

In the graph below, we show a room where the opposite happens because of a misconfiguration fault.

[Figure: Supply flow (example 1).]
The VAV unit cools the room at night, but uses a minimal air flow during the day. The cooling temperature setpoint is 80°F from midnight until 10 a.m., and then drops to 75°F as expected. However, there is a continuous cold air supply flow of 800 cubic feet per minute (CFM) throughout the night until 11:30 a.m.

The building management contractor surmised that these errors were caused by a misunderstanding at the time of initial building commissioning. This fault was hidden within the system for years and was identified only during an MCC analysis.

Model

Trying to identify faults with raw sensor data often leads to misleading results. For example, a simple fault-detection rule may generate an alarm if the temperature of a room goes beyond a threshold. The alarm may be false for any number of reasons: it could be a particularly hot day, or an event may be occurring in the room. We need to look for faults that are consistent and require human attention; given the large number of alarms that simple rules trigger, such faults get overlooked.
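A minimal sketch of such a threshold rule shows why it is noisy; the 78°F threshold and the readings are invented for illustration:

```python
# Naive rule: alarm whenever a reading crosses a fixed threshold.
def naive_alarm(temps_f, threshold_f=78.0):
    return [t > threshold_f for t in temps_f]

# One hot afternoon raises alarms even though nothing is broken.
readings = [74.5, 75.0, 78.4, 79.1, 75.2]
print(sum(naive_alarm(readings)))  # -> 2 alarms, both likely false
```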

Our MCC algorithm looks for rooms that behave differently from others over a long time-span. To compare different rooms, we create a model that captures the generic patterns of usage over months or years. Then we can compare and cluster rooms to weed out the faults.

In our algorithm, we use the measured room temperature and air flow from the HVAC to create a room energy model. The energy spent by the HVAC system on a room is proportional to the product of its temperature and airflow supplied as per the laws of thermodynamics. We use the product of two sensor measurements as the parameter to model the room because it indicates the generic patterns of use. If we find rooms whose energy patterns are substantially different, we can inspect them further.
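The per-room model can be sketched as follows, with synthetic data standing in for real sensor streams; the sampling scheme and values are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
hours = 24 * 30  # one month of hourly samples for one room

# Synthetic sensor streams: steady room temperature, higher airflow by day.
room_temp_f = 74 + 2 * rng.standard_normal(hours)
supply_flow_cfm = np.where(np.arange(hours) % 24 < 8, 100.0, 500.0)

# Per the article, the energy spent on a room is taken to be proportional
# to the product of room temperature and supplied airflow.
energy_proxy = room_temp_f * supply_flow_cfm

# Averaging into a 24-hour profile captures the room's generic pattern
# of use; this vector is the room's "model" for the clustering step.
daily_profile = energy_proxy.reshape(-1, 24).mean(axis=0)
print(daily_profile.shape)  # -> (24,)
```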

Cluster

Room temperatures can fluctuate for natural reasons, and our fault-detection algorithm should not flag them.

The MCC algorithm clusters similar rooms using the k-means algorithm. The clusters naturally align rooms that are alike, for example, west-facing rooms, east-facing rooms, kitchenettes, and conference rooms. We can create these clusters manually, based on domain knowledge and usage type, or the clustering algorithm can automate the process.
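Here is a minimal sketch of this step with scikit-learn’s KMeans, on synthetic daily energy profiles; the room types and load values are invented for illustration:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# Synthetic 24-hour energy profiles for 40 rooms of two usage types.
offices = 7000 + 300 * rng.standard_normal((20, 24))
offices[:, 8:18] += 25000    # steady daytime load
kitchens = 7000 + 300 * rng.standard_normal((20, 24))
kitchens[:, 11:14] += 40000  # lunchtime spike
profiles = np.vstack([offices, kitchens])

# k-means groups rooms with similar patterns of use, without any labels.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(profiles)
print(km.labels_)
```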

Compare

Having defined configurations per cluster, the MCC algorithm then compares rooms to identify anomalies. This step ensures that natural fluctuations are ignored, and only the egregious rooms are highlighted, reducing the number of false alarms.
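The compare step can be sketched as distance-to-centroid screening; the profiles and the three-sigma threshold are assumptions, not the study’s exact criterion:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# 30 rooms sharing a normal daily profile, plus one faulty room whose
# profile is inverted (cooled at night, nearly idle by day).
day_pattern = np.r_[np.full(8, 7000.0), np.full(16, 30000.0)]
normal = np.tile(day_pattern, (30, 1)) + 500 * rng.standard_normal((30, 24))
faulty = day_pattern[::-1][None, :]
profiles = np.vstack([normal, faulty])

km = KMeans(n_clusters=1, n_init=10, random_state=0).fit(profiles)
dist = np.linalg.norm(profiles - km.cluster_centers_[km.labels_], axis=1)

# Flag rooms far from their cluster centroid; natural fluctuations
# stay well inside the threshold, so they are not reported.
threshold = dist.mean() + 3 * dist.std()
print(np.flatnonzero(dist > threshold))  # -> [30], the faulty room
```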

Intelligent rules

The MCC study created rules to detect new faults after analyzing the anomalies manually. Rules are a natural way to integrate with an existing system, and to catch similar faults that occur in the future. Rules are also interpretable by domain experts, enabling further tuning.

An interesting example of an identified fault is shown below:

[Figure: Supply flow (example 2).]
The HVAC system strives to maintain the room temperature between the cooling setpoint (78°F in this room) and the heating setpoint (74°F). If the temperature goes beyond these setpoints, it will cool or heat the room as required. Here, the room is excessively cooled with high air flow (800 CFM), causing the room temperature to fall below the heating setpoint, which then triggers heating. As a result of this fault, the room uses excessive energy to maintain comfort.

There were five rooms with similar issues on the same floor and 15 overall within the building. The cause of the fault: the designed air flow specifications were based on maximum occupancy. Issues such as these cause enormous energy waste, and they often go unnoticed for years.
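A rule catching this class of fault, simultaneous high cooling airflow and reheat, might look like the sketch below. The field names, thresholds, and persistence window are assumptions, not the study’s actual rule:

```python
def simultaneous_heat_cool_rule(samples, min_hours=4):
    """Flag a zone where high cooling airflow pushes the room below its
    heating setpoint while the reheat coil is on (thresholds assumed).

    `samples` is a list of dicts of hourly readings for one zone.
    """
    hits = [
        s for s in samples
        if s["supply_flow_cfm"] > 600
        and s["room_temp_f"] < s["heating_setpoint_f"]
        and s["reheat_on"]
    ]
    return len(hits) >= min_hours  # persistent, not a one-off blip

# Hypothetical zone exhibiting the fault described above:
zone = [
    {"supply_flow_cfm": 800, "room_temp_f": 73.0,
     "heating_setpoint_f": 74.0, "reheat_on": True},
] * 5
print(simultaneous_heat_cool_rule(zone))  # -> True
```

Because rules like this are interpretable, domain experts can tune the thresholds as they review flagged zones.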

A path forward 

In this post we’ve provided some foundational concepts for using data to improve both facility performance and availability.

Whether your goal is to improve building performance in support of sustainability transformation or to improve fault detection, the path starts with modernizing the data models that support your facilities. Following a data modernization path will illustrate where the building architecture that provides the data is not meeting expectations.

As a next step, facilities and IT managers can get started by:

  • Performing a basic audit of their buildings and looking for options to gather the key parameter data outlined above. 
  • Consolidating data from the relevant sources, applying data standardization, and making use of the fault-detection approach outlined above. 
  • Making use of AWS Data Analytics and AWS AI/ML services to perform data analysis and apply machine learning algorithms to identify data anomalies. Amazon uses these services to manage the thousands of world-class facilities that serve our employees, customers, and communities. Learn more about our sustainable building initiatives.

These steps will help identify energy hot spots and hidden faults in your facilities; facilities managers can then make use of this information to fix the relevant faults and drive facility sustainability. Finally, consider making sustainability data easily accessible to executive teams to help drive discussions and decisions on impactful carbon-abatement initiatives.

Research areas

Related content

US, MA, Westborough
Amazon is looking for talented Postdoctoral Scientists to join our Fulfillment Technology and Robotics team for a one-year, full-time research position. The Innovation Lab in BOS27 is a physical space in which new ideas can be explored, hands-on. The Lab provides easier access to tools and equipment our inventors need while also incubating critical technologies necessary for future robotic products. The Lab is intended to not only develop new technologies that can be used in future Fulfillment, Technology, and Robotics products but additionally promote deeper technical collaboration with universities from around the world. The Lab’s research efforts are focused on highly autonomous systems inclusive of robotic manipulation of packages and ASINs, multi-robot systems utilizing vertical space, Amazon integrated gantries, advancements in perception, and collaborative robotics. These five areas of research represent an impactful set of technical capabilities that when realized at a world class level will unlock our desire for a highly automated and adaptable fulfillment supply chain. As a Postdoctoral Scientist you will be developing a coordinated multi-agent system to achieve optimized trajectories under realistic constraints. The project will explore the utility of state-of-the-art methods to solve multi-agent, multi-objective optimization problems with stochastic time and location constraints. The project is motivated by a new technology being developed in the Innovation Lab to introduce efficiencies in the last-mile delivery systems. Key job responsibilities In this role you will: * Work closely with a senior science advisor, collaborate with other scientists and engineers, and be part of Amazon’s diverse global science community. * Publish your innovation in top-tier academic venues and hone your presentation skills. * Be inspired by challenges and opportunities to invent new techniques in your area(s) of expertise.
IN, TS, Hyderabad
Welcome to the Worldwide Returns & ReCommerce team (WWR&R) at Amazon.com. WWR&R is an agile, innovative organization dedicated to ‘making zero happen’ to benefit our customers, our company, and the environment. Our goal is to achieve the three zeroes: zero cost of returns, zero waste, and zero defects. We do this by developing products and driving truly innovative operational excellence to help customers keep what they buy, recover returned and damaged product value, keep thousands of tons of waste from landfills, and create the best customer returns experience in the world. We have an eye to the future – we create long-term value at Amazon by focusing not just on the bottom line, but on the planet. We are building the most sustainable re-use channel we can by driving multiple aspects of the Circular Economy for Amazon – Returns & ReCommerce. Amazon WWR&R is comprised of business, product, operational, program, software engineering and data teams that manage the life of a returned or damaged product from a customer to the warehouse and on to its next best use. Our work is broad and deep: we train machine learning models to automate routing and find signals to optimize re-use; we invent new channels to give products a second life; we develop highly respected product support to help customers love what they buy; we pilot smarter product evaluations; we work from the customer backward to find ways to make the return experience remarkably delightful and easy; and we do it all while scrutinizing our business with laser focus. You will help create everything from customer-facing and vendor-facing websites to the internal software and tools behind the reverse-logistics process. You can develop scalable, high-availability solutions to solve complex and broad business problems. We are a group that has fun at work while driving incredible customer, business, and environmental impact. 
We are backed by a strong leadership group dedicated to operational excellence that empowers a reasonable work-life balance. As an established, experienced team, we offer the scope and support needed for substantial career growth. Amazon is earth’s most customer-centric company and through WWR&R, the earth is our customer too. Come join us and innovate with the Amazon Worldwide Returns & ReCommerce team!
GB, MLN, Edinburgh
We’re looking for a Machine Learning Scientist in the Personalization team for our Edinburgh office experienced in generative AI and large models. You will be responsible for developing and disseminating customer-facing personalized recommendation models. This is a hands-on role with global impact working with a team of world-class engineers and scientists across the Edinburgh offices and wider organization. You will lead the design of machine learning models that scale to very large quantities of data, and serve high-scale low-latency recommendations to all customers worldwide. You will embody scientific rigor, designing and executing experiments to demonstrate the technical efficacy and business value of your methods. You will work alongside a science team to delight customers by aiding in recommendations relevancy, and raise the profile of Amazon as a global leader in machine learning and personalization. Successful candidates will have strong technical ability, focus on customers by applying a customer-first approach, excellent teamwork and communication skills, and a motivation to achieve results in a fast-paced environment. Our position offers exceptional opportunities for every candidate to grow their technical and non-technical skills. If you are selected, you have the opportunity to make a difference to our business by designing and building state of the art machine learning systems on big data, leveraging Amazon’s vast computing resources (AWS), working on exciting and challenging projects, and delivering meaningful results to customers world-wide. Key job responsibilities Develop machine learning algorithms for high-scale recommendations problems. Rapidly design, prototype and test many possible hypotheses in a high-ambiguity environment, making use of both quantitative analysis and business judgement. 
Collaborate with software engineers to integrate successful experimental results into large-scale, highly complex Amazon production systems capable of handling 100,000s of transactions per second at low latency. Report results in a manner which is both statistically rigorous and compellingly relevant, exemplifying good scientific practice in a business environment.
US, CA, Palo Alto
Amazon’s Advertising Technology team builds the technology infrastructure and ad serving systems to manage billions of advertising queries every day. The result is better quality advertising for publishers and more relevant ads for customers. In this organization you’ll experience the benefits of working in a dynamic, entrepreneurial environment, while leveraging the resources of Amazon.com (AMZN), one of the world's leading companies. Amazon Publisher Services (APS) helps publishers of all sizes and on all channels better monetize their content through effective advertising. APS unites publishers with advertisers across devices and media channels. We work with Amazon teams across the globe to solve complex problems for our customers. The end results are Amazon products that let publishers focus on what they do best - publishing. The APS Publisher Products Engineering team is responsible for building cloud-based advertising technology services that help Web, Mobile, Streaming TV broadcasters and Audio publishers grow their business. The engineering team focuses on unlocking our ad tech on the most impactful Desktop, mobile and Connected TV devices in the home, bringing real-time capabilities to this medium for the first time. As a successful Data Scientist in our team, · You are an analytical problem solver who enjoys diving into data, is excited about investigations and algorithms, and can credibly interface between technical teams and business stakeholders. You will collaborate directly with product managers, BIEs and our data infra team. · You will analyze large amounts of business data, automate and scale the analysis, and develop metrics (e.g., user recognition, ROAS, Share of Wallet) that will enable us to continually measure the impact of our initiatives and refine the product strategy. 
· Your analytical abilities, business understanding, and technical aptitude will be used to identify specific and actionable opportunities to solve existing business problems and look around corners for future opportunities. Your expertise in synthesizing and communicating insights and recommendations to audiences of varying levels of technical sophistication will enable you to answer specific business questions and innovate for the future. · You will have direct exposure to senior leadership as we communicate results and provide scientific guidance to the business. Major responsibilities include: · Utilizing code (Apache, Spark, Python, R, Scala, etc.) for analyzing data and building statistical models to solve specific business problems. · Collaborate with product, BIEs, software developers, and business leaders to define product requirements and provide analytical support · Build customer-facing reporting to provide insights and metrics which track system performance · Influence the product strategy directly through your analytical insights · Communicating verbally and in writing to business customers and leadership team with various levels of technical knowledge, educating them about our systems, as well as sharing insights and recommendations
US, WA, Seattle
Prime Video is a first-stop entertainment destination offering customers a vast collection of premium programming in one app available across thousands of devices. Prime members can customize their viewing experience and find their favorite movies, series, documentaries, and live sports – including Amazon MGM Studios-produced series and movies; licensed fan favorites; and programming from Prime Video add-on subscriptions such as Apple TV+, Max, Crunchyroll and MGM+. All customers, regardless of whether they have a Prime membership or not, can rent or buy titles via the Prime Video Store, and can enjoy even more content for free with ads. Are you interested in shaping the future of entertainment? Prime Video's technology teams are creating best-in-class digital video experience. As a Prime Video technologist, you’ll have end-to-end ownership of the product, user experience, design, and technology required to deliver state-of-the-art experiences for our customers. You’ll get to work on projects that are fast-paced, challenging, and varied. You’ll also be able to experiment with new possibilities, take risks, and collaborate with remarkable people. We’ll look for you to bring your diverse perspectives, ideas, and skill-sets to make Prime Video even better for our customers. With global opportunities for talented technologists, you can decide where a career Prime Video Tech takes you! In Prime Video READI, our mission is to automate infrastructure scaling and operational readiness. We are growing a team specialized in time series modeling, forecasting, and release safety. This team will invent and develop algorithms for forecasting multi-dimensional related time series. The team will develop forecasts on key business dimensions with optimization recommendations related to performance and efficiency opportunities across our global software environment. 
As a founding member of the core team, you will apply your deep coding, modeling and statistical knowledge to concrete problems that have broad cross-organizational, global, and technology impact. Your work will focus on retrieving, cleansing and preparing large scale datasets, training and evaluating models and deploying them to production where we continuously monitor and evaluate. You will work on large engineering efforts that solve significantly complex problems facing global customers. You will be trusted to operate with complete independence and are often assigned to focus on areas where the business and/or architectural strategy has not yet been defined. You must be equally comfortable digging in to business requirements as you are drilling into design with development teams and developing production ready learning models. You consistently bring strong, data-driven business and technical judgment to decisions. You will work with internal and external stakeholders, cross-functional partners, and end-users around the world at all levels. Our team makes a big impact because nothing is more important to us than delivering for our customers, continually earning their trust, and thinking long term. You are empowered to bring new technologies to your solutions. If you crave a sense of ownership, this is the place to be.
US, WA, Seattle
Amazon Advertising operates at the intersection of eCommerce and advertising, and is investing heavily in building a world-class advertising business. We are defining and delivering a collection of self-service performance advertising products that drive discovery and sales. Our products are strategically important to our Retail and Marketplace businesses, driving long-term growth. We deliver billions of ad impressions and millions of clicks daily and are breaking fresh ground to create world-class products that improve both the shopper and advertiser experience. With a broad mandate to experiment and innovate, we grow at an unprecedented rate with a seemingly endless range of new opportunities.

The Ad Response Prediction team in the Sponsored Products organization builds advanced deep-learning models, large-scale machine-learning pipelines, and real-time serving infrastructure to match shoppers’ intent to relevant ads on all devices, for all contexts, and in all marketplaces. Through precise estimation of shoppers’ interactions with ads and their long-term value, we aim to drive optimal ad allocation and pricing, and to deliver a relevant, engaging, and delightful ad experience to Amazon shoppers. As the business and the complexity of our new initiatives continue to grow, we are looking for talented Applied Scientists to join the team.
Key job responsibilities
As an Applied Scientist II, you will:
* Conduct hands-on data analysis and build large-scale machine-learning models and pipelines
* Work closely with software engineers on detailed requirements, technical designs, and implementation of end-to-end solutions in production
* Run regular A/B experiments, gather data, perform statistical analysis, and communicate the impact to senior management
* Establish scalable, efficient, automated processes for large-scale data analysis, machine-learning model development, model validation, and serving
* Provide technical leadership and research new machine learning approaches to drive continued scientific innovation
* Be a member of the Amazon-wide Machine Learning Community, participating in internal and external meetups, hackathons, and conferences
US, WA, Bellevue
mmPROS Surface Research Science seeks an exceptional Applied Scientist with expertise in optimization and machine learning to optimize Amazon's middle mile transportation network, the backbone of its logistics operations. Amazon's middle mile transportation network utilizes a fleet of semi-trucks, trains, and airplanes to transport millions of packages and other freight between warehouses, vendor facilities, and customers, on time and at low cost. The Surface Research Science team delivers innovation, models, algorithms, and other scientific solutions to efficiently plan and operate the middle mile surface (truck and rail) transportation network. The team focuses on large-scale problems in vehicle route planning, capacity procurement, network design, forecasting, and equipment re-balancing. Your role will be to build innovative optimization and machine learning models to improve driver routing and procurement efficiency. Your models will inform business decisions worth billions of dollars and improve the delivery experience for millions of customers. You will operate as part of a team of innovative, experienced scientists working on optimization and machine learning. You will work in close collaboration with partners across product, engineering, business intelligence, and operations.

Key job responsibilities
- Design and develop optimization and machine learning models to inform our hardest planning decisions.
- Implement models and algorithms in Amazon's production software.
- Lead and partner with product, engineering, and operations teams to drive modeling and technical design for complex business problems.
- Lead complex modeling and data analyses to aid management in making key business decisions and setting new policies.
- Write documentation for scientific and business audiences.

About the team
This role is part of mmPROS Surface Research Science.
Our mission is to build the most efficient and optimal transportation network on the planet, using our science and technology as our biggest advantage. We leverage technologies in optimization, operations research, and machine learning to grow our businesses and solve Amazon's unique logistical challenges. Scientists in the team work in close collaboration with each other and with partners across product, software engineering, business intelligence, and operations. They regularly interact with software engineering teams and business leadership.
US, WA, Seattle
Success in any organization begins with its people, and having a comprehensive understanding of our workforce and how we best utilize their unique skills and experience is paramount to our future success. Come join the team that owns the technology behind AWS People Planning products, services, and metrics. We leverage technology to improve the experience of AWS executives; HR, Recruiting, and Finance leaders; and internal AWS planning partners.

As a Sr. Data Scientist on the AWS Workforce Planning team, you will partner with Software Engineers, Data Engineers, other Scientists, TPMs, Product Managers, and Senior Management to help create world-class solutions. We're looking for people who are passionate about innovating on behalf of customers, demonstrate a high degree of product ownership, and want to have fun while they make history. You will leverage your knowledge in machine learning, advanced analytics, metrics, reporting, and analytic tooling/languages to analyze data and translate it into meaningful insights. You will have end-to-end ownership of the operational and technical aspects of the insights you are building for the business, and you will play an integral role in strategic decision-making. Further, you will build solutions leveraging advanced analytics that enable stakeholders to manage the business and make effective decisions, and you will partner with internal teams to identify process and system improvement opportunities. As a tech expert, you will be an advocate for compelling user experiences and will demonstrate the value of automation and data-driven planning tools in the People Experience and Technology space.
Key job responsibilities
* Engineering execution – drive crisp and timely execution of milestones; consider and advise on key design and technology trade-offs with engineering teams
* Priority management – manage diverse requests and dependencies from teams
* Process improvements – define, implement, and continuously improve delivery and operational efficiency
* Stakeholder management – interface with and influence your stakeholders, balancing business needs vs. technical constraints and driving clarity in ambiguous situations
* Operational excellence – monitor metrics and program health, anticipate and clear blockers, manage escalations

To be successful on this journey, you hold high standards for yourself and everyone you work with, and you always look for opportunities to make our services better.
US, WA, Bellevue
Alexa is the voice-activated digital assistant powering devices like Amazon Echo, Echo Dot, Echo Show, and Fire TV, which are at the forefront of this latest technology wave. To preserve our customers’ experience and trust, the Alexa Sensitive Content Intelligence (ASCI) team creates policies and builds services and tools using machine learning techniques to detect and mitigate sensitive content across Alexa. We are looking for an experienced Senior Applied Scientist to build industry-leading technologies in attribute extraction and sensitive content detection across all languages and countries.

As an Applied Scientist, you will be a tech lead for a team of exceptional scientists, developing novel algorithms and modeling techniques to advance the state of the art in NLP- or CV-related tasks. You will work in a hybrid, fast-paced organization where scientists, engineers, and product managers work together to build customer-facing experiences. You will collaborate with and mentor other scientists to raise the bar of scientific research in Amazon. Your work will directly impact our customers in the form of products and services that make use of speech, language, and computer vision technologies. We are looking for a leader with strong technical expertise and a passion for developing science-driven solutions in a fast-paced environment. The ideal candidate will have a solid understanding of state-of-the-art NLP, Generative AI, LLM fine-tuning, alignment, prompt engineering, and benchmarking solutions, or CV and multi-modal models, e.g., Vision Language Models (VLM), and zero-shot, few-shot, and semi-supervised learning paradigms, with the ability to apply these technologies to diverse business challenges. You will leverage your deep technical knowledge, a strong foundation in machine learning and AI, and hands-on experience in building large-scale distributed systems to deliver reliable, scalable, and high-performance products.
In addition to your technical expertise, you must have excellent communication skills and the ability to influence and collaborate effectively with key stakeholders. You will be joining a select group of people making history producing one of the most highly rated products in Amazon's history, so if you are looking for a challenging and innovative role where you can solve important problems while growing as a leader, this may be the place for you.

Key job responsibilities
You'll lead the science solution design, run experiments, research new algorithms, and find new ways of optimizing the customer experience. You will set examples for the team on good science practice and standards. Beyond theoretical analysis and innovation, you will work closely with talented engineers and ML scientists to put your algorithms and models into practice. Your work will directly impact the trust customers place in Alexa, globally. You will contribute directly to our growth by hiring smart and motivated scientists and establishing teams that can deliver swiftly and predictably, adjusting in an agile fashion to deliver what our customers need.

A day in the life
You will work with a group of talented scientists, researching algorithms and running experiments to test scientific proposals and solutions to improve our sensitive content detection and mitigation. This will involve collaboration with partner teams, including engineering, PMs, data annotators, and other scientists, to discuss data quality, policy, and model development. You will mentor other scientists, review and guide their work, and help develop roadmaps for the team. You will work closely with partner teams across Alexa to deliver platform features that require cross-team leadership.
About the team
The mission of the Alexa Sensitive Content Intelligence (ASCI) team is to (1) minimize negative surprises to customers caused by sensitive content, (2) detect and prevent potential brand-damaging interactions, and (3) build customer trust through appropriate interactions on sensitive topics. The term “sensitive content” includes within its scope a wide range of content categories, such as offensive content (e.g., hate speech, racist speech), profanity, content that is suitable only for certain age groups, politically polarizing content, and religiously polarizing content. The term “content” refers to any material that is exposed to customers by Alexa (including both 1P and 3P experiences) and includes text, speech, audio, and video.
US, WA, Bellevue
Why is this job awesome? This is SUPER high-visibility work: our mission is to provide consistent, accurate, and relevant delivery information on every single page on every Amazon-owned site. MILLIONS of customers will be impacted by your contributions: the changes we make directly affect the customer experience on every Amazon site. This is a great position for someone who likes to leverage machine learning technologies to solve real customer problems, and who wants to see and measure their direct impact on customers.

We are a cross-functional team that owns the ENTIRE delivery experience for customers: from the business requirements to the technical systems that allow us to directly affect the on-site experience from a central service, business and technical team members are integrated so everyone is involved through the entire development process. You will have a chance to develop state-of-the-art machine learning models, including deep learning and reinforcement learning models, to build targeting, recommendation, and optimization services that impact millions of Amazon customers.

- Do you want to join an innovative team of scientists and engineers who use machine learning and statistical techniques to deliver the best delivery experience on every Amazon-owned site?
- Are you excited by the prospect of analyzing and modeling terabytes of data in the cloud and creating state-of-the-art algorithms to solve real-world problems?
- Do you like to own end-to-end business problems/metrics and directly impact the profitability of the company?
- Do you like to innovate and simplify?

If yes, then you may be a great fit to join the DEX AI team.

Key job responsibilities
- Research and implement machine learning techniques to create scalable and effective models in Delivery Experience (DEX) systems
- Solve business problems and identify business opportunities to provide the best delivery experience on all Amazon-owned sites
- Design and develop highly innovative machine learning and deep learning models for big data
- Build state-of-the-art ranking and recommendation models and apply them to the Amazon search engine
- Analyze and understand large amounts of Amazon’s historical business data to detect patterns, analyze trends, and identify correlations and causalities
- Establish scalable, efficient, automated processes for large-scale data analyses, model development, model validation, and model implementation