The 10 most viewed blog posts of 2025

From quantum computing breakthroughs and foundation models for robotics to the evolution of Amazon Aurora and advances in agentic AI, these are the posts that captured readers' attention in 2025.

  1. Time series forecasting has undergone a transformation with the emergence of foundation models, moving beyond traditional statistical methods that extrapolate from single time series. Building on the success of the original Chronos models — which have been downloaded over 600 million times from Hugging Face — Amazon researchers introduce Chronos-2, designed to handle arbitrary forecasting tasks in a zero-shot manner through in-context learning (ICL).

    Chronos-2 pipeline
    The complete Chronos-2 pipeline. Input time series (targets and covariates) are first normalized using a robust scaling scheme, after which a time index and mask meta features are added. The resulting sequences are split into non-overlapping patches and mapped to high-dimensional embeddings via a residual network. The core transformer stack operates on these patch embeddings and produces multi-patch quantile outputs corresponding to the future patches masked out in the input. Each transformer block alternates between time and group attention layers: the time attention layer aggregates information across patches within a single time series, while the group attention layer aggregates information across all series within a group at each patch index. The figure illustrates two multivariate time series with one known covariate each, with corresponding groups highlighted in blue and red. This example is for illustration purposes only; Chronos-2 supports arbitrary numbers of targets and optional covariates.

    Unlike its predecessors, which supported only univariate forecasting, Chronos-2 can jointly predict multiple coevolving time series (multivariate forecasting) and incorporate external factors like promotional schedules or weather conditions (covariate-informed forecasting). For example, cloud operations teams can forecast CPU usage, memory consumption, and storage I/O together, while retailers can factor in planned promotions when predicting demand. The model's group attention mechanism enables it to capture complex interactions between variables, making it particularly valuable for cold-start scenarios where limited historical data is available.
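    As a rough sketch of the preprocessing described in the pipeline figure, the robust scaling and non-overlapping patching steps might look like the following. The median/IQR scaling formula and the patch length here are assumptions for illustration, not Chronos-2's actual implementation:

```python
import numpy as np

def robust_scale(series):
    # Normalize using median and interquartile range, a common robust
    # scaling scheme (assumed here; Chronos-2's exact formula may differ).
    med = np.median(series)
    iqr = np.percentile(series, 75) - np.percentile(series, 25)
    return (series - med) / (iqr if iqr > 0 else 1.0)

def to_patches(series, patch_len):
    # Split a 1-D series into non-overlapping patches, dropping any
    # trailing remainder, before mapping patches to embeddings.
    n = (len(series) // patch_len) * patch_len
    return series[:n].reshape(-1, patch_len)

series = np.arange(20, dtype=float)
patches = to_patches(robust_scale(series), patch_len=4)
print(patches.shape)  # (5, 4)
```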

  2. Quantum computing has long promised exponentially faster computation for certain problems, but quantum devices’ extreme sensitivity to environmental noise has limited practical applications. Amazon Web Services' new Ocelot chip represents a breakthrough in addressing this challenge. Ocelot uses bosonic quantum error correction based on "cat qubits", named after Schrödinger's famous thought experiment.


    The pair of silicon microchips that compose the Ocelot logical-qubit memory chip.

    Traditional quantum error correction methods require thousands of physical qubits per logical qubit to achieve usable error rates, creating an enormous resource overhead. Ocelot's innovative architecture exponentially suppresses bit-flip errors at the physical level while using a simple repetition code to correct phase-flip errors. This approach achieves bit-flip times approaching one second — more than a thousand times longer than conventional superconducting qubits — while maintaining phase-flip times sufficient for error correction. The result is a distance-5 error-correcting code requiring only nine qubits total, versus 49 qubits for equivalent surface code implementations.
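    A classical analogue of the repetition code conveys the idea: with distance 5, majority voting corrects up to two flipped bits. Ocelot's actual decoder operates on quantum syndrome measurements rather than directly reading out data qubits, which this sketch does not model:

```python
from collections import Counter

def encode(bit, distance=5):
    # Repetition code: copy the logical bit across `distance` physical bits.
    return [bit] * distance

def decode(received):
    # Majority vote corrects up to (distance - 1) // 2 flips.
    return Counter(received).most_common(1)[0][0]

codeword = encode(1)
codeword[0] ^= 1
codeword[3] ^= 1   # two errors: still within (5 - 1) // 2 = 2 correctable
print(decode(codeword))  # 1
```

    In Ocelot's scheme, the cat qubits make one error type (bit flips) exponentially rare at the hardware level, so a lightweight code like this suffices for the remaining phase-flip errors.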

  3. As agentic AI systems move from concept to reality, fundamental scientific questions emerge about how these systems should share information and interact strategically. Amazon Scholar Michael Kearns explores several research frontiers that will shape the development of AI agents capable of acting autonomously on users' behalf.

    One intriguing question is what language agents will speak to each other. While agents must communicate with humans in natural language, agent-to-agent communication might be more efficient in the native "language" of neural networks: embeddings, where meanings are represented as vectors in a representational space. Just as websites today offer content in multiple human languages, we may see an "agentic Web" where content is pretranslated into standardized embeddings.

    In this example, the red, green, and blue points are three-dimensional embeddings of restaurants at which three people (Alice, Bob, and Chris) have eaten. (A real-world embedding, by contrast, might have hundreds of dimensions.) Each glowing point represents the center of one of the clusters, and its values summarize the restaurant preferences of the corresponding person. AI agents could use such vector representations, rather than text, to share information with each other.
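    The centroid-as-summary idea in the figure can be sketched in a few lines; the 3-D embeddings and names below are hypothetical:

```python
import numpy as np

def centroid(embeddings):
    # Summarize a person's restaurant history as the mean of the
    # embeddings (the "glowing point" in the figure).
    return np.mean(embeddings, axis=0)

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical 3-D embeddings of restaurants two diners have visited.
alice = np.array([[0.9, 0.1, 0.0], [0.8, 0.2, 0.1]])
bob   = np.array([[0.1, 0.9, 0.2], [0.0, 0.8, 0.3]])
new_restaurant = np.array([0.85, 0.15, 0.05])

# An agent could share only the centroid vector, not the raw history,
# and a recipient could match candidates against it by similarity.
print(cosine(centroid(alice), new_restaurant) >
      cosine(centroid(bob), new_restaurant))  # True
```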

    Context sharing presents another challenge: agents must balance the benefits of sharing working memory with privacy concerns. When your travel agent negotiates with a hotel booking service, how much context about your preferences should it share — and how much financial information should it withhold?

  4. Inspired by how large language models are trained on diverse text corpora, Amazon researchers developed Mitra, a tabular foundation model pretrained entirely on synthetic datasets. While this may seem counterintuitive, real-world tabular data is often limited and heterogeneous, making it more practical to simulate diverse patterns that cover a wide range of possible data distributions.

    The key insight behind Mitra is that the quality of synthetic priors determines how well the model generalizes. Effective priors yield good performance on real tasks, exhibit diversity to prevent overfitting, and offer unique patterns not found elsewhere. Mitra's training mixture includes structural causal models — which combine graphs of causal dependencies with probabilistic equations — and popular tree-based methods like gradient boosting, random forests, and decision trees.

    Overview of the Mitra framework. We pretrain tabular foundation models (TFMs) on a mixture of synthetic data priors, including structural causal models and tree-based models. Each dataset is split into support and query sets. Mitra supports both 2-D attention across rows and columns and 1-D row-wise attention. At inference, the model conditions on support examples from real datasets to predict query labels using in-context learning (ICL) without gradient updates.
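    To give a flavor of a structural-causal-model prior, here is a toy generator with a fixed three-variable chain. Mitra's actual priors sample random causal graphs and equations, so treat this as illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_scm(n_rows):
    # Toy structural causal model: x1 -> x2 -> y, with additive noise.
    x1 = rng.normal(size=n_rows)
    x2 = 2.0 * x1 + rng.normal(scale=0.1, size=n_rows)
    y = (x2 > 0).astype(int)   # binary label from a threshold on x2
    return np.column_stack([x1, x2]), y

# Each sampled dataset becomes one synthetic pretraining task,
# split into support and query sets for in-context learning.
X, y = sample_scm(8)
print(X.shape, y.shape)  # (8, 2) (8,)
```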

    Released as part of AutoGluon 1.4, Mitra demonstrates state-of-the-art performance through in-context learning: it can predict labels for new datasets when conditioned on a moderate number of examples, without requiring gradient updates or task-specific training.

  5. When Amazon Aurora launched in 2015, it promised to combine the cost effectiveness of MySQL with the performance of high-end commercial databases. The key innovation was decoupling computation from storage, a departure from traditional database architectures.

    By moving durability concerns to a separate, purpose-built storage service and offloading caching and logging layers to a scale-out, self-healing system, Aurora addressed the central constraint in cloud computing: the network. This service-oriented architecture protects databases from performance variance and failures while enabling independent scaling of performance, availability, and durability.

    Amazon Aurora: Design Considerations for High Throughput Cloud-Native Relational Databases
    In their 2017 paper, Amazon researchers describe the architecture of Amazon Aurora.

    Over the past decade, Aurora has continued to evolve. Aurora Serverless, introduced in 2018, brought on-demand autoscaling that lets customers adjust computational capacity based on workload needs, using sophisticated resource management techniques including oversubscription, reactive control, and distributed decision making. As of May 2025, all Aurora offerings are now serverless: customers no longer need to choose specific server types or worry about underlying hardware, patching, or backups.

  6. Converting unstructured or poorly structured data into clean, schema-compliant records is a critical task across domains from healthcare to e-commerce. While large language models can perform this task when prompted with schema specifications, this approach has drawbacks: high costs at scale, complex prompts, and limitations on the complexity of the schemas.

    In a pair of recent papers, Amazon researchers introduced SoLM (the structured-object language model), a lightweight specialized model trained to generate objects in specific schemas using a novel self-supervised denoising method. Rather than training SoLM on clean examples, the researchers take existing structured records, introduce artificial noise, and train the model to recover the original forms. By making the noise increasingly aggressive — even completely removing structure or randomly shuffling tokens — the researchers enhance the model’s quality and teach it to operate on completely unstructured input.
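    The denoising setup can be sketched as follows; the specific corruption operations (dropping a field, or flattening and shuffling tokens) are simplified stand-ins for the noise schedule described in the papers:

```python
import json
import random

def corrupt(record, level, rng):
    # Produce a noisy input from a clean structured record; the model
    # is then trained to map the noisy version back to the original.
    items = list(record.items())
    if level == "drop":                 # mild: remove one random field
        items.pop(rng.randrange(len(items)))
        return json.dumps(dict(items))
    tokens = [t for k, v in items for t in (k, str(v))]
    rng.shuffle(tokens)                 # aggressive: structure removed
    return " ".join(tokens)

rng = random.Random(0)
clean = {"brand": "Acme", "color": "red", "size": "M"}
noisy = corrupt(clean, "shuffle", rng)
# Training pair: input `noisy`, target `clean`.
print(len(noisy.split()))  # 6
```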

    A key innovation is confidence-aware substructure beam search (CABS), which applies beam search at the level of key-value pairs rather than individual tokens, using a separately trained confidence model to predict each pair's probability. This approach dramatically improves accuracy while mitigating hallucination risks.
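    A minimal sketch of beam search at the level of key-value pairs, with hand-set confidences standing in for the separately trained confidence model:

```python
def cabs(field_candidates, beam_width=2):
    # Confidence-aware substructure beam search, sketched over whole
    # key-value pairs. `field_candidates` maps each key to a list of
    # (value, confidence) options.
    beams = [({}, 1.0)]
    for key, options in field_candidates.items():
        expanded = [
            ({**record, key: value}, score * conf)
            for record, score in beams
            for value, conf in options
        ]
        expanded.sort(key=lambda b: b[1], reverse=True)
        beams = expanded[:beam_width]   # keep the best partial records
    return beams[0]

candidates = {
    "color": [("red", 0.9), ("crimson", 0.4)],
    "size": [("M", 0.6), ("medium", 0.55)],
}
best, score = cabs(candidates)
print(best)  # {'color': 'red', 'size': 'M'}
```

    Pruning at the field level rather than the token level keeps whole key-value pairs coherent, which is one reason this style of search can reduce hallucinated values.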

  7. Traditional embedding-based information retrieval compares a query vector to every possible response vector in a database, a time-consuming process as datasets grow. Amazon's GENIUS (generative universal multimodal search) model takes a different approach: instead of comparing vectors, it uses input queries to directly generate ID codes for data items.

    With embedding-based retrieval (a), a text embedding must be compared to every possible image embedding, or vice versa. With generative retrieval (b and c), by contrast, a retrieval model generates a single ID for each query. With GENIUS (c), the first digit of the ID code indicates the modality of the output.

    Presented at CVPR 2025, GENIUS is a multimodal model whose inputs and outputs can be any combination of images, texts, or image-text pairs. Two key innovations enable GENIUS's performance. The first is semantic quantization, where IDs are generated piecemeal, with each new ID segment homing in more precisely on the target data item's location in the representational space. The second is query augmentation, which generates additional training queries by interpolating between initial queries and target IDs in the representational space, helping the model generalize to new data types.
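    Coarse-to-fine ID generation can be illustrated with a simple residual quantizer. The codebook sizes and vectors below are made up, and GENIUS additionally reserves the first ID digit for the output modality, which this sketch omits:

```python
import numpy as np

def quantize(vector, codebooks):
    # Each codebook assigns one ID digit to the current residual, and
    # the residual is passed to the next, finer level.
    ids, residual = [], vector.copy()
    for book in codebooks:
        idx = int(np.argmin(np.linalg.norm(book - residual, axis=1)))
        ids.append(idx)
        residual = residual - book[idx]
    return ids

rng = np.random.default_rng(0)
codebooks = [rng.normal(size=(4, 8)) for _ in range(3)]  # 3 levels, 4 codes each
item = rng.normal(size=8)
print(quantize(item, codebooks))  # a 3-digit semantic ID
```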

  8. Foundation models have transformed language and computer vision, but their adoption in scientific domains like computational fluid dynamics has been slower. What will it take for them to play a more significant role in scientific applications?

    A DrivAerML dataset surface plot of the normalized magnitude of wall shear stress (wall friction coefficient).

    To help answer this question, Amazon applied scientist Danielle Maddix Robinson explores foundation models’ application to time series forecasting, with both univariate and spatiotemporal data. Scientific foundation models face challenges that large language models don’t: severe data scarcity (since generating high-quality scientific data often requires expensive numerical simulations), the constraints of inviolable physical laws, and the need for robust uncertainty quantification in safety-critical applications.

    For univariate time series, Robinson and her colleagues address data scarcity with synthetic pretraining data. The resulting model demonstrated surprising strength on chaotic dynamical systems — not because it was designed for them but because of its ability to "parrot" past history without regressing to the mean, as classical methods do. For spatiotemporal forecasting in domains like weather prediction and aerodynamics, the researchers found important trade-offs between accuracy and memory consumption across different architectures, with some models better suited for short-term forecasts and others for long-term stability.
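    Synthetic pretraining data for univariate forecasting might be generated along these lines; the trend-plus-sinusoids recipe is a generic illustration, not the generator the researchers used:

```python
import numpy as np

def synthetic_series(length, rng):
    # One synthetic training series: random linear trend, a few random
    # seasonal components, and observation noise.
    t = np.arange(length)
    series = rng.normal() * t / length
    for _ in range(rng.integers(1, 4)):        # 1-3 seasonal components
        period = rng.integers(4, length // 2)
        series += rng.normal() * np.sin(2 * np.pi * t / period
                                        + rng.uniform(0, 2 * np.pi))
    return series + rng.normal(scale=0.1, size=length)

rng = np.random.default_rng(0)
batch = np.stack([synthetic_series(64, rng) for _ in range(8)])
print(batch.shape)  # (8, 64)
```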

  9. Managing fleets of thousands of mobile robots in Amazon fulfillment centers requires predicting the robots’ future locations, to minimize congestion when assigning tasks and routes. But using the robots’ navigation algorithms to simulate their interactions faster than real time would be prohibitively resource intensive.

    Amazon's DeepFleet foundation models learn to predict robot locations from billions of hours of real-world navigation data collected from the million-plus robots deployed across Amazon fulfillment and sortation centers. Like language models that learn general competencies from diverse texts, DeepFleet learns general traffic flow patterns that enable it to quickly infer how situations will likely unfold and help assign tasks and route robots around congestion.

    Sample models of a fulfillment center (top) and a sortation center (bottom).

    Researchers experimented with four distinct model architectures — robot-centric, robot-floor, image-floor, and graph-floor — each offering a different answer to fundamental design questions: Should inputs represent individual robot states or whole-floor states? Should floor layouts be encoded as features, images, or graphs? How should time be handled?

  10. AI agents represent a leap forward in generative AI, a move from chat interfaces to systems that act autonomously on users' behalf — booking travel, making purchases, building software. But how do agentic systems actually work? Amazon vice president and distinguished engineer Marc Brooker demystifies the core components of agents and explains the design choices behind AWS's Bedrock AgentCore framework.

    At their heart, agents run models and tools in a loop to achieve goals. The user provides a goal; the agent uses an LLM to plan how to achieve it; and the agent repeatedly calls tools — databases, APIs, services — based on the model's instructions, updating its plan as it receives responses.
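    That loop can be sketched in a few lines; `llm` and the tool registry below are stand-in callables for illustration, not Bedrock AgentCore APIs:

```python
def run_agent(goal, llm, tools, max_steps=10):
    # Minimal agent loop: the model proposes either a tool call or a
    # final answer, and tool results are appended to the running context.
    context = [f"goal: {goal}"]
    for _ in range(max_steps):
        action = llm(context)           # e.g. {"tool": ..., "args": ...}
        if "answer" in action:
            return action["answer"]
        result = tools[action["tool"]](**action["args"])
        context.append(f"{action['tool']} -> {result}")
    return None

# Toy model: look up the weather once, then answer from the tool result.
def toy_llm(context):
    if any(line.startswith("weather ->") for line in context):
        return {"answer": context[-1].split("-> ")[1]}
    return {"tool": "weather", "args": {"city": "Seattle"}}

print(run_agent("weather in Seattle", toy_llm,
                {"weather": lambda city: "rainy"}))  # rainy
```

    The `max_steps` cap mirrors a real concern: agent loops need explicit termination and budget limits so a confused model cannot call tools indefinitely.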

    But making such systems work in practice requires sophisticated infrastructure. AgentCore uses Firecracker microVMs to provide secure, efficient isolation for each agent session, with startup times measured in milliseconds and overhead as low as a few megabytes. The AgentCore Gateway service manages tool calls using standards like the Model Context Protocol, translating between the LLM's outputs and tool input specifications. When no API exists for a needed action, Amazon's Nova Act enables computer use, letting agents interact with any website by pointing and clicking.

US, VA, Herndon
This position requires that the candidate selected be a US Citizen and must currently possess and maintain an active TS/SCI security clearance with polygraph. The Amazon Web Services Professional Services (ProServe) team is seeking a skilled Data Scientist to join our team at Amazon Web Services (AWS). Are you looking to work at the forefront of Machine Learning and AI? Would you be excited to apply Generative AI algorithms to solve real world problems with significant impact? In this role, you'll work directly with customers to design, evangelize, implement, and scale AI/ML solutions that meet their technical requirements and business objectives. You'll be a key player in driving customer success through their AI transformation journey, providing deep expertise in data science, machine learning, generative AI, and best practices throughout the project lifecycle. As a Data Scientist within the AWS Professional Services organization, you will be proficient in architecting complex, scalable, and secure machine learning solutions tailored to meet the specific needs of each customer. You'll help customers imagine and scope the use cases that will create the greatest value for their businesses, develop statistical models and analytical frameworks, select and train the right models, and define paths to navigate technical or business challenges. Working closely with stakeholders, you'll assess current data infrastructure, perform exploratory data analysis, develop proof-of-concepts, and propose effective strategies for implementing AI and generative AI solutions at scale. You will design and run experiments, research new algorithms, extract insights from complex datasets, and find new ways of optimizing risk, profitability, and customer experience. The AWS Professional Services organization is a global team of experts that help customers realize their desired business outcomes when using the AWS Cloud. 
We work together with customer teams and the AWS Partner Network (APN) to execute enterprise cloud computing initiatives. Our team provides assistance through a collection of offerings which help customers achieve specific outcomes related to enterprise cloud adoption. We also deliver focused guidance through our global specialty practices, which cover a variety of solutions, technologies, and industries. Key job responsibilities - Designing and implementing complex, scalable, and secure AI/ML solutions on AWS tailored to customer needs, including developing statistical models, performing feature engineering, and selecting appropriate algorithms for specific use cases - Developing and deploying machine learning models and generative AI applications that solve real-world business problems, conducting experiments, performing rigorous statistical analysis, and optimizing for performance at scale - Collaborating with customer stakeholders to identify high-value AI/ML use cases, gather requirements, analyze data quality and availability, and propose effective strategies for implementing machine learning and generative AI solutions - Providing technical guidance on applying AI, machine learning, and generative AI responsibly and cost-efficiently, performing model validation and interpretation, troubleshooting throughout project delivery, and ensuring adherence to best practices - Acting as a trusted advisor to customers on the latest advancements in AI/ML, emerging technologies, statistical methodologies, and innovative approaches to leveraging diverse data sources for maximum business impact - Sharing knowledge within the organization through mentoring, training, creating reusable AI/ML artifacts and analytical frameworks, and working with team members to prototype new technologies and evaluate technical feasibility
US, VA, Arlington
This position requires that the candidate selected be a US Citizen and currently possess and maintain an active Top Secret security clearance. Join a sizeable team of data scientists, research scientists, and machine learning engineers that develop computer vision models on overhead imagery for a high-impact government customer. We own the entire machine learning development life cycle, developing models on customer data: Exploring the data and brainstorming and prioritizing ideas for model development Implementing new features in our sizable code base Training models in support of experimental or performance goals T&E-ing, packaging, and delivering models We perform this work on both unclassified and classified networks, with portions of our team working on each network. We seek a new team member to work on the classified networks. Three to four days a week, you would travel to the customer site in Northern Virginia to perform tasking as described below. Weekdays when you do not travel to the customer site, you would work from your local Amazon office. You would work collaboratively with teammates to use and contribute to a well-maintained code base that the team has developed over the last several years, almost entirely in python. You would have great opportunities to learn from team members and technical leads, while also having opportunities for ownership of important project workflows. You would work with Jupyter Notebooks, the Linux command line, Apache AirFlow, GitLab, and Visual Studio Code. We are a very collaborative team, and regularly teach and learn from each other, so, if you are familiar with some of these technologies, but unfamiliar with others, we encourage you to apply - especially if you are someone who likes to learn. We are always learning on the job ourselves. 
Key job responsibilities With support from technical leads, carry out tasking across the entire machine learning development lifecycle to develop computer vision models on overhead imagery: - Run data conversion pipelines to transform customer data into the structure needed by models for training - Perform EDA on the customer data - Train deep neural network models on overhead imagery - Develop and implement hyper-parameter optimization strategies - Test and Evaluate models and analyze results - Package and deliver models to the customer - Incorporate model R&D from low-side researchers - Implement new features to the model development code base - Collaborate with the rest of the team on long term strategy and short-medium term implementation. - Contribute to presentations to the customer regarding the team’s work.
US, WA, Seattle
Do you want to join an innovative team of scientists who use machine learning and statistical techniques to help Amazon provide the best customer experience by preventing eCommerce fraud? Are you excited by the prospect of analyzing and modeling terabytes of data and creating state-of-the-art algorithms to solve real world problems? Do you like to own end-to-end business problems/metrics and directly impact the profitability of the company? Do you enjoy collaborating in a diverse team environment? If yes, then you may be a great fit to join the Amazon Selling Partner Trust & Store Integrity Science Team. We are looking for a talented scientist who is passionate to build advanced machine learning systems that help manage the safety of millions of transactions every day and scale up our operation with automation. Key job responsibilities Innovate with the latest GenAI/LLM/VLM technology to build highly automated solutions for efficient risk evaluation and automated operations Design, develop and deploy end-to-end machine learning solutions in the Amazon production environment to create impactful business value Learn, explore and experiment with the latest machine learning advancements to create the best customer experience A day in the life You will be working within a dynamic, diverse, and supportive group of scientists who share your passion for innovation and excellence. You'll be working closely with business partners and engineering teams to create end-to-end scalable machine learning solutions that address real-world problems. You will build scalable, efficient, and automated processes for large-scale data analyses, model development, model validation, and model implementation. You will also be providing clear and compelling reports for your solutions and contributing to the ongoing innovation and knowledge-sharing that are central to the team's success.
US, WA, Seattle
Are you passionate about applying machine learning and advanced statistical techniques to protect one of the world's largest online marketplaces? Do you want to be at the forefront of developing innovative solutions that safeguard Amazon's customers and legitimate sellers while ensuring a fair and trusted shopping experience? Do you thrive in a collaborative environment where diverse perspectives drive breakthrough solutions? If yes, we invite you to join the Amazon Risk Intelligence Science Team. We're seeking an exceptional scientist who can revolutionize how we protect our marketplace through intelligent automation. As a key member of our team, you'll develop and deploy state-of-the-art machine learning systems that analyze millions of seller interactions daily, ensuring the integrity and trustworthiness of Amazon's marketplace while scaling our operations to new heights. Your work will directly impact the safety and security of the shopping experience for hundreds of millions of customers worldwide, while supporting the growth of honest entrepreneurs and businesses. 
Key job responsibilities • Use machine learning and statistical techniques to create scalable abuse detection solutions that identify fraudulent seller behavior, account takeovers, and marketplace manipulation schemes • Innovate with the latest GenAI technology to build highly automated solutions for efficient seller verification, transaction monitoring, and risk assessment • Design, develop and deploy end-to-end machine learning solutions in the Amazon production environment to prevent and detect sophisticated abuse patterns across the marketplace • Learn, explore and experiment with the latest machine learning advancements to protect customer trust and maintain marketplace integrity while supporting legitimate selling partners • Collaborate with cross-functional teams to develop comprehensive risk models that can adapt to evolving abuse patterns and emerging threats About the team You'll be working closely with business partners and engineering teams to create end-to-end scalable machine learning solutions that address real-world problems. You will build scalable, efficient, and automated processes for large-scale data analyses, model development, model validation, and model implementation. You will also be providing clear and compelling reports for your solutions and contributing to the ongoing innovation and knowledge-sharing that are central to the team's success.