Search results

15,385 results found
  • Daniel Khashabi, Mark Sammons, Christos Christodoulopoulos, Bhargav Mangipudi, Tom Redman, Ben Zhou, Guanheng Luo, Shaoshi Ling, Dan Roth
    LREC 2018
    2018
    Implementing a Natural Language Processing (NLP) system requires considerable engineering effort: creating data-structures to represent language constructs; reading corpora annotations into these data-structures; applying off-the-shelf NLP tools to augment the text representation; extracting features and training machine learning components; conducting experiments and computing performance statistics; and
  • Daniel Podlogar, Jacob Peddicord, Eric ChenMou, Victor Rojo, Rob Pittfield
    2018
    This repository holds short helper code samples that demonstrate how to achieve certain functionality with enterprise Alexa skills, and in particular with Alexa for Business. Some samples are more complete, such as the Help Desk skill, while others focus on specific components of a use case or integration.
  • Daniel Schoepe, Sean McLaughlin, Richard Cook, James Siri, Henri Yandell
    2018
    Quivela2 is a prototype tool for verifying cryptographic protocols modeled as object-oriented programs and for constructing proofs of their security.
  • Muni Sakkuru, Mark Wharton, Shotaro Uchida
    2018
    The Alexa Auto SDK contains essential client-side software required to integrate Alexa into the automobile. The Auto SDK provides libraries that connect to Alexa and expose interfaces for your vehicle software to implement the platform-specific behavior for audio input, media streaming, calling through a connected phone, turn-by-turn navigation, controlling vehicle features such as heaters and lights, and
  • Jonathan Breedlove, James Siri, Nikhil Yogendra Murali, Olivia Sung, Mario Doiron
    2018
    The Alexa APIs for Java consist of Java POJOs that represent the request and response JSON of Alexa services. These models act as a core dependency for the Alexa Skills Kit Java SDK. These model classes are auto-generated using the JSON schemas in the developer documentation.
  • James Thorne, Andreas Vlachos, Christos Christodoulopoulos, Arpit Mittal
    2018
    In this paper we introduce a new publicly available dataset for verification against textual sources, FEVER: Fact Extraction and VERification. It consists of 185,445 claims generated by altering sentences extracted from Wikipedia and subsequently verified without knowledge of the sentence they were derived from. The claims are classified as SUPPORTED, REFUTED or NOTENOUGHINFO by annotators achieving 0.6841
  • Nikhil Yogendra Murali, Shreyas Govinda Raju, Jacob Peddicord, Mario Doiron, Kaiming Tao
    2018
    The Alexa APIs for Python consist of Python classes that represent the request and response JSON of Alexa services. These models act as a core dependency for the Alexa Skills Kit Python SDK. These model classes are auto-generated using the JSON schemas in the developer documentation.
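    As a hedged illustration of how these generated models are typically used, the sketch below builds a simple response object; the ask_sdk_model class names and snake_cased keyword arguments are assumptions drawn from the description above and should be checked against the installed package.
    # Illustrative sketch only: assumes the generated ask_sdk_model package exposes
    # Response and ui classes whose keyword arguments mirror the JSON schema fields.
    from ask_sdk_model import Response
    from ask_sdk_model.ui import PlainTextOutputSpeech, SimpleCard

    def build_hello_response() -> Response:
        # Each model class maps to one object in the Alexa response JSON.
        speech = PlainTextOutputSpeech(text="Hello from Alexa!")
        card = SimpleCard(title="Greeting", content="Hello from Alexa!")
        return Response(output_speech=speech, card=card, should_end_session=True)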
  • This repository lets you train neural network models for performing end-to-end full-page handwriting recognition on the IAM Dataset using the Apache MXNet deep learning framework.
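    For orientation only, here is a minimal MXNet Gluon sketch of the kind of CNN-plus-BiLSTM line recognizer such a repository trains; the layer sizes, vocabulary size, and input shape are illustrative assumptions, not the repository's actual network.
    # Illustrative sketch (not the repository's model): a small CNN feature extractor
    # followed by a bidirectional LSTM producing per-time-step character scores,
    # as commonly trained with a CTC-style loss for handwriting line recognition.
    import mxnet as mx
    from mxnet.gluon import nn, rnn

    vocab_size = 80  # assumed character-set size

    features = nn.HybridSequential()
    features.add(nn.Conv2D(channels=32, kernel_size=3, padding=1, activation='relu'),
                 nn.MaxPool2D(pool_size=2),
                 nn.Conv2D(channels=64, kernel_size=3, padding=1, activation='relu'),
                 nn.MaxPool2D(pool_size=2))
    sequence_model = rnn.LSTM(hidden_size=128, num_layers=2, bidirectional=True)
    classifier = nn.Dense(vocab_size, flatten=False)
    for block in (features, sequence_model, classifier):
        block.initialize(mx.init.Xavier())

    x = mx.nd.random.uniform(shape=(1, 1, 32, 256))                 # one grayscale line image (NCHW)
    f = features(x)                                                 # (1, 64, 8, 64)
    f = mx.nd.transpose(f, axes=(3, 0, 1, 2)).reshape((64, 1, -1))  # (seq_len, batch, features)
    h = sequence_model(f)                                           # (64, 1, 256)
    logits = classifier(h)                                          # (64, 1, vocab_size)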
  • Noah Meyerhans, Samuel Karp, Austin Vazquez, Xibin Gao, Kern Walster, Arun Gupta, Michael Coulter, Stanislas Lange, Bobby Gammill, Yasin Turan, Volker Simonis, Henry Wang, Nikita Mochalov, Luminita Voicu, Kazuyoshi Kato, Cody Roseborough, Boris Popovschi, Antonio Ojea
    2018
    Firectl is a basic command-line tool that lets you run arbitrary Firecracker MicroVMs via the command line. This lets you run a fully functional Firecracker MicroVM, including console access, read/write access to filesystems, and network connectivity.
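    As a rough sketch of that workflow, the snippet below shells out to firectl from Python; it assumes firectl is on PATH and uses the --kernel and --root-drive options described in the project README, with placeholder paths.
    # Illustrative sketch: start a Firecracker microVM through the firectl CLI.
    # The --kernel / --root-drive flags follow the project README; paths are placeholders.
    import subprocess

    subprocess.run(
        [
            "firectl",
            "--kernel=/path/to/vmlinux",          # uncompressed guest kernel
            "--root-drive=/path/to/rootfs.ext4",  # root filesystem image
        ],
        check=True,
    )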
  • Kazuyoshi Kato, Xibin Gao, Samuel Karp, Noah Meyerhans, Erik Sipsma, Austin Vazquez, Maksym Pavlenko, Jerome Gravel-Niquet, David Son
    2018
    This package is a Go library to interact with the Firecracker API. It is designed as an abstraction of the OpenAPI-generated client that allows for convenient manipulation of Firecracker VMs from Go programs. There are some Firecracker features that are not yet supported by the SDK; these are tracked as GitHub issues with the firecracker-feature label. Contributions to address missing features are welcome.
  • Radu Weiss, Raj Bennin, Takahiro Itazuri, Will Stewart, Edouard Bonlieu, Nikita Sobolev, Alexandra Iordache, Christopher Mayfield, Romaric Philogène, Alberto P. Martí
    2018
    This is the presentation website for Firecracker. We take pull requests for content and FAQ improvements, as well as additions to the list of Firecracker integrations. When contributing to HTML pages in this repo, please format the entire file with the latest stable Prettier release, using the settings below for the HTML parser.
  • This repository provides resources for implementing a visual search engine. Visual search is the central component of an interface where instead of asking for something by voice or text, you show what you are looking for. When shown a real-world physical item, an AWS DeepLens device generates a feature vector representing that item. The feature vector generated by the AWS DeepLens device is sent to the
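    To make the matching step concrete, here is a small self-contained sketch of the kind of nearest-neighbor lookup a visual search index performs on those feature vectors; the catalog size and 512-dimensional vectors are made-up placeholders, not details of this repository.
    # Illustrative sketch: rank catalog items by cosine similarity to a query
    # feature vector (e.g. one produced on-device). Data here is synthetic.
    import numpy as np

    def top_k_matches(query: np.ndarray, index: np.ndarray, k: int = 5) -> np.ndarray:
        """Return indices of the k index rows most similar to the query vector."""
        q = query / np.linalg.norm(query)
        idx = index / np.linalg.norm(index, axis=1, keepdims=True)
        scores = idx @ q                      # cosine similarity per catalog item
        return np.argsort(scores)[::-1][:k]

    rng = np.random.default_rng(0)
    catalog = rng.normal(size=(10_000, 512))  # assumed: 10k items, 512-dim features
    print(top_k_matches(rng.normal(size=512), catalog))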
  • Jim Thario, Vinay Calastry, Jacob Peddicord, Yufei Gao, Jared Stewart, Shinya Kawaguchi, Ritchie Robershaw, Raees Iqbal, Tomohiro Matsuzawa
    2018
    Secure Packager and Encoder Key Exchange (SPEKE) is part of the AWS Elemental content encryption protection strategy for media services customers. SPEKE defines the standard for communication between AWS Media Services and digital rights management (DRM) system key servers. SPEKE is used to supply keys to encrypt video on demand (VOD) content through AWS Elemental MediaConvert and for live content through
  • James Thorne, Andreas Vlachos, Christos Christodoulopoulos, Arpit Mittal
    2018
    FEVER (Fact Extraction and VERification) consists of 185,445 claims generated by altering sentences extracted from Wikipedia and subsequently verified without knowledge of the sentence they were derived from. The claims are classified as Supported, Refuted or NotEnoughInfo. For the first two classes, the annotators also recorded the sentence(s) forming the necessary evidence for their judgment.
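    For readers working with the release, a short sketch of iterating over the claim records follows; the JSON-lines layout and the field and label names are assumptions based on the description above and should be verified against the actual files.
    # Illustrative sketch: tally FEVER labels from a JSON-lines file. The field name
    # "label" and the label strings are assumptions, not verified against the release.
    import json
    from collections import Counter

    def label_counts(path: str) -> Counter:
        counts = Counter()
        with open(path, encoding="utf-8") as f:
            for line in f:
                record = json.loads(line)
                counts[record["label"]] += 1  # e.g. SUPPORTED / REFUTED / NOTENOUGHINFO
        return counts

    # print(label_counts("fever_train.jsonl"))  # hypothetical file name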
  • May 29, 2018
    As Alexa-enabled devices continue to expand into new countries, we propose an approach for quickly bootstrapping machine-learning models in new languages, with the aim of more efficiently bringing Alexa to new customers around the world.
  • May 24, 2018
    Amazon scientists are continuously expanding Alexa’s natural-language-understanding (NLU) capabilities to make Alexa smarter, more useful, and more engaging.
  • May 11, 2018
    Smart speakers, such as the Amazon Echo family of products, are growing in popularity among consumer and business audiences. In order to improve the automatic speech recognition (ASR) and full-duplex voice communication (FDVC) performance of these smart speakers, acoustic echo cancellation (AEC) and noise reduction systems are required. These systems reduce the noise and echoes that can impact operation, such as an Echo device accurately hearing the wake word “Alexa.”
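    As a generic illustration of the echo-cancellation idea (not Amazon's implementation), the sketch below runs a standard normalized-LMS adaptive filter that estimates the echo of the far-end reference signal and subtracts it from the microphone signal; the filter length and step size are arbitrary.
    # Generic NLMS acoustic echo canceller sketch (illustrative only): adapt an
    # estimate of the echo path from the far-end reference and subtract the
    # predicted echo from the microphone signal, leaving near-end speech.
    import numpy as np

    def nlms_aec(mic: np.ndarray, far_end: np.ndarray,
                 taps: int = 128, mu: float = 0.5, eps: float = 1e-6) -> np.ndarray:
        w = np.zeros(taps)                  # adaptive echo-path estimate
        out = np.zeros_like(mic, dtype=float)
        for n in range(taps, len(mic)):
            x = far_end[n - taps:n][::-1]   # most recent far-end samples
            e = mic[n] - w @ x              # residual after echo removal
            w += mu * e * x / (x @ x + eps)
            out[n] = e
        return out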
  • Arpit Mittal
    May 04, 2018
    In recent years, the amount of textual information produced daily has increased exponentially. This information explosion has been accelerated by the ease with which data can be shared across the web. Most of the textual information is generated as free-form text, and only a small fraction is available in structured format (Wikidata, Freebase etc.) that can be processed and analyzed directly by machines.
  • April 25, 2018
    This morning, I am delivering a keynote talk at the World Wide Web Conference in Lyon, France, with the title, Conversational AI for Interacting with the Digital and Physical World.
  • April 12, 2018
    The Amazon Echo is a hands-free smart home speaker you control with your voice. The first important step in enabling a delightful customer experience with an Echo or other Alexa-enabled device is wake word detection, so accurate detection of “Alexa” or substitute wake words is critical. It is challenging to build a wake word system with low error rates when on-device computation resources are limited and background noise such as speech or music is present.
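    As a toy illustration of the detection step only (not the production system), the snippet below smooths per-frame wake-word posteriors from an assumed upstream classifier and fires when the smoothed score crosses a threshold; the window length and threshold are arbitrary.
    # Toy wake-word trigger sketch: smooth per-frame posteriors from an assumed
    # upstream keyword classifier and trigger on a threshold crossing.
    import numpy as np

    def detect_wake_word(posteriors: np.ndarray, window: int = 30,
                         threshold: float = 0.8) -> bool:
        """posteriors: per-frame probability that the wake word is being spoken."""
        if len(posteriors) < window:
            return False
        smoothed = np.convolve(posteriors, np.ones(window) / window, mode="valid")
        return bool(np.max(smoothed) >= threshold)

    # Synthetic example: a short confident stretch should trigger detection.
    scores = np.concatenate([np.full(100, 0.05), np.full(40, 0.95), np.full(60, 0.05)])
    print(detect_wake_word(scores))  # True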
BR, SP, Sao Paulo
Amazon launched the Generative AI Innovation Center in June 2023 to help AWS customers accelerate innovation and business success with Generative AI (https://press.aboutamazon.com/2023/6/aws-announces-generative-ai-innovation-center). This Innovation Center provides opportunities to innovate in a fast-paced organization that contributes to breakthrough projects and technologies that are deployed across devices and the cloud. As a data scientist, you are proficient in designing and developing advanced generative AI solutions to solve diverse customer problems. You'll work with terabytes of text, images, and other types of data to solve real-world problems through Gen AI. You will work closely with account teams and ML strategists to define the use case, and with other ML scientists and engineers on the team to design experiments and find new ways to deliver customer value. The selected person will possess technical and customer-facing skills that will enable you to work as part of the AWS technical team within our solution providers' ecosystem as well as directly with end customers. You will be able to lead discussions with customer and partner staff and senior management.
A day in the life
Here at AWS, we embrace our differences. We are committed to promoting our culture of inclusion. We have ten employee-led affinity groups, reaching 40,000 employees in more than 190 branches around the world. We have innovative benefit offerings and host annual and ongoing learning experiences, including our Conversations on Race and Ethnicity (CORE) and AmazeCon (gender diversity) conferences. Amazon's culture of inclusion is reinforced by our 16 Leadership Principles, which remind team members to seek diverse perspectives, learn and be curious, and build trust.
About the team
Work/life balance
Our team highly values work-life balance. It's not about how many hours you spend at home or at work; it's about the flow you establish that brings energy to both parts of your life. We believe that finding the right balance between your personal and professional life is fundamental to lifelong happiness and fulfillment. We offer flexibility in working hours and encourage you to find your own work-life balance.
Mentoring and career growth
Our team is dedicated to supporting new members. We have a broad mix of experience levels and mandates and are building an environment that celebrates knowledge sharing and mentorship. Our senior members enjoy one-on-one guidance and thorough but gentle code reviews. We care about your career growth and strive to assign projects based on what will help each team member become a more well-rounded engineer and enable them to take on more complex tasks in the future.
We are open to hiring candidates to work out of one of the following locations: Sao Paulo, SP, BRA
MX, DIF, Mexico City
Amazon launched the Generative AI Innovation Center (GAIIC) in Jun 2023 to help AWS customers accelerate the use of Generative AI to solve business and operational problems and promote innovation in their organization (https://press.aboutamazon.com/2023/6/aws-announces-generative-ai-innovation-center). GAIIC provides opportunities to innovate in a fast-paced organization that contributes to game-changing projects and technologies that get deployed on devices and in the cloud. As a Data Science Manager in GAIIC, you'll partner with technology and business teams to build new GenAI solutions that delight our customers. You will be responsible for directing a team of data scientists, deep learning architects, and ML engineers to build generative AI models and pipelines, and deliver state-of-the-art solutions to customers' business and mission problems. Your team will be working with terabytes of text, images, and other types of data to address real-world problems. The successful candidate will possess both technical and customer-facing skills that will allow them to be the technical "face" of AWS within our solution providers' ecosystem as well as directly to end customers. You will be able to drive discussions with senior technical and management personnel within customers and partners, and will have the technical background that enables you to interact with and give guidance to data/research/applied scientists and software developers. The ideal candidate will also have a demonstrated ability to think strategically about business, product, and technical issues. Finally, and of critical importance, the candidate will be an excellent technical team manager, someone who knows how to hire, develop, and retain high quality technical talent.
AWS Sales, Marketing, and Global Services (SMGS) is responsible for driving revenue, adoption, and growth from the largest and fastest growing small- and mid-market accounts to enterprise-level customers including public sector. The AWS Global Support team interacts with leading companies and believes that world-class support is critical to customer success. AWS Support also partners with a global list of customers that are building mission-critical applications on top of AWS services.
A day in the life
Here at AWS, we embrace our differences. We are committed to furthering our culture of inclusion. We have ten employee-led affinity groups, reaching 40,000 employees in over 190 chapters globally. We have innovative benefit offerings, and host annual and ongoing learning experiences, including our Conversations on Race and Ethnicity (CORE) and AmazeCon (gender diversity) conferences. Amazon's culture of inclusion is reinforced within our 16 Leadership Principles, which remind team members to seek diverse perspectives, learn and be curious, and earn trust.
About the team
Work/Life Balance
Our team puts a high value on work-life balance. It isn't about how many hours you spend at home or at work; it's about the flow you establish that brings energy to both parts of your life. We believe striking the right balance between your personal and professional life is critical to life-long happiness and fulfillment. We offer flexibility in working hours and encourage you to find your own balance between your work and personal lives.
Mentorship & Career Growth
Our team is dedicated to supporting new members. We have a broad mix of experience levels and tenures, and we're building an environment that celebrates knowledge sharing and mentorship.
Our senior members enjoy one-on-one mentoring and thorough, but kind, code reviews. We care about your career growth and strive to assign projects based on what will help each team member develop into a better-rounded engineer and enable them to take on more complex tasks in the future. We are open to hiring candidates to work out of one of the following locations: Mexico City, DIF, MEX
US, CA, Palo Alto
The Amazon Search Mission Understanding (SMU) team is at the forefront of revolutionizing the online shopping experience through the Amazon search page. Our ambition extends beyond facilitating a seamless shopping journey; we are committed to creating the next generation of intelligent shopping assistants. Leveraging cutting-edge Large Language Models (LLMs), we aim to redefine navigation and decision-making in e-commerce by deeply understanding our users' shopping missions, preferences, and goals. By developing responsive and scalable solutions, we not only accomplish the shopping mission but also foster unparalleled trust among our customers. Through our advanced technology, we generate valuable insights, providing a guided navigation system into various search missions, ensuring a comprehensive and holistic shopping experience. Our dedication to continuous improvement through constant measurement and enhancement of the shopper experience is crucial, as we strategically navigate the balance between immediate results and long-term business growth.
We are seeking an Applied Scientist who is not just adept in the theoretical aspects of Machine Learning (ML), Artificial Intelligence (AI), and Large Language Models (LLMs) but also possesses a pragmatic, hands-on approach to navigating the complexities of innovation. The ideal candidate will have profound expertise in developing, deploying, and contributing to the next-generation shopping search engine, including but not limited to Retrieval-Augmented Generation (RAG) models, specifically tailored towards enhancing the Rufus application, an integral part of our mission to revolutionize shopping assistance. You will take the lead in conceptualizing, building, and launching groundbreaking models that significantly improve our understanding of and capabilities in enhancing the search experience. A successful applicant will display a comprehensive skill set across machine learning model development, implementation, and optimization. This includes a strong foundation in data management, software engineering best practices, and a keen awareness of the latest developments in distributed systems technology. We are looking for individuals who are determined, analytically rigorous, passionate about applied sciences, creative, and possess strong logical reasoning abilities.
Join the Search Mission Understanding team, a group of pioneering ML scientists and engineers dedicated to building core ML models and developing the infrastructure for model innovation. As part of Amazon Search, you will experience the dynamic, innovative culture of a startup, backed by the extensive resources of Amazon.com (AMZN), a global leader in internet services. Our collaborative, customer-centric work environment spans our offices in Palo Alto, CA, and Seattle, WA, offering a unique blend of opportunities for professional growth and innovation.
Key job responsibilities
Collaborate with cross-functional teams to identify requirements for ML model development, focusing on enhancing mission understanding through innovative AI techniques, including Retrieval-Augmented Generation and LLMs in general.
Design and implement scalable ML models capable of processing and analyzing large datasets to improve search and shopping experiences. Must have a strong background in machine learning, AI, or computational sciences.
Lead the management and experiments of ML models at scale, applying advanced ML techniques to optimize science solutions.
Serve as a technical lead and liaison for ML projects, facilitating collaboration across teams and addressing technical challenges. Requires strong leadership and communication skills, with a PhD in Computer Science, Machine Learning, or a related field. We are open to hiring candidates to work out of one of the following locations: Palo Alto, CA, USA | Seattle, WA, USA
US, WA, Seattle
Amazon is looking for a passionate, talented, and inventive Senior Applied Scientist with a strong machine learning background to help build industry-leading language technology. Our mission is to provide a delightful experience to Amazon's customers by pushing the envelope in Natural Language Processing (NLP), Generative AI, Large Language Models (LLMs), Natural Language Understanding (NLU), Machine Learning (ML), Retrieval-Augmented Generation, Responsible AI, Agents, Evaluation, and Model Adaptation. As part of our AI team in Amazon AWS, you will work alongside internationally recognized experts to develop novel algorithms and modeling techniques to advance the state of the art in human language technology. Your work will directly impact millions of our customers in the form of products and services, as well as contributing to the wider research community. You will gain hands-on experience with Amazon's heterogeneous text and structured data sources, and large-scale computing resources to accelerate advances in language understanding.
The Science team at AWS Bedrock builds the science foundations of Bedrock, a fully managed service that makes high-performing foundation models available for use through a unified API. We are adamant about continuously learning state-of-the-art NLP/ML/LLM technology and exploring creative ways to delight our customers. In our daily job we are exposed to large-scale NLP needs and we apply rigorous research methods to respond to them with efficient and scalable innovative solutions. At AWS Bedrock, you'll experience the benefits of working in a dynamic, entrepreneurial environment while leveraging the resources of AWS, one of the world's leading cloud companies, and you'll be able to publish your work in top-tier conferences and journals. We are building a brand new team to help develop a new NLP service for AWS. You will have the opportunity to conduct novel research and influence the science roadmap and direction of the team. Come join this greenfield opportunity! The Amazon Bedrock team is part of Utility Computing (UC).
About the team
AWS Utility Computing (UC) provides product innovations, from foundational services such as Amazon's Simple Storage Service (S3) and Amazon Elastic Compute Cloud (EC2) to consistently released new product innovations that continue to set AWS's services and features apart in the industry. As a member of the UC organization, you'll support the development and management of Compute, Database, Storage, Internet of Things (IoT), Platform, and Productivity Apps services in AWS, including support for customers who require specialized security solutions for their cloud services.
Diverse Experiences
AWS values diverse experiences. Even if you do not meet all of the qualifications and skills listed in the job description, we encourage candidates to apply. If your career is just starting, hasn't followed a traditional path, or includes alternative experiences, don't let it stop you from applying.
Why AWS?
Amazon Web Services (AWS) is the world's most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating; that's why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses.
Inclusive Team Culture
Here at AWS, it's in our nature to learn and be curious. Our employee-led affinity groups foster a culture of inclusion that empowers us to be proud of our differences. Ongoing events and learning experiences, including our Conversations on Race and Ethnicity (CORE) and AmazeCon (gender diversity) conferences, inspire us to never stop embracing our uniqueness.
Mentorship & Career Growth
We're continuously raising our performance bar as we strive to become Earth's Best Employer. That's why you'll find endless knowledge-sharing, mentorship and other career-advancing resources here to help you develop into a better-rounded professional.
Work/Life Balance
We value work-life harmony. Achieving success at work should never come at the expense of sacrifices at home, which is why we strive for flexibility as part of our working culture. When we feel supported in the workplace and at home, there's nothing we can't achieve in the cloud.
Hybrid Work
We value innovation and recognize this sometimes requires uninterrupted time to focus on a build. We also value in-person collaboration and time spent face-to-face. Our team affords employees options to work in the office every day or in a flexible, hybrid work model near one of our U.S. Amazon offices.
We are open to hiring candidates to work out of one of the following locations: Seattle, WA, USA
US, WA, Seattle
Alexa Personality Fundamentals is chartered with infusing Alexa with a trustworthy, reliable, considerate, smart, and playful personality. Come join us in creating the future of personality-forward AI here at Alexa.
Key job responsibilities
As a Data Scientist with Alexa Personality, your work will involve machine learning, Large Language Models (LLMs), and other generative technologies. You will partner with engineers, applied scientists, voice designers, and quality assurance to ensure that Alexa can sing, joke, and delight our customers in every interaction. You will take a central role in defining our experimental roadmap, sourcing training data, authoring annotation criteria, and building automated benchmarks to track the improvement of Alexa's personality.
We are open to hiring candidates to work out of one of the following locations: Bellevue, WA, USA | Seattle, WA, USA
US, WA, Bellevue
The Artificial General Intelligence (AGI) team is looking for a passionate, talented, and inventive Applied Science Manager with a strong deep learning background, to lead the development of industry-leading technology with multimodal systems.
Key job responsibilities
As an Applied Science Manager with the AGI team, you will lead the development of novel algorithms and modeling techniques to advance the state of the art with multimodal systems. Your work will directly impact our customers in the form of products and services that make use of vision and language technology. You will leverage Amazon's heterogeneous data sources and large-scale computing resources to accelerate development with multimodal Large Language Models (LLMs) and Generative Artificial Intelligence (GenAI) in Computer Vision.
About the team
The AGI team has a mission to push the envelope with multimodal LLMs and GenAI in Computer Vision, in order to provide the best-possible experience for our customers.
We are open to hiring candidates to work out of one of the following locations: Bellevue, WA, USA | Seattle, WA, USA | Sunnyvale, CA, USA
US, WA, Bellevue
Do you enjoy solving complex problems, driving research innovation, and creating insightful models that tackle real-world challenges? Join Amazon's Modeling and Optimization team. Our science models and data-driven solutions continuously reshape Amazon's global supply chain - one of the most sophisticated networks in the world.
Key job responsibilities
In this role, you will use science to drive measurable improvements across customer experience, network speed, cost efficiency, safety, sustainability, and capital investment returns. You will collaborate with scientists to solve complex problems and with cross-functional teams to analyze systems and drive business value. You will develop optimization, simulation, and predictive models to identify improvement opportunities. You will develop innovative, scalable solutions. You will quantify expected improvements and evaluate trade-offs between competing objectives. You will communicate model insights to stakeholders and influence positive changes in Amazon's systems and operations.
A day in the life
Collaboration will be key - you will collaborate with scientists to design end-to-end solutions, work with business stakeholders to simplify and streamline processes, and partner with engineers to simplify systems and enhance their performance. The focus is on driving value through scientific thinking, technical knowledge, simplification, and cross-functional teamwork.
About the team
Our team of scientists specializes in network modeling, optimization, algorithms, control theory, machine learning, and related disciplines. Our focus is driving supply chain improvements through applied science. By analyzing data and building insightful models, we identify opportunities and influence positive change across Amazon's end-to-end systems and operations - from vendors to customers.
We are open to hiring candidates to work out of one of the following locations: Bellevue, WA, USA
US, MA, Boston
The Artificial General Intelligence (AGI) - Automations team is developing AI technologies to automate workflows and processes for browser automation, developers, and ops teams. As part of this, we are developing services and an inference engine for these automation agents, and techniques for reasoning, planning, and modeling workflows. If you are interested in a startup-mode team at Amazon building the next level of agents, then come join us. Scientists in AGI - Automations will develop cutting-edge multimodal LLMs to observe, model, and derive insights from manual workflows in order to automate them. You will work in a joint scrum with engineers for rapid invention, develop cutting-edge automation agent systems, and take them to launch for millions of customers.
Key job responsibilities
- Build automation agents by developing novel multimodal LLMs.
A day in the life
An Applied Scientist with the AGI team will support the science solution design, run experiments, research new algorithms, and find new ways of optimizing the customer experience, while setting examples for the team on good science practice and standards. Besides theoretical analysis and innovation, an Applied Scientist will also work closely with talented engineers and scientists to put algorithms and models into practice.
We are open to hiring candidates to work out of one of the following locations: Boston, MA, USA