Bringing the power of deep learning to data in tables

Amazon’s TabTransformer model is now available through SageMaker JumpStart and in the official repository of the Keras open-source library.

In recent years, deep neural networks have been responsible for most top-performing AI systems. In particular, natural-language processing (NLP) applications are generally built atop Transformer-based language models such as BERT.

One exception to the deep-learning revolution has been applications that rely on data stored in tables, where machine learning approaches based on decision trees have tended to work better.

At Amazon Web Services, we have been working to extend Transformers from NLP to table data with TabTransformer, a novel deep-learning architecture for supervised and semi-supervised modeling of tabular data.


Starting today, TabTransformer is available through Amazon SageMaker JumpStart, where it can be used for both classification and regression tasks. TabTransformer can be accessed through the SageMaker JumpStart UI inside SageMaker Studio or programmatically with the SageMaker Python SDK. To get started with TabTransformer on SageMaker JumpStart, please refer to the program documentation.

We are also thrilled to see that TabTransformer has gained attention from people across industries: it has been incorporated into the official repository of Keras, a popular open-source software library for working with deep neural networks, and it has been featured in posts on Towards Data Science and Medium. We also presented a paper on the work at the ICLR 2021 Workshop on Weakly Supervised Learning.

The TabTransformer solution

TabTransformer uses Transformers to generate robust data representations — embeddings — for categorical variables, or variables that take on a finite set of discrete values, such as months of the year. Continuous variables, which take on numerical values, are processed in a parallel stream.
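As a rough illustration of that split, the sketch below looks up one embedding vector per categorical column and concatenates the result with the normalized continuous features. The column names, dimensions, and random embedding tables are invented for illustration; in the real model, the categorical embeddings would first pass through Transformer layers before concatenation.

```python
# Minimal sketch (not the actual TabTransformer implementation) of the
# two-stream input: categorical columns go through embedding lookups,
# continuous columns bypass them and join downstream.
import numpy as np

rng = np.random.default_rng(0)

n_samples, embed_dim = 4, 8
cardinalities = {"month": 12, "education": 5}   # hypothetical categorical columns
n_continuous = 3                                # e.g. age, balance, duration

# One embedding table per categorical column (random here; learned in training,
# then contextualized by the Transformer layers).
tables = {col: rng.normal(size=(card, embed_dim))
          for col, card in cardinalities.items()}

cat_inputs = {"month": rng.integers(0, 12, n_samples),
              "education": rng.integers(0, 5, n_samples)}
cont_inputs = rng.normal(size=(n_samples, n_continuous))

# Look up embeddings and concatenate with the normalized continuous stream.
cat_embeds = np.concatenate([tables[c][cat_inputs[c]] for c in cardinalities],
                            axis=1)
cont_norm = (cont_inputs - cont_inputs.mean(0)) / (cont_inputs.std(0) + 1e-6)
mlp_input = np.concatenate([cat_embeds, cont_norm], axis=1)

print(mlp_input.shape)  # (4, 19): 2 columns x 8 dims + 3 continuous features
```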

We exploit a successful methodology from NLP in which a model is pretrained on unlabeled data, to learn a general embedding scheme, then fine-tuned on labeled data, to learn a particular task. We find that this approach increases the accuracy of TabTransformer, too.

In experiments on 15 publicly available datasets, we show that TabTransformer outperforms the state-of-the-art deep-learning methods for tabular data by at least 1.0% on mean AUC, the area under the receiver-operating-characteristic curve, which plots true-positive rate against false-positive rate. We also show that it matches the performance of tree-based ensemble models.


In the semi-supervised setting, when labeled data is scarce, DNNs generally outperform decision-tree-based models, because they are better able to take advantage of unlabeled data. In our semi-supervised experiments, all of the DNNs outperformed decision trees, but with our novel unsupervised pre-training procedure, TabTransformer demonstrated an average 2.1% AUC lift over the strongest DNN benchmark.

Finally, we also demonstrate that the contextual embeddings learned from TabTransformer are highly robust against both missing and noisy data features and provide better interpretability.

Tabular data

To get a sense of the problem our method addresses, consider a table where the rows represent different samples and the columns represent both sample features (predictor variables) and the sample label (the target variable). TabTransformer takes the features of each sample as input and generates an output to best approximate the corresponding label.

In a practical industry setting, where the labels are partially available (i.e., semi-supervised learning scenarios), TabTransformer can be pre-trained on all the samples without any labels and fine-tuned on the labeled samples.

Additionally, companies usually have one large table (e.g., describing customers/products) that contains multiple target variables, and they are interested in analyzing this data in multiple ways. TabTransformer can be pre-trained on the large number of unlabeled samples once and fine-tuned multiple times for multiple target variables.
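That reuse pattern can be sketched as follows, with a random stand-in for the pretrained shared representation and invented target names ("churn", "upsell"); this is an assumed workflow for illustration, not code from the paper.

```python
# Sketch: compute a shared representation once, then fine-tune one
# lightweight head per target variable on top of it.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)

# Stand-in for embeddings produced once by a pretrained TabTransformer encoder.
shared_repr = rng.normal(size=(200, 16))

# Two hypothetical target variables defined on the same table.
targets = {
    "churn": (shared_repr[:, 0] + 0.1 * rng.normal(size=200) > 0).astype(int),
    "upsell": (shared_repr[:, 1] + 0.1 * rng.normal(size=200) > 0).astype(int),
}

# One head per target, all trained on the same frozen representation.
heads = {name: LogisticRegression().fit(shared_repr, y)
         for name, y in targets.items()}

for name in targets:
    print(name, heads[name].score(shared_repr, targets[name]))
```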

The architecture of TabTransformer is shown below. In our experiments, we use standard feature-engineering techniques to transform data types such as text, zip codes, and IP addresses into either numeric or categorical features.

The architecture of TabTransformer.

Pretraining procedures

We explore two different types of pre-training procedures: masked language modeling (MLM) and replaced-token detection (RTD). In MLM, for each sample, we randomly select a certain portion of features to be masked and use the embeddings of the other features to reconstruct the masked features. In RTD, for each sample, instead of masking features, we replace them with random values chosen from the same columns.
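The two corruptions can be sketched on a matrix of integer-encoded categorical features as follows. The mask sentinel, corruption rate, and data here are illustrative assumptions; the sketch only produces the corrupted inputs and targets, not the reconstruction or detection models themselves.

```python
# Hedged sketch of the two pretraining corruptions: MLM masks cells,
# RTD replaces them with random values drawn from the same column.
import numpy as np

rng = np.random.default_rng(1)
X = rng.integers(0, 10, size=(6, 5))   # 6 samples, 5 categorical columns
MASK = -1                              # sentinel id for masked cells

def mlm_corrupt(X, rate=0.3, rng=rng):
    """Masked language modeling: hide `rate` of cells; the model must
    reconstruct the original values from the remaining features."""
    mask = rng.random(X.shape) < rate
    return np.where(mask, MASK, X), mask   # `mask` marks reconstruction targets

def rtd_corrupt(X, rate=0.3, rng=rng):
    """Replaced-token detection: swap `rate` of cells for random values from
    the same column; the model must flag which cells were replaced."""
    mask = rng.random(X.shape) < rate
    replacements = np.stack([rng.choice(X[:, j], size=X.shape[0])
                             for j in range(X.shape[1])], axis=1)
    return np.where(mask, replacements, X), mask   # `mask` is the detection label

X_mlm, mlm_targets = mlm_corrupt(X)
X_rtd, rtd_targets = rtd_corrupt(X)
```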

In addition to comparing TabTransformer to baseline models, we conducted a study to demonstrate the interpretability of the embeddings produced by our contextual-embedding component.

In that study, we took contextual embeddings from different layers of the Transformer and computed a t-distributed stochastic neighbor embedding (t-SNE) to visualize their similarity in function space. More precisely, after training TabTransformer, we pass the categorical features in the test data through our trained model and extract all contextual embeddings (across all columns) from a certain layer of the Transformer. The t-SNE algorithm is then used to reduce each embedding to a 2-D point in the t-SNE plot.
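The projection step can be sketched with scikit-learn's TSNE; the embeddings below are random stand-ins for the contextual embeddings a trained TabTransformer would produce, and the sizes are illustrative.

```python
# Sketch of the visualization step: one embedding vector per category value,
# reduced to 2-D points for plotting.
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(2)

# Suppose we collected one 32-dim contextual embedding for each of
# 30 category values, across all categorical columns.
embeddings = rng.normal(size=(30, 32))

points_2d = TSNE(n_components=2, perplexity=5, init="random",
                 random_state=0).fit_transform(embeddings)
print(points_2d.shape)  # (30, 2)
```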

T-SNE plots of learned embeddings for categorical features in the dataset BankMarketing. Left: The embeddings generated from the last layer of the Transformer. Center: The embeddings before being passed into the Transformer. Right: The embeddings learned by the model without the Transformer layers.

The figure above shows the 2-D visualization of embeddings from the last layer of the Transformer for the BankMarketing dataset. We can see that semantically similar classes are close to each other and form clusters (annotated by a set of labels) in the embedding space.

For example, all of the client-based features (colored markers), such as job, education level, and marital status, stay close to the center, and non-client-based features (gray markers), such as month (last contact month of the year) and day (last contact day of the week), lie outside the central area. In the bottom cluster, the embedding of having a housing loan stays close to that of having defaulted, while the embeddings of being a student, single marital status, not having a housing loan, and tertiary education level are close to each other.


The center figure is the t-SNE plot of embeddings before being passed through the Transformer (i.e., from layer 0). The right figure is the t-SNE plot of the embeddings the model produces when the Transformer layers are removed, converting it into an ordinary multilayer perceptron (MLP). In those plots, we do not observe the types of patterns seen in the left plot.

Finally, we conduct extensive experiments on 15 publicly available datasets, using both supervised and semi-supervised learning. In the supervised-learning experiment, TabTransformer matched the performance of the state-of-the-art gradient-boosted decision-tree (GBDT) model and significantly outperformed the prior DNN models TabNet and Deep VIB.

Model name                       Mean AUC (%)
TabTransformer                   82.8 ± 0.4
MLP                              81.8 ± 0.4
Gradient-boosted decision trees  82.9 ± 0.4
Sparse MLP                       81.4 ± 0.4
Logistic regression              80.4 ± 0.4
TabNet                           77.1 ± 0.5
Deep VIB                         80.5 ± 0.4

Model performance with supervised learning. The evaluation metric is the mean ± standard deviation of the AUC score over the 15 datasets for each model; larger is better. The top two results are those of gradient-boosted decision trees and TabTransformer.

In the semi-supervised-learning experiment, we pretrain two TabTransformer models on the entire unlabeled set of training data, using the MLM and RTD methods respectively; then we fine-tune both models on labeled data.

As baselines, we use the semi-supervised learning methods pseudo labeling and entropy regularization to train both a TabTransformer network and an ordinary MLP. We also train a gradient-boosted-decision-tree model using pseudo-labeling and an MLP using a pretraining method called the swap-noise denoising autoencoder.
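Pseudo-labeling itself is simple to sketch: train on the few labeled rows, label the unlabeled rows where the model is confident, and retrain on the union. The model choice (logistic regression), data, and confidence threshold below are illustrative assumptions, not the paper's experimental setup.

```python
# Hedged sketch of pseudo-labeling, one of the semi-supervised baselines.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)

X = rng.normal(size=(500, 8))
y = (X[:, 0] - X[:, 1] > 0).astype(int)   # synthetic ground truth
labeled = np.zeros(500, dtype=bool)
labeled[:50] = True                       # only 50 labeled points

# Step 1: train on the labeled subset only.
model = LogisticRegression().fit(X[labeled], y[labeled])

# Step 2: pseudo-label the unlabeled pool where the model is confident.
proba = model.predict_proba(X[~labeled])
confident = proba.max(axis=1) > 0.95
pseudo_y = proba.argmax(axis=1)[confident]

# Step 3: retrain on labeled data plus confident pseudo-labels.
X_aug = np.concatenate([X[labeled], X[~labeled][confident]])
y_aug = np.concatenate([y[labeled], pseudo_y])
model = LogisticRegression().fit(X_aug, y_aug)
print(model.score(X, y))
```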

# Labeled data        50          200         500
TabTransformer-RTD    66.6 ± 0.6  70.9 ± 0.6  73.1 ± 0.6
TabTransformer-MLM    66.8 ± 0.6  71.0 ± 0.6  72.9 ± 0.6
ER-MLP                65.6 ± 0.6  69.0 ± 0.6  71.0 ± 0.6
PL-MLP                65.4 ± 0.6  68.8 ± 0.6  71.0 ± 0.6
ER-TabTransformer     62.7 ± 0.6  67.1 ± 0.6  69.3 ± 0.6
PL-TabTransformer     63.6 ± 0.6  67.3 ± 0.7  69.3 ± 0.6
DAE                   65.2 ± 0.5  68.5 ± 0.6  71.0 ± 0.6
PL-GBDT               56.5 ± 0.5  63.1 ± 0.6  66.5 ± 0.7

Semi-supervised-learning results on six datasets, each with more than 30,000 unlabeled data points, and different numbers of labeled data points. The evaluation metric is mean AUC, in percentage.

# Labeled data        50          200         500
TabTransformer-RTD    78.6 ± 0.6  81.6 ± 0.5  83.4 ± 0.5
TabTransformer-MLM    78.5 ± 0.6  81.0 ± 0.6  82.4 ± 0.5
ER-MLP                79.4 ± 0.6  81.1 ± 0.6  82.3 ± 0.6
PL-MLP                79.1 ± 0.6  81.1 ± 0.6  82.0 ± 0.6
ER-TabTransformer     77.9 ± 0.6  81.2 ± 0.6  82.1 ± 0.6
PL-TabTransformer     77.8 ± 0.6  81.0 ± 0.6  82.1 ± 0.6
DAE                   78.5 ± 0.7  80.7 ± 0.6  82.2 ± 0.6
PL-GBDT               73.4 ± 0.7  78.8 ± 0.6  81.3 ± 0.6

Semi-supervised-learning results on nine datasets, each with fewer than 30,000 data points, and different numbers of labeled data points. The evaluation metric is mean AUC, in percentage.

To gauge relative performance with different amounts of unlabeled data, we split the 15 datasets into two subsets. The first consists of the six datasets that contain more than 30,000 data points; the second includes the remaining nine datasets.

When the amount of unlabeled data is large, TabTransformer-RTD and TabTransformer-MLM significantly outperform all the other competitors. In particular, the TabTransformer-RTD/MLM improvements are at least 1.2%, 2.0%, and 2.1% on mean AUC for the scenarios of 50, 200, and 500 labeled data points, respectively. When the amount of unlabeled data is smaller, as the second table shows, TabTransformer-RTD still outperforms most of its competitors, but with a smaller margin of improvement.

Acknowledgments: Ashish Khetan, Milan Cvitkovic, Zohar Karnin
