When unsupervised training pays off in natural-language processing

At smaller vocabulary sizes, tokenizers trained on unannotated data work best.

The first step in most natural-language-processing applications is tokenization, or breaking input strings into semantically relevant units. In many applications, these units are smaller than individual words. For instance, search results that are a good match for the query “word processing” might use the phrase “word processor”, which shares some but not all of the query’s subword units.
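As a rough illustration (with hand-picked segmentations, not the output of any particular tokenizer), the overlap a subword-level match can exploit might look like this:

```python
# Illustrative only: hypothetical subword splits for a query and a candidate result.
query  = ["word", "process", "ing"]   # "word processing"
result = ["word", "process", "or"]    # "word processor"

shared = set(query) & set(result)
print(shared)  # {'word', 'process'} -- the units a subword-aware search can match on
```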

Traditionally, tokenizers have been built or trained using manually compiled lexicons — which contain information about words’ prefixes, stems, and suffixes — and data that has been hand-tokenized by human annotators. We refer to this method as language-specific tokenization (LST).

More recently, however, natural-language-processing researchers have been experimenting with systems that learn tokenization units by analyzing large bodies of unlabeled data. The obvious advantage of this approach is that it doesn’t depend on lexicons or manually tokenized corpora, which have to be created independently for every language or domain in which we want to apply the tokenizer.

Language-independent tokenization (LIT) systems, which are trained without the benefit of manually compiled lexicons, sometimes learn illogical word breaks (k/id, to/ys) that language-specific tokenization (LST) systems avoid (kid, toy/s). But when LIT tokens are embedded, or converted into fixed-length vectors, they still prove useful for search tasks that match texts according to semantic content.
Credit: Glynis Condon

Moreover, because we do not rely on a precompiled, fixed dictionary, we have a better chance of accurately tokenizing words that the tokenizer has never seen before. We refer to this method as language-independent tokenization (LIT).

LIT has had some success in applications such as machine translation systems, which often have restricted vocabularies for reasons of processing speed. However, the relative benefits of LST and LIT in broader natural-language-processing (NLP) applications remain unclear.

In a paper accepted to the Language Resources and Evaluation Conference, which was to be held last week, we compare LST and LIT methods across eight languages (English, German, Spanish, Farsi, Italian, Japanese, Turkish, and Thai), with varying vocabulary sizes.

We find that while LST still tends to work better at larger vocabulary sizes, LIT is competitive — and in some languages, superior — at small (e.g., less than 50,000 subwords) vocabulary sizes. This suggests that LIT is a viable option for applications with limited vocabularies or for languages where well-organized lexical data is not readily available.

Semantic similarity

In our experiments, we tokenized the corpus for each language using both LIT and LST methods and learned subword embeddings over the tokenized corpora. An embedding is a representation of a string of text as a fixed-length vector — a point in a multidimensional space — such that embeddings of related words or phrases are close to each other in the space. Embeddings thus capture something of the text strings’ semantic content. To learn subword embeddings, we used the global vectors (GloVe) method.
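The GloVe objective is a weighted least-squares fit of vector dot products to log co-occurrence counts. Below is a minimal sketch of that fit in plain NumPy, with toy co-occurrence counts and made-up hyperparameters; it illustrates the idea rather than the paper's actual training setup.

```python
import numpy as np

rng = np.random.default_rng(0)
V, d = 50, 16                                 # toy vocabulary size and embedding dimension
# Toy co-occurrence counts X[i, j]; in practice these are accumulated over the
# tokenized corpus within a sliding context window.
X = rng.poisson(1.0, size=(V, V)) + np.eye(V, dtype=int)

W = 0.1 * rng.standard_normal((V, d))         # "main" subword vectors
W_tilde = 0.1 * rng.standard_normal((V, d))   # context vectors
b, b_tilde = np.zeros(V), np.zeros(V)

def glove_weight(x, x_max=100.0, alpha=0.75):
    """GloVe's weighting f(x): damps the influence of rare and very frequent pairs."""
    return np.where(x < x_max, (x / x_max) ** alpha, 1.0)

lr = 0.05
i_idx, j_idx = np.nonzero(X)                  # only nonzero co-occurrences contribute
for _ in range(50):                           # a few epochs of plain SGD
    for i, j in zip(i_idx, j_idx):
        diff = W[i] @ W_tilde[j] + b[i] + b_tilde[j] - np.log(X[i, j])
        g = glove_weight(X[i, j]) * diff      # weighted residual (constant 2 folded into lr)
        grad_wi, grad_wj = g * W_tilde[j], g * W[i]
        W[i]       -= lr * grad_wi
        W_tilde[j] -= lr * grad_wj
        b[i]       -= lr * g
        b_tilde[j] -= lr * g

subword_embeddings = W + W_tilde              # common convention: sum the two vector sets
```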

Next, we created word embeddings from the subword embeddings in three different ways: unweighted averaging; weighted averaging; and smoothed-inverse-frequency-based (SIF-based) weighting, which has previously been proposed for creating sentence embeddings from word embeddings.
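A word such as "processing" may be split into several subwords, so each word vector is a weighted combination of subword vectors. The sketch below shows the three composition schemes; the exact weights used for "weighted averaging" and the SIF smoothing constant `a` are our own illustrative choices, not values from the paper.

```python
import numpy as np

def compose_word_embedding(subwords, emb, counts, total, scheme="sif", a=1e-3):
    """Combine subword vectors into a single word vector.

    subwords: list of subword strings making up the word
    emb:      dict subword -> np.ndarray (learned subword embeddings)
    counts:   dict subword -> corpus frequency
    total:    total number of subword tokens in the corpus
    """
    vecs = np.stack([emb[s] for s in subwords])
    if scheme == "unweighted":          # plain average of subword vectors
        weights = np.ones(len(subwords))
    elif scheme == "weighted":          # assumption: weight by relative corpus frequency
        weights = np.array([counts[s] / total for s in subwords])
    elif scheme == "sif":               # smoothed inverse frequency: a / (a + p(subword))
        weights = np.array([a / (a + counts[s] / total) for s in subwords])
    else:
        raise ValueError(scheme)
    weights = weights / weights.sum()
    return weights @ vecs               # weighted average, shape (embedding_dim,)
```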

We then measured the semantic similarity between two words as the cosine similarity between the corresponding word embeddings. Finally, we computed the correlation between the predicted similarity scores and similarity ratings provided by human annotators for the same word pairs. A high degree of correlation would indicate that tokenization preserves words’ semantic information, which is desirable for any downstream NLP application that relies on the tokenization.
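Concretely, the evaluation step can be sketched as follows. We use Spearman rank correlation here because it is a common choice for word-similarity benchmarks; the specific correlation statistic is an assumption on our part.

```python
import numpy as np
from scipy.stats import spearmanr

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def evaluate(word_pairs, human_scores, word_vec):
    """word_pairs: list of (w1, w2); human_scores: annotator similarity ratings;
    word_vec: dict word -> composed word embedding (see the previous sketch)."""
    predicted = [cosine(word_vec[w1], word_vec[w2]) for w1, w2 in word_pairs]
    corr, _ = spearmanr(predicted, human_scores)
    return corr   # higher correlation = tokenization preserves more semantic information
```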

For LIT, we used two different approaches to tokenization. One is based on byte pair encoding (BPE), which was originally a data compression technique. BPE scours training texts for the most common symbol pair (in English, for instance, er is extremely common), which it represents using a single symbol. Then it repeats the process, continually adding new symbols that stand for longer and longer strings, up to some predefined limit.
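A minimal sketch of that merge loop, in the spirit of the widely cited reference implementation, is shown below; the toy corpus and the merge limit are illustrative.

```python
import re
from collections import Counter

def get_pair_counts(vocab):
    """Count adjacent symbol pairs over a vocabulary of space-separated symbol sequences."""
    pairs = Counter()
    for word, freq in vocab.items():
        symbols = word.split()
        for a, b in zip(symbols, symbols[1:]):
            pairs[(a, b)] += freq
    return pairs

def merge_pair(pair, vocab):
    """Replace every occurrence of the chosen pair with a single merged symbol."""
    pattern = re.compile(r"(?<!\S)" + re.escape(" ".join(pair)) + r"(?!\S)")
    return {pattern.sub("".join(pair), word): freq for word, freq in vocab.items()}

# Toy corpus: word frequencies, with words as character sequences plus an end-of-word marker.
vocab = {"l o w </w>": 5, "l o w e r </w>": 2, "n e w e r </w>": 6, "w i d e r </w>": 3}
num_merges = 10                         # the predefined limit on new symbols
for _ in range(num_merges):
    pairs = get_pair_counts(vocab)
    if not pairs:
        break
    best = max(pairs, key=pairs.get)    # most frequent adjacent pair
    vocab = merge_pair(best, vocab)
    print(best)                         # e.g. ('e', 'r') becomes the new symbol "er"
```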

The other approach is based on unigram language models (LMs). This approach begins with a repertory of individual symbols and common substrings and assembles them into new substrings according to their frequency of occurrence in some corpus. Again, the process ends when the number of substrings reaches a predefined limit.
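Both LIT variants are available in off-the-shelf toolkits. Below is a sketch using the SentencePiece library; the corpus path, model prefix, and vocabulary size are placeholders rather than the configuration used in the paper.

```python
import sentencepiece as spm

# Train a unigram-LM tokenizer on raw, unannotated text; swap model_type="bpe"
# for the byte-pair-encoding variant. "corpus.txt" is a placeholder path.
spm.SentencePieceTrainer.train(
    input="corpus.txt",
    model_prefix="lit_unigram",
    vocab_size=20000,            # the predefined limit on the subword inventory
    model_type="unigram",
)

sp = spm.SentencePieceProcessor(model_file="lit_unigram.model")
print(sp.encode("word processing", out_type=str))  # e.g. ['▁word', '▁process', 'ing']
```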

Variable vocabularies

We trained each of our three tokenization systems on different-sized subsets of the vocabularies for all eight languages. The LST models were trained on vocabularies ranging in size from 50,000 to 10 million words. The LM models were trained on vocabularies ranging from 20,000 to a million words.

Training BPE models is extremely time-consuming, so the largest subsets we could use had 100,000 words. The smallest had 20,000.
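A hedged sketch of that vocabulary-size sweep for the two LIT tokenizers follows; the endpoint sizes are the ones quoted above, the intermediate points are illustrative, and a real corpus large enough to support the largest vocabularies is assumed.

```python
import sentencepiece as spm

# Endpoints match the ranges quoted above; intermediate sizes are illustrative.
sweep = {
    "unigram": [20_000, 50_000, 100_000, 500_000, 1_000_000],
    "bpe": [20_000, 50_000, 100_000],   # BPE training cost caps its sweep at 100,000
}
for model_type, sizes in sweep.items():
    for vocab_size in sizes:
        spm.SentencePieceTrainer.train(
            input="corpus.txt",          # placeholder path to the raw training text
            model_prefix=f"lit_{model_type}_{vocab_size}",
            vocab_size=vocab_size,
            model_type=model_type,
        )
```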

In our experiments, we found that an LST tokenizer trained on a vocabulary of a million words or more generally offered the best performance. But there were three exceptions.

One was German, where the LM model based on a million-word vocabulary performed best. The other two were Farsi and Turkish, where, remarkably, the BPE models trained on 100,000 and 50,000 words, respectively, performed best. We suspect that this is because all three languages are highly “agglutinative”: that is, they can accommodate ad hoc or infrequent compounds that won’t show up in standard lexicons.

In general, however, at vocabularies of 100,000 words or fewer, both LIT models outperformed the LST model across the board. This suggests that for under-resourced languages or applications that rely on limited vocabularies, LIT may be an attractive alternative to LST.
