Paper on graph database schemata wins best-industry-paper award

SIGMOD paper by Amazon researchers and collaborators presents a flexible data definition language that enables rapid development of complex graph databases.

Whereas a standard relational database stores data in linked tables, a graph database stores data in graphs, in which the edges represent relationships between data items. Graph databases are popular with customers for use cases like single-customer view, fraud detection, recommendations, and security, where you need to create relationships between data and quickly navigate these connections. Amazon Neptune is AWS’s graph database service; it is designed for scalability and availability and allows our customers to query billions of relationships in milliseconds.


In this blog post, we present joint work on a schema language for graph databases, which was carried out under the umbrella of the Linked Data Benchmarking Council (LDBC), a nonprofit organization that brings together leading organizations and academics from the graph database space. A schema is a way of defining the structure of a database — the data types permitted, the possible relationships between them, and the logical constraints upon them (such as uniqueness of entities).

This work is important to customers because it will allow them to describe and define the structures of their graphs in a way that is portable across vendors and makes building graph applications faster. We presented our work in a paper that won the best-industry-paper award at this year’s meeting of the Association for Computing Machinery's Special Interest Group on Management of Data (SIGMOD).

Labeled-property graphs

The labeled-property-graph (LPG) data model is a prominent choice for building graph applications. LPGs build upon three primitives to model graph-shaped data: nodes, edges, and properties. The figure below represents an excerpt from a labeled property graph in a financial-fraud scenario. Nodes are represented as green circles, edges are represented as directed arrows connecting nodes, and properties are enclosed in orange boxes.

The node with identifier 1, for instance, is labeled Customer and carries two properties, specifying the name with string value “Jane Doe” and a customerId. Both nodes 1 and 2 are connected to node 3, which represents a shared account with a fixed iban number; the two edges are marked with the label Owns, which specifies the nature of the relationship. Just like nodes, edges can carry properties. In this example, the property since specifies 2021-03-05 as the start date of ownership.

[Figure: Sample graph representing two customers that own a shared account.]
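
To make the example concrete, here is a minimal sketch in openCypher (one of the query languages supported by graph databases such as Neptune) that would create this graph. The customerId and iban values are placeholders, as the figure's exact values are not reproduced here.

    // Two customers (nodes 1 and 2 in the figure);
    // the customerId values are placeholders
    CREATE (jane:Customer {name: 'Jane Doe', customerId: 'C-001'})
    CREATE (john:Customer {name: 'John Doe', customerId: 'C-002'})
    // The shared account (node 3); the iban value is a placeholder
    CREATE (acct:Account {iban: 'DE89370400440532013000'})
    // Both customers own the account; since holds the ownership start date
    // (written as a plain string here, since temporal types vary by engine)
    CREATE (jane)-[:Owns {since: '2021-03-05'}]->(acct)
    CREATE (john)-[:Owns {since: '2021-03-05'}]->(acct)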

Relational vs. graph schema

One property that differentiates graph databases from, for instance, relational databases — where the schema needs to be defined up front and is often hard to change — is that they do not require explicit schema definitions. To illustrate the difference, compare the graph data model from the figure above to a comparable relational-database schema, shown below, with the primary-key attributes underlined.

[Figure: A possible relational-database model for the scenario above.]

Schema-level information of the relational model — table and attribute names — is represented as part of the data itself in graphs. Put differently, by inserting or changing graph elements such as node labels, edge labels, and property names, one can extend or change the schema implicitly, without having to run (often tedious) schema manipulations such as ALTER TABLE commands.


As an example, in a graph database one can simply add an edge with the previously unseen label Knows to connect the two nodes representing Jane Doe and John Doe or introduce nodes with new labels (such as FinancialTransaction) at any time. Such extensions would require table manipulations in our relational sample schema.
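
As a sketch of what such an extension could look like in openCypher — note that no DDL statement precedes either update; the matching predicates and the FinancialTransaction properties are illustrative:

    // Connect the two existing customers with a previously unseen edge label
    MATCH (jane:Customer {name: 'Jane Doe'}), (john:Customer {name: 'John Doe'})
    CREATE (jane)-[:Knows]->(john);

    // Introduce a node with a brand-new label just as easily
    CREATE (:FinancialTransaction {amount: 250.00, executedOn: '2021-04-01'});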

The absence of an explicit schema is a key differentiator that lowers the burden of getting started with data modeling and application building in graphs: following a pay-as-you-go paradigm, developers building new graph applications can start out with a small portion of the data and insert new node types, properties, and interconnecting edges as their applications evolve, without having to maintain explicit schemata.

Schemata evolution

While this contributes to the initial velocity of building graph applications, what we often see is that — throughout the life cycle of graph applications — it becomes desirable to shift from implicit to explicit schemata. Once the database has been seeded with an initial (and typically yet-to-be-refined) version of the graph data, there is a demand for what we call flexible-schema support. 

[Figure: Evolution of schema requirements throughout the graph application life cycle.]

In that stage, the schema primarily plays a descriptive role: knowing the most important node/edge labels and their properties tells application developers what to expect in the data and guides them in writing queries. As the application life cycle progresses, the graph data model stabilizes, and developers may benefit from a more rigorous, prescriptive schema approach that strongly asserts shapes and logical invariants in the graph.

PG-Schema

Motivated by these requirements, our SIGMOD publication proposes a data definition language (DDL) called PG-Schema, which aims to expose the full breadth of schema flexibility to users. The figure below shows a visual representation of such a graph schema, as well as the corresponding syntactical representation, as it could be provided by a data architect or application developer to formally define the schema of our fraud graph example.

[Figure: Schema for the graph data from the graph database above. Left: graphical representation; right: corresponding data definition language.]

In this example, the overall schema is composed of the six elements enclosed in the top-level GRAPH TYPE definition (a syntactic sketch of the full definition follows the list):

  • The first three lines of the GRAPH TYPE definition introduce so-called node types: person, customer, and account; they describe structural constraints on the nodes in the graph data. The customer node type, for instance, tells us that there can be nodes with label Customer, which carry a property customerId and are derived from a more general person node type. Concretely, this means that nodes with the label Customer inherit the properties name and birthDate defined in node type person. Note that properties also specify a data type (such as string, date, or numerical values) and may be marked as optional.
  • Edge types build upon node types and specify the type and structure of edges that connect nodes. Our example defines a single edge type connecting nodes of node type customer with nodes of type account. Informally speaking, this tells us that Customer-labeled nodes in our data graph can be connected to Account-labeled nodes via an edge labeled Owns, which is annotated with a property since, pointing to a date value.
  • The last two lines specify additional constraints that go beyond the mere structure of our graph. The KEY constraint demands that the value of the iban property uniquely identifies an account, i.e., no two Account-labeled nodes can share the same IBAN number. This can be thought of as the equivalent of primary keys in relational databases, which enforce the uniqueness of one or more attributes within the scope of a given table. The second constraint enforces that every account has at least one owner, which is reminiscent of a foreign-key constraint in relational databases.
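
Putting these pieces together, the graph type could be written roughly as follows. This is a sketch reconstructed from the description above, not the paper's verbatim listing; the type names, the customerId data type, and the exact constraint syntax are assumptions.

    CREATE GRAPH TYPE fraudGraphType STRICT {
      // Node types; customerType inherits name and birthDate from personType
      (personType: Person {name STRING, OPTIONAL birthDate DATE}),
      (customerType: personType & Customer {customerId STRING}),
      (accountType: Account {iban STRING}),
      // Edge type: customers own accounts, annotated with a start date
      (:customerType)-[ownsType: Owns {since DATE}]->(:accountType),
      // Key constraint: no two accounts may share the same iban
      FOR (a: accountType) EXCLUSIVE a.iban,
      // Participation constraint: every account has at least one owner
      FOR (a: accountType) MANDATORY (:customerType)-[:ownsType]->(a)
    }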

Also note the keyword STRICT in the graph type definition: it enforces that all elements in the graph obey one of the types defined in the graph type body and that all constraints are satisfied. Concretely, it implies that our graph can contain only Person-, Customer-, and Account-labeled nodes with the respective sets of properties, that the only possible edge type is the Owns edge between customers and accounts, and that the key and ownership constraints must be satisfied. Hence, the STRICT keyword can be understood as a mechanism for implementing the schema-first paradigm, as it is maximally prescriptive and strongly constrains the graph structure.


To account for flexible- and partial-schema use cases, PG-Schema offers a LOOSE keyword as an alternative to STRICT, which comes with a more relaxed interpretation: graph types that are defined as LOOSE allow for node and edge types that are not explicitly listed in the graph type definition. Mechanisms similar to the STRICT vs. LOOSE distinction at the graph type level can be found at other levels of the language.

For instance, the keyword OPEN (vs. the implicit default, CLOSED) can be used to either partially or fully specify the set of properties that can be carried by nodes with a given label (e.g., expressing that a Person-labeled node must have a name but may carry an arbitrary set of other, unknown properties, without requiring enumeration of the entire set). The flexibility arising from these mechanisms makes it easy to define partial schemata that can be adjusted and refined incrementally, to capture the schema evolution requirements sketched above.
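
A sketch of how these keywords might combine during early development; the placement of OPEN inside the property list is an assumption based on the description above:

    CREATE GRAPH TYPE exploratoryGraphType LOOSE {
      // OPEN in the property list: a Person node must have a name but may
      // carry arbitrary additional, not-yet-modeled properties
      (personType: Person {name STRING, OPEN})
    }
    // Because the graph type is LOOSE, nodes and edges whose types are not
    // listed above (say, FinancialTransaction nodes) are still admitted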

Not only does PG-Schema provide a concrete proposal for a graph schema and constraint language, but it also aims to raise awareness of the importance of a standardized approach to graph schemata. The concepts and ideas in the paper were codeveloped by major companies and academics in the graph space, and there are ongoing initiatives within the LDBC that aim toward a standardization of these concepts.

In particular, the LDBC has close ties with the ISO committee that is currently in the process of standardizing a new graph query language (GQL). As some GQL ISO committee members are coauthors of the PG-Schema paper, there has been a continuous bilateral exchange, and it is anticipated that future versions of the GQL standard will include a rich DDL, which may pick up concepts and ideas presented in the paper.
