What is BLAZE?

BLAZE is an open-source project for creating customizable and composable Natural Language Understanding (NLU) inference pipelines. The vision of BLAZE is to make it easier to create, benchmark, and compare the various components of an NLU inference pipeline – thereby making it easier to integrate NLU use cases into existing applications and democratizing the usage of NLU.

* Note that the UI is under development and will be released later in the open-source repository.


Natural Language Understanding (NLU) inference pipelines reuse many of the same components arranged in different orders. The purpose each component serves varies from use case to use case. However, these components are not standardized in terms of their inputs, outputs, and hardware requirements. As a result, it is very difficult to interchange and combine them without introducing significant amounts of glue code, and this lack of flexibility makes it hard to compose, modify, and extend pipelines.

To solve this problem, BLAZE allows for the modular creation and composition of NLU pipelines. Each component of a pipeline can be implemented as a "building block" (for example, a microservice). These building blocks have standardized inputs and outputs, so they can easily be assembled in varying orders. The order and choice of blocks yield different pipelines, each built for a unique use case.
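To make the idea concrete, here is a minimal sketch (not BLAZE's actual API) of how building blocks with a standardized payload format could be composed into a pipeline; the `Block`, `Pipeline`, and `Payload` names are illustrative assumptions.

```python
from typing import Any, Callable, Dict, List

# Assumed standardized input/output format: a plain dict payload.
Payload = Dict[str, Any]


class Block:
    """A pipeline 'building block' that maps one payload to another."""

    def __init__(self, name: str, fn: Callable[[Payload], Payload]):
        self.name = name
        self.fn = fn

    def __call__(self, payload: Payload) -> Payload:
        return self.fn(payload)


class Pipeline:
    """Compose blocks in any order; each block's output feeds the next."""

    def __init__(self, blocks: List[Block]):
        self.blocks = blocks

    def run(self, payload: Payload) -> Payload:
        for block in self.blocks:
            payload = block(payload)
        return payload


# Two toy blocks; because inputs/outputs are standardized, they can be
# reordered or swapped for other implementations without glue code.
lowercase = Block("lowercase", lambda p: {**p, "text": p["text"].lower()})
tokenize = Block("tokenize", lambda p: {**p, "tokens": p["text"].split()})

pipeline = Pipeline([lowercase, tokenize])
result = pipeline.run({"text": "Hello BLAZE"})
print(result["tokens"])  # ['hello', 'blaze']
```

Swapping `tokenize` for a different tokenizer block, or inserting an intent-classification block after it, changes the pipeline without touching the other components.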

BLAZE also allows developers to benchmark pipelines against standard datasets and to compare the performance of two different pipelines.
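As a rough sketch of what such benchmarking involves (this is not BLAZE's API; the `benchmark` helper and toy classifiers below are illustrative assumptions), two pipelines can be scored against the same labeled dataset and their accuracies compared:

```python
from typing import Callable, Dict, List


def benchmark(pipeline_fn: Callable[[str], str],
              dataset: List[Dict[str, str]]) -> float:
    """Return accuracy of a pipeline over (input, expected) pairs."""
    correct = sum(
        1 for ex in dataset if pipeline_fn(ex["input"]) == ex["expected"]
    )
    return correct / len(dataset)


# A tiny labeled dataset standing in for a standard benchmark set.
dataset = [
    {"input": "good movie", "expected": "positive"},
    {"input": "terrible plot", "expected": "negative"},
]

# Two toy "pipelines": naive keyword classifiers standing in for
# real NLU pipelines built from different blocks.
def pipeline_a(text: str) -> str:
    return "positive" if "good" in text else "negative"

def pipeline_b(text: str) -> str:
    return "positive"  # always predicts one class

print(benchmark(pipeline_a, dataset))  # 1.0
print(benchmark(pipeline_b, dataset))  # 0.5
```

Running both pipelines over the same dataset with the same metric makes the comparison apples-to-apples, which is the point of standardized benchmarking.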

Getting Started

This document is a good starting point. Users can set up BLAZE on macOS or Linux (e.g., Ubuntu 20.04). On other OS platforms, a Linux VM can be used.


We welcome feedback, questions, and issue reports.


We welcome contributions. A good way to start is to run and test the examples by following the instructions here. Improving the documentation is also a great first contribution. The contributor guidelines can be found here.