nats-spike

I was curious to see if a NATS queue could be used to dispatch API requests across different backends. This would allow a single thin API layer to proxy requests into a queue, where they are consumed, handled, and replied to by individual services, regardless of their language, framework, or implementation decisions.

In this example, we have:

  1. a central NATS server using gnatsd
  2. a thin HTTP API exposed on port 3000 that responds to GET / and queues Protobuf messages
  3. 2 x NATS consumers (1 x NodeJS, 1 x Ruby) which await messages, decode them via Protobuf, do the work, and respond with a Protobuf message (a sketch of this flow follows the list)

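To make the flow concrete, here is a minimal sketch of the request/reply pattern described above, written against the current nats.js client and Express. The subject name (user.create), the /users route, the queue-group name, and the JSON payloads are illustrative assumptions; the spike itself encodes messages with Protobuf rather than JSON.

```js
// Sketch only: subject name, route, queue-group name, and JSON payloads are assumptions.
import { connect, JSONCodec } from "nats";
import express from "express";

const jc = JSONCodec();
const nc = await connect({ servers: "nats://localhost:4222" });

// --- Thin API layer: proxies each HTTP request into the NATS queue ---
const app = express();
app.use(express.json());

app.post("/users", async (req, res) => {
  // Forward the body to whichever consumer picks it up; wait up to 2s for its reply.
  const reply = await nc.request("user.create", jc.encode(req.body), { timeout: 2000 });
  res.json(jc.decode(reply.data));
});

app.listen(3000);

// --- Consumer (NodeJS or Ruby in the spike), normally a separate process ---
// Members of the "user-workers" queue group share the load: NATS delivers each
// message to only one member, so consumers in different languages can coexist.
const sub = nc.subscribe("user.create", { queue: "user-workers" });
for await (const msg of sub) {
  const user = jc.decode(msg.data);              // decode the CreateUser request
  console.log("creating user", user.email);      // stand-in for real work
  const id = Math.floor(Math.random() * 1e6);
  msg.respond(jc.encode({ success: true, id })); // reply to the requester
}
```

Running two copies of the consumer block against the same queue group, one in NodeJS and one in Ruby, is what lets both backends serve the same endpoint.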
Protobuf

Protobuf is used to define the shared models and structures used throughout all microservices. The types can be found in proto/*.proto.

Language-specific stubs and bindings live alongside the definitions, e.g. proto/ruby/user_pb.rb. These are generated by the Docker application found in proto/. Any changes to the .proto files require regenerating the stubs via the proto docker-compose service. This would also need to happen on deployment, with the resulting stubs bundled into each microservice as part of its individual deployment.

  • User is a simple User model, with id, firstName, lastName, and email fields.
  • CreateUserRequest is a Request structure, which passes a User model for signup purposes.
  • CreateUserResponse is a Response structure, which indicates whether the creation was successful and, if so, the user's new ID. (A sketch of these definitions follows.)

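Based on the field names above, the definitions look roughly like the sketch below; field numbers and scalar types are assumptions, so treat proto/user.proto as the canonical version.

```protobuf
// Rough sketch of proto/user.proto based on the fields described above;
// field numbers and types are assumptions, not copied from the repo.
syntax = "proto3";

message User {
  string id        = 1;
  string firstName = 2;
  string lastName  = 3;
  string email     = 4;
}

message CreateUserRequest {
  User user = 1;
}

message CreateUserResponse {
  bool   success = 1;
  string id      = 2; // the new user's ID, populated when success is true
}
```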
Benefits

Some advantages:

  • Encourages flexibility in which service consumes from the queues. In this example, both the NodeJS and Ruby consumers handle the same request. If you ever wanted to refactor this endpoint or replace it with a more performant implementation, the new service could consume from the queue alongside the existing ones, without a hard cutover for all requests hitting that route.
  • 1 central NATS cluster replaces the need for N internal services to have their own load balancers
  • Inter-service communication is handled via strongly-typed Protobuf messages (i.e. CreateUserRequest, CreateUserResponse, and a User model)
  • Protobuf definitions are centralized, so there's a canonical definition of the domain, along with code generated stubs and bindings for any language that may consume them. You don't need to maintain a representation of your data model in every microservice, or worry about integration issues should one service fall behind.

Requirements

  1. Docker
  2. Docker Compose

Usage

  1. make down to tear down the stack
  2. make up to start up the stack, or make down up to force-start it
  3. make test to issue a simple POST request to the REST API
  4. make load-test to issue an onslaught of POST requests

Architecture

(Architecture diagram)
