Sample Application Overview

Simon Prickett, Principal Developer Advocate at Redis

In this course, we'll look at how to use Redis as a data store and cache in the context of a sample application. Imagine that we're building a sort of social network application where users can "check in" at different locations and give them a star rating… from 0 for an awful experience through 5 to report that they had the best time ever there!

When designing our application, we determined that there's a need to manage data about three main entities:

  • Users
  • Locations
  • Checkins

Let's look at what we're storing about each of these entities. As we're using Redis as our only data store, we'll also consider how they map to Redis data types...

Users

We'll represent each user as a flat map of name/value pairs with no nested objects. As we'll see later on, this maps nicely to a Redis Hash. Here's a JSON representation of the schema we'll use to represent each user:

{
  "id": 99,
  "firstName": "Isabella",
  "lastName": "Pedersen",
  "email": "isabella.pedersen@example.com",
  "password": "xxxxxx1",
  "numCheckins": 8073,
  "lastCheckin": 1544372326893,
  "lastSeenAt": 138
}

We've given each user an ID and we're storing basic information about them. Also, we'll hash their password with bcrypt when we load the sample data into Redis.

For each user, we'll keep track of the total number of checkins that they've submitted to the system, and the timestamp and location ID of their most recent checkin so that we know where and when they last used the system.
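
To make the Hash mapping concrete, here's a minimal sketch (not the course's actual loader code) of how a user like the one above could be written to and read back from a Redis Hash with ioredis, hashing the password with bcrypt first. The key name ncc:users:<id> is just an illustrative choice.

// user-loader-sketch.js - illustrative only; the key name is an assumption
const Redis = require('ioredis');
const bcrypt = require('bcrypt');

const redis = new Redis(); // connects to localhost:6379 by default

async function saveUser(user) {
  // Hash the plain text password before it goes anywhere near Redis.
  const hashedPassword = await bcrypt.hash(user.password, 10);

  // Store the user as a flat Redis Hash, one field per name/value pair.
  await redis.hset(
    `ncc:users:${user.id}`,
    'id', user.id,
    'firstName', user.firstName,
    'lastName', user.lastName,
    'email', user.email,
    'password', hashedPassword,
    'numCheckins', user.numCheckins,
    'lastCheckin', user.lastCheckin,
    'lastSeenAt', user.lastSeenAt
  );
}

async function getUser(id) {
  // HGETALL returns the Hash as a JavaScript object of string values.
  return redis.hgetall(`ncc:users:${id}`);
}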

Locations

For each location that users can check in at, we're going to maintain two types of data. The first of these is also a flat map of name/value pairs, containing summary information about the location:

{
  "id": 138,
  "name": "Stacey's Country Bakehouse",
  "category": "restaurant",
  "location": "-122.195447,37.774636",
  "numCheckins": 170,
  "numStars": 724,
  "averageStars": 4
}

We've given each location an ID and a category; we'll use the category to search for locations by type later on. The "location" field stores the coordinates in longitude,latitude format… note that this is the reverse of the more familiar latitude,longitude order. We'll see how to use this field to perform geospatial searches later when we look at Redis Search.
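
As a preview of why the longitude-first ordering matters, here's a hedged sketch of the kind of geospatial index and query Redis Search supports over a GEO field. The index name, key prefix, and the idea of creating the index from application code are illustrative assumptions, not the course's actual setup.

// geo-search-sketch.js - illustrative only; index and key names are assumptions
const Redis = require('ioredis');
const redis = new Redis();

async function createLocationIndex() {
  // Index location Hashes, declaring "location" as a GEO field and
  // "category" as a TAG field so locations can also be searched by type.
  await redis.call(
    'FT.CREATE', 'idx:locations', 'ON', 'HASH',
    'PREFIX', '1', 'ncc:locations:',
    'SCHEMA', 'category', 'TAG', 'location', 'GEO'
  );
}

async function findLocationsNear(longitude, latitude, radiusKm) {
  // Note the order inside the geo filter: longitude first, then latitude.
  return redis.call(
    'FT.SEARCH', 'idx:locations',
    `@location:[${longitude} ${latitude} ${radiusKm} km]`
  );
}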

For each location, we're also storing the total number of checkins that have been recorded there by all of our users, the total number of stars that those checkins gave the location, and an average star rating per checkin (in the example above, 724 stars across 170 checkins works out to roughly 4.26, shown here as 4).

The second type of data that we want to maintain for each location is what we'll call "location details". These take the form of more structured JSON documents with nested objects and arrays. Here's an example for location 138, Stacey's Country Bakehouse:

{
  "id": 138,
  "hours": [
    { "day": "Monday", "hours": "8-7" },
    { "day": "Tuesday", "hours": "9-7" },
    { "day": "Wednesday", "hours": "6-8" },
    { "day": "Thursday", "hours": "6-6" },
    { "day": "Friday", "hours": "9-5" },
    { "day": "Saturday", "hours": "8-9" },
    { "day": "Sunday", "hours": "7-7" }
  ],
  "socials": [
    {
      "instagram": "staceyscountrybakehouse",
      "facebook": "staceyscountrybakehouse",
      "twitter": "staceyscountrybakehouse"
    }
  ],
  "website": "www.staceyscountrybakehouse.com",
  "description": "Lorem ipsum....",
  "phone": "(316) 157-8620"
}

We want to build an API that allows us to retrieve all or some of these extra details, and keep the overall structure of the document intact. For that, we'll need Redis with JSON support as we'll see later.
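
Assuming Redis has the JSON commands available (JSON.SET and JSON.GET), a rough sketch of storing a details document and retrieving just one part of it with a JSONPath might look like this; the key name is a made-up example.

// location-details-sketch.js - illustrative only; the key name is an assumption
const Redis = require('ioredis');
const redis = new Redis();

async function saveLocationDetails(details) {
  // Store the whole document at the root path ($), keeping its nested structure.
  await redis.call(
    'JSON.SET', `ncc:locationdetails:${details.id}`, '$', JSON.stringify(details)
  );
}

async function getOpeningHours(locationId) {
  // Pull back just the "hours" array rather than the whole document.
  // With a $ path, the reply is a JSON string wrapped in an outer array.
  const reply = await redis.call(
    'JSON.GET', `ncc:locationdetails:${locationId}`, '$.hours'
  );
  return reply === null ? null : JSON.parse(reply)[0];
}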

Checkins

Checkins differ from users and locations in that they're not entities that we need to store forever. In our application, checkins consist of a user ID, a location ID, a star rating and a timestamp - we'll use these values to update attributes of our users and locations.

Each checkin can be thought of as a flat map of name/value pairs, for example:

{
  "userId": 789,
  "locationId": 171,
  "starRating": 5
}

Here, we see that user 789 visited location 171 ("Hair by Parvinder") and was really impressed with the service.

We need a way to store checkins for long enough to process them, but not forever. We also need to associate a timestamp with each one, as we'll need that when we process the data.

Redis provides a Stream data type that's perfect for this - with Redis Streams, we can store maps of name/value pairs and have the Redis server timestamp them for us. Streams are also well suited to the sort of asynchronous processing we want to do with this data. When a user posts a new checkin to our API, we want to store that data and tell the user we've received it as quickly as possible. Later, one or more other parts of the system can do further processing with it. Such processing might include updating the total number of checkins and last seen at fields for a user, or calculating a new average star rating for a location.
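
As a rough sketch of what that looks like with ioredis (the stream key name is an illustrative assumption), adding an entry with an ID of * asks the Redis server to assign a timestamp-based entry ID for us:

// checkin-stream-sketch.js - illustrative only; the stream key is an assumption
const Redis = require('ioredis');
const redis = new Redis();

async function addCheckin(checkin) {
  // Passing '*' as the entry ID asks the Redis server to generate a
  // timestamp-based ID, so we don't need to supply our own timestamp field.
  return redis.xadd(
    'ncc:checkins',
    '*',
    'userId', checkin.userId,
    'locationId', checkin.locationId,
    'starRating', checkin.starRating
  );
}

// Example: addCheckin({ userId: 789, locationId: 171, starRating: 5 })
// resolves to an entry ID such as '1544372326893-0'.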

Application Architecture

We decided to use Node.js with the Express framework and ioredis client to build the application. Rather than have a monolithic codebase, the application has been split out into four components or services. These are:

  • Authentication Service: Listens on an HTTP port and handles user authentication using Redis as a shared session store that other services can access.
  • Checkin Receiver: Listens on an HTTP port and receives checkins as HTTP POST requests from our users. Each checkin is placed in a Redis Stream for later processing.
  • Checkin Processor: Monitors the checkin Stream in Redis, updating user and location information as it processes each checkin (see the sketch after this list).
  • API Server: Implements the bulk of the application's API endpoints, including those to retrieve information about users and locations from Redis.
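
To illustrate the Checkin Processor's role, here's a hedged sketch of one way a consumer could read new entries from the checkin Stream and update the user and location Hashes. The key names, the simple blocking XREAD loop, and the rounding of the average are illustrative assumptions; the real component could just as well use consumer groups.

// checkin-processor-sketch.js - illustrative only; keys and approach are assumptions
const Redis = require('ioredis');
const redis = new Redis();

async function processCheckin(entryId, checkin) {
  const userKey = `ncc:users:${checkin.userId}`;
  const locationKey = `ncc:locations:${checkin.locationId}`;

  // Update the user: bump their checkin count, record where and when they
  // were last seen (stream entry IDs begin with a millisecond timestamp).
  await redis.hincrby(userKey, 'numCheckins', 1);
  await redis.hset(userKey,
    'lastCheckin', entryId.split('-')[0],
    'lastSeenAt', checkin.locationId
  );

  // Update the location: bump its totals, then recalculate the average.
  await redis.hincrby(locationKey, 'numCheckins', 1);
  const numStars = await redis.hincrby(locationKey, 'numStars', checkin.starRating);
  const numCheckins = await redis.hget(locationKey, 'numCheckins');
  await redis.hset(locationKey, 'averageStars', Math.round(numStars / numCheckins));
}

async function run() {
  let lastId = '0'; // start from the beginning of the stream
  for (;;) {
    // Wait up to five seconds for entries newer than lastId.
    const reply = await redis.xread(
      'COUNT', 10, 'BLOCK', 5000, 'STREAMS', 'ncc:checkins', lastId
    );
    if (!reply) continue; // timed out, try again

    const [, entries] = reply[0];
    for (const [entryId, fields] of entries) {
      // fields is a flat array: [name1, value1, name2, value2, ...]
      const checkin = {};
      for (let n = 0; n < fields.length; n += 2) {
        checkin[fields[n]] = fields[n + 1];
      }
      await processCheckin(entryId, checkin);
      lastId = entryId;
    }
  }
}

Because the Checkin Receiver only has to add an entry to the Stream and respond, these slower Hash updates happen here, off the request path.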

These components fit together like so:

[Application architecture diagram]

There's also a data loader component, which we'll use to load some initial sample data into the system.

As we progress through the course, we'll look at each of these components in turn. In the next module, you'll get hands-on and clone the application repo, start a Redis server with Docker, and load the sample data.

External Resources