AWS FOR REAL-TIME DATA APPLICATIONS

Amazon Web Services (AWS) offers a variety of services that facilitate capturing, processing, and storing real-time data. Tasked with delivering NHL Puck and Player Tracking data in a unique and insightful way, the Digital Labs team used several of these services to build a mobile app that delivers real-time data with as little lag and processing time as possible, all while keeping overhead costs to a minimum. This two-part series recounts the steps we took in researching and implementing our tech stack, and how we optimized the services we used to be powerful yet cost-effective.

PART 1: ENGINEERING AWS FOR REAL-TIME STREAMING

Earlier this year, the NHL introduced the “Puck and Player Tracking” system, a technology that tracks the movement of the puck and each individual player during a game. The system captures data using sensors embedded in the players’ jerseys and in the pucks, along with sensors and optical tracking set up in the arena. The result is multiple coordinates per second for each entity, yielding an enormous volume of data.

The NHL tested the system during the 2020 All-Star Game in St. Louis and planned to officially launch it during the playoffs. The data captured by this technology will eventually be accessible in real time, which opens up a variety of opportunities to enhance fan engagement and offer unique insights into the game. Digital Labs was given the opportunity to work with the test data captured during the All-Star Game.

Displaying Real-Time Data in “Real-Time”

One of the exciting properties of streaming data is that you can use it to display things like changing speed and acceleration “live” to fans. Coordinates captured at an extremely high frequency can be used to create very accurate movement and game-flow visualizations. The data can also be processed to provide unique statistical insights into the game, such as time spent in the offensive or defensive zones. We started planning to include all these features (and more!) in a fan-focused mobile app.

Our main challenge in configuring the data pipeline for this app was avoiding processing time, or ‘lag’, while passing data to the app. For the experience to be appealing, data must flow from the capturing system to the user endpoint nearly instantaneously. The appeal of real-time data is feeling like you are “right there in the action”. Seeing something off – like a player’s speed continuing to increase after the player has already left the ice – would ruin the experience. We knew we needed to do some research to determine which services would provide the most seamless user experience.

Our Original AWS Pipeline

Amazon Kinesis

First, we explored how to manage the data received from the tracking system. The tracking data was provided in a streaming loop to simulate real time, and we wrote a Python program to ingest it over a WebSocket connection for minimal broadcast delay. We investigated Amazon Kinesis for processing the incoming data. Kinesis is a service that can ingest large amounts of data, process it, and provide real-time analytics. It can also be used in tandem with other services, such as Kinesis Data Firehose and AWS Lambda, to capture data and write it to general storage or a database.
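To make the ingestion step concrete, here is a minimal sketch, assuming a hypothetical feed URL and stream name (the websocket-client and boto3 libraries stand in for whatever the production program used):

```python
# Minimal ingestion sketch: read tracking messages from a WebSocket feed and
# push them to a Kinesis stream. The feed URL and stream name are placeholders.
import boto3
from websocket import create_connection  # pip install websocket-client

kinesis = boto3.client("kinesis")
ws = create_connection("wss://example.com/tracking-feed")  # hypothetical feed URL

while True:
    message = ws.recv()  # one payload of coordinate data
    data = message if isinstance(message, bytes) else message.encode("utf-8")
    kinesis.put_record(
        StreamName="puck-player-tracking",  # hypothetical stream name
        Data=data,
        PartitionKey="tracking",  # a real pipeline would vary this per entity
    )
```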

Lambda and DynamoDB

Next, we explored how to process and store the data from Kinesis with as little delay as possible. Our Amazon support team suggested using a Lambda function. AWS Lambda is a compute service that executes code in response to a trigger (in our case, incoming data from Kinesis). It is a lightweight, event-driven alternative to a continuously running virtual machine, and its scalability makes it useful for quick processing of streaming data. We set up a Lambda function to sort the streaming data by category and write it to the corresponding table in DynamoDB. From there, the data could be accessed by the application through a mobile framework.
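A simplified version of such a handler might look like the following; the category field, table names, and payload shape are illustrative assumptions rather than our actual schema:

```python
# Sketch of the Kinesis-triggered Lambda: decode each record and route it to a
# DynamoDB table by category. Table names and the "category" field are assumed.
import base64
import json

import boto3

dynamodb = boto3.resource("dynamodb")
TABLES = {
    "puck": dynamodb.Table("PuckCoordinates"),      # hypothetical table names
    "player": dynamodb.Table("PlayerCoordinates"),
}

def handler(event, context):
    for record in event["Records"]:
        # Kinesis delivers each payload base64-encoded inside the event.
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        table = TABLES.get(payload.get("category"))
        if table is not None:
            table.put_item(Item=payload)
```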

AWS AppSync

We then investigated which Amazon Web Service would work best to deliver real-time data to our mobile application. We found a few great AWS services that facilitate mobile app development but settled on AWS AppSync because it has features specifically geared towards processing real-time data. AppSync is a service that uses GraphQL to manage and deliver data to applications. It is easy to set up through the AWS AppSync management console, which can generate your API and client-side code. What appealed to us was its ability to handle real-time subscriptions – the “automatic” updating of data displayed on the screen – which would spare users from refreshing to get updates. We now had a functioning delivery system.

With the streaming data pipeline in place, the time had come to put our app to the test. The Puck and Player Tracking app worked, but it didn’t appear to update automatically using the AppSync real-time subscriptions (as the documentation suggested it should).

Challenges with AppSync Real-Time Subscriptions

Further exploration led us to conclude that the issue was caused by the way AppSync relays database updates to users. Real-time subscriptions were designed for multi-user systems, like chat applications. When a user enters a message, the server receives the submission through AppSync and stores it in a database. Each new entry triggers a database mutation, and AppSync automatically pushes a refresh of the chat text to all users.

The Puck and Player Tracking application operates a little differently. The database receives data updates only from the NHL feed, and these updates do not automatically generate a database mutation through AppSync. Without the mutation calls, AppSync never triggers a refresh through the subscription, and app users never receive the new data.

Our solution was to manually generate a mutation call for each record processed in Lambda, passing the real-time data from the NHL feed on to users and prompting a refresh. The AWS team confirmed this approach as the best practice.
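Concretely, that meant having the Lambda function POST a GraphQL mutation to the AppSync endpoint for each record it processed, so that any subscription wired to the mutation would fire. The sketch below assumes API-key authentication and placeholder endpoint, mutation, and field names:

```python
# Sketch: fire an AppSync subscription by calling a mutation per record.
# Endpoint, API key, and the updateTrackingData mutation are placeholders; the
# real schema would tie a subscription to the mutation, e.g.:
#   onUpdateTrackingData: TrackingData
#       @aws_subscribe(mutations: ["updateTrackingData"])
import json
import urllib.request

APPSYNC_URL = "https://example.appsync-api.us-east-1.amazonaws.com/graphql"  # hypothetical
API_KEY = "da2-xxxxxxxx"  # hypothetical; IAM-signed requests are another option

MUTATION = """
mutation Update($input: TrackingDataInput!) {
  updateTrackingData(input: $input) { id category timestamp }
}
"""

def publish(item):
    body = json.dumps({"query": MUTATION, "variables": {"input": item}})
    req = urllib.request.Request(
        APPSYNC_URL,
        data=body.encode("utf-8"),
        headers={"Content-Type": "application/json", "x-api-key": API_KEY},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

Sticking to urllib from the standard library keeps the Lambda deployment package free of extra dependencies.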

Creating a Prototype on a Deadline

With the app demo date approaching, we took another look at the latency of the real-time data features. Upon further investigation, we noticed that Kinesis seemed to be taking an unusually long time to process the incoming information. Data appeared to get “stuck”, and the delivery lag slowly grew, until the data displayed by the app was hours behind real time.

Given the time constraints on the project, we researched what temporary changes we could make to get our app running at top speed for the upcoming demo. We tried a few different solutions, including increasing the number of Kinesis shards to over 200 and tweaking the Lambda settings to accept more incoming data with each invocation. We also added parallel processing to the Python ingestion program, so that incoming data from the Puck and Player Tracking source could be distributed to Kinesis more efficiently (sketched below). These modifications provided a satisfactory improvement in latency, and we felt confident in the final configuration and ready to debut our app.
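For illustration, the parallelized ingestion amounted to batching messages and fanning the writes out across worker threads, roughly along these lines (batch size, worker count, and stream name are all assumptions):

```python
# Sketch of parallelized ingestion: batch incoming messages and write each
# batch to Kinesis from a pool of worker threads. All names/sizes are assumed.
from concurrent.futures import ThreadPoolExecutor

import boto3

kinesis = boto3.client("kinesis")
STREAM = "puck-player-tracking"  # hypothetical stream name

def put_batch(messages):
    # put_records accepts up to 500 records per call; varying the partition
    # key spreads the records across shards.
    kinesis.put_records(
        StreamName=STREAM,
        Records=[
            {"Data": m.encode("utf-8"), "PartitionKey": str(i)}
            for i, m in enumerate(messages)
        ],
    )

def ingest(message_iter, batch_size=100, workers=8):
    with ThreadPoolExecutor(max_workers=workers) as pool:
        batch = []
        for message in message_iter:
            batch.append(message)
            if len(batch) == batch_size:
                pool.submit(put_batch, batch)
                batch = []
        if batch:
            pool.submit(put_batch, batch)
```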

Demo Day

The day before our big Puck and Player Tracking App reveal, the COVID-19 pandemic hit and everything changed. The world of sports ground to a halt and our demo was cancelled. With revenue severely depleted for the foreseeable future, the $800 daily cost for sustaining the infrastructure suddenly became a glaring problem. Having nothing but time in quarantine, we set out to review our AWS infrastructure with the goal of substantially reducing costs. Read about our approach in Part 2 of this series.