Posted by Katie Tregurtha on September 23, 2020

The IT Infrastructure Powering the New Live Event

Prior to the COVID-19 pandemic, companies running large-scale events were already experimenting with new ways to engage their audiences. Conferences adopted applications to reach virtual attendees, sporting events were increasingly streamed across devices and platforms, and in-person events often used apps to communicate with onsite attendees.

Unsurprisingly, the pandemic and resulting social distancing requirements greatly accelerated the adoption of these technologies to make the switch from in-person to virtual. However, running a 100% virtual event these days (even if you’re accustomed to live-streaming events) may require further investment in your IT infrastructure.

In a session for our recently held INS1GHTS2020 conference, NS1 Principal Engineer James Royalty provided some best practices on successfully live-streaming events based upon our own recent work with customers.

The Requirements

Earlier this year, we worked with a customer to manage the live-streaming portion of a large-scale sporting event. The event was scheduled to last about 4 hours, and the customer expected a few million concurrent users accessing the live stream at any one time. The event was also attended in person and broadcast on television, so there was no room for lag or downtime with the streams.

User traffic was expected to come predominantly from North America, though the event was streamed globally. To account for this, the customer had a multi-CDN setup of 5 CDNs with various commits.

Specific technical requirements included:

  • Cloud co-located HTTP
  • Deciding which CDN to send each user to at the start of the stream
  • A 50ms budget for HTTP-based decisions
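To make the latency budget concrete, here is a minimal sketch of a decision function that must answer within a fixed time budget, falling back to a safe default if it runs out of time. All names and the telemetry shape are illustrative assumptions, not NS1's actual API.

```python
import time

BUDGET_SECONDS = 0.050  # the 50 ms budget cited above

def decide_cdn(candidates, telemetry, budget=BUDGET_SECONDS):
    """Pick the best CDN for this viewer within the latency budget.

    `telemetry` maps CDN name -> a recent performance measurement (ms).
    If the loop threatens to blow the budget, return the best answer so far.
    """
    deadline = time.perf_counter() + budget
    best, best_rtt = None, float("inf")
    for cdn in candidates:
        if time.perf_counter() >= deadline:
            break  # out of time: answer with what we have
        rtt = telemetry.get(cdn, float("inf"))
        if rtt < best_rtt:
            best, best_rtt = cdn, rtt
    # If nothing was measured in time, fall back to a static default.
    return best or candidates[0]
```

For example, `decide_cdn(["cdn-a", "cdn-b"], {"cdn-a": 42, "cdn-b": 35})` returns `"cdn-b"`. The key design point is that the budget bounds worst-case response time, not average: a slow decision is worse than a merely adequate one at stream start.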

The Solution

Automated, intelligent traffic steering was the key to meeting these requirements, so we created a solution for the customer using Pulsar. We embedded a Pulsar JavaScript within the primary streaming webpage, as well as within a secondary webpage that was also expected to be highly trafficked.
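The embedded tag's job is to measure delivery performance from the viewer's own vantage point. As a rough sketch of the idea, one timed, segment-sized download can be converted into an effective throughput figure; the segment size and function below are illustrative assumptions, not Pulsar's actual code (which runs as JavaScript in the browser).

```python
# Assumed size: ~4 MB, roughly one HD video segment.
SEGMENT_BYTES = 4_000_000

def effective_mbps(bytes_downloaded, seconds):
    """Convert one timed download into megabits per second."""
    return (bytes_downloaded * 8) / (seconds * 1_000_000)

# A 4 MB segment fetched in 0.4 s measures as an effective 80 Mbps.
rate = effective_mbps(SEGMENT_BYTES, 0.4)
```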

The JavaScript measured the equivalent of one HD video segment. That data was fed into our decision engine, the Filter Chain. The Filter Chain is highly customizable, but in this instance we configured the following filters:

  • “Kill switch” to immediately avoid any CDNs that were down or experiencing issues
  • Weighted shuffle - assign a weight to each CDN to split user traffic (e.g., send 30% to one CDN, 40% to another, and so on)
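The two filters above can be sketched as follows. This is an illustrative model of the behavior described, not NS1's actual Filter Chain API: the kill switch drops any CDN flagged as down, and the weighted shuffle then splits the remaining traffic by configured weight.

```python
import random

def kill_switch(cdns, down):
    """Remove any CDN flagged as down or experiencing issues."""
    return [c for c in cdns if c["name"] not in down]

def weighted_pick(cdns, rng=random):
    """Choose one CDN with probability proportional to its weight."""
    total = sum(c["weight"] for c in cdns)
    r = rng.uniform(0, total)
    for c in cdns:
        r -= c["weight"]
        if r <= 0:
            return c["name"]
    return cdns[-1]["name"]  # guard against floating-point edge cases

cdns = [
    {"name": "cdn-a", "weight": 30},
    {"name": "cdn-b", "weight": 40},
    {"name": "cdn-c", "weight": 30},
]
# Suppose cdn-c has tripped the kill switch:
healthy = kill_switch(cdns, down={"cdn-c"})
choice = weighted_pick(healthy)  # cdn-a or cdn-b, in 30:40 proportion
```

Note that removing a CDN implicitly renormalizes the remaining weights, so the surviving CDNs absorb the dropped CDN's share of traffic.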

We also utilized our Pulsar Stabilize functionality, which eliminates poor-performing CDNs based on a preset threshold. It essentially works as an “insurance policy” on top of the other filters, as it can “edit” other filters to drive traffic based upon real-time conditions. This prevents flapping and keeps all viable options open.
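One common way to prevent the flapping mentioned above is hysteresis: a CDN is excluded once its measurements breach a trip threshold, but only readmitted after recovering past a stricter restore threshold. The sketch below illustrates that general technique; the thresholds and data shapes are assumptions for illustration, not Stabilize's actual implementation.

```python
TRIP_MS = 200     # exclude a CDN once its segment time exceeds this
RESTORE_MS = 150  # readmit it only after it recovers below this

def stabilize(measurements_ms, excluded):
    """Return the updated set of excluded CDNs given fresh measurements."""
    updated = set(excluded)
    for cdn, ms in measurements_ms.items():
        if ms > TRIP_MS:
            updated.add(cdn)
        elif cdn in updated and ms < RESTORE_MS:
            updated.discard(cdn)
    return updated
```

Because a CDN measuring between 150 ms and 200 ms neither trips nor restores, a borderline CDN cannot oscillate in and out of rotation on every measurement cycle.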

Afterwards, the customer was able to use the data collected by Pulsar to refine their strategy for future events. Over the course of the event, we saw approximately 60 million total requests, with HTTP response times of 4ms at the 99th percentile.

To learn more about managing multi-CDN during a live event and key performance results for this customer, check out the recording of this presentation from INS1GHTS 2020.

Improving the live event experience beyond COVID-19

The changes made during the pandemic to enable a semblance of normalcy, such as livestreaming previously in-person events, will have a lasting impact. Going forward, many events may be permanently “hybrid”, with some attendees onsite and others streaming. While initially more complicated to manage than the traditional live event, the long-term ROI is the ability to reach a much larger audience than previously possible.


Interested in a deeper dive into managing livestream events?

Check out our guide, How to Successfully Deliver a Large-Scale Livestream Event.