Recently, there has been great interest in ST 2110 technology in the video production environment. Video production professionals understand its potential advantages and want to be able to transfer large volumes of data within tight time constraints. We aim to help them.
The SMPTE ST 2110 suite of standards was published in 2017, following the ST 2022 suite. Both suites were meant to fulfil the same desire of video engineers: to make video signal transfer simpler and cheaper. After all, now that video data is fully digital, why not use a simpler transmission method? When it first appeared, however, the adoption of ST 2110 was held back by its cost, which made it unaffordable for most video productions.
Now, thanks to advances in electronics and software, as well as the work of Blackmagic Design, ST 2110 can become a breakthrough transmission technology for video production, accessible to a wide audience.
Methods of media transmission
The first type of cable used for transmitting video signals was the coaxial cable.
Then, in the 1980s and 1990s, composite video cables, SDI (Serial Digital Interface) cables, S-video cables, and component video cables were developed.
In the late 1990s and early 2000s, there was a gradual transition to digital transmission of media data, and SDI began carrying additional information as multiplexed signals, still over a single coaxial cable.
- Written by: Igor Vitiorets, CTO at slomo.tv
Slomo.tv servers traditionally work with embedded audio. In SDI, the audio is transmitted together with the video signal, which keeps audio and video synchronized. Since audio synchronization is performed by specialized equipment external to the slomo.tv servers, it is the sound engineer's responsibility to ensure high audio quality and monitor the audio channels; the slomo.tv server only records the incoming audio or plays it back. This workflow significantly reduces the number of required connections and cables.
In the case of non-embedded audio, namely analog, AES/EBU, MADI or DANTE® signals, the audio may be digitized at a rate that is not synchronous with the video. As a result, the number of incoming audio samples per frame may be higher or lower than the standard requires. Normalizing the number of samples would require additional processing, which could degrade quality and, most importantly, would shift the responsibility for audio quality onto the server.
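As a minimal illustration of that sample-count arithmetic, here is a short Python sketch (not slomo.tv code); the 48 kHz rate is the broadcast standard, while the 50 ppm drift figure and the function name are illustrative assumptions:

```python
# Why asynchronous audio clocks produce a varying number of samples per video frame.
from fractions import Fraction

AUDIO_RATE = 48_000  # Hz, the broadcast-standard audio sampling rate

def samples_per_frame(frame_rate: Fraction) -> Fraction:
    """Exact number of audio samples that must accompany one video frame."""
    return Fraction(AUDIO_RATE) / frame_rate

# Integer frame rates divide evenly: exactly 1920 samples per frame at 25 fps.
print(samples_per_frame(Fraction(25)))            # 1920

# Fractional NTSC-family rates do not: 29.97 fps needs 8008/5 samples per frame,
# so embedders distribute 1602/1601/1602/1601/1602 samples over a 5-frame cycle.
print(samples_per_frame(Fraction(30000, 1001)))   # 8008/5

# If a free-running audio ADC clock is only 50 ppm fast, the surplus samples
# accumulate and have to be dropped or resampled somewhere:
drift_ppm = 50
extra_per_hour = AUDIO_RATE * 3600 * drift_ppm / 1e6
print(f"{extra_per_hour:.0f} surplus samples per hour")  # ~8640 (about 0.18 s of audio)
```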
Since multichannel audio is actively used in the video industry, slomo.tv has to provide the ability to input and output such audio. This is handled by dedicated subsystems that embed the required number of audio signals into any SDI video channel and de-embed them again. These “digital glue” devices are typically used in studios and OB vans to get the required result quickly and with a minimum number of connections.
For working with different audio sources and formats we offer several options, all of which involve using embedders to insert analog or digital audio into an SDI signal.
- Written by: Igor Vitiorets, CTO at slomo.tv
Working with video signals implies monitoring the video images. There are many reasons for this: from checking the connection and signal quality to monitoring events in the video.
When dealing with only one signal, everything is quite simple: a video monitor is installed to display the input signal. It becomes much more complicated when there are 6, 8, 16 or even more video signals: installing a dedicated monitor for each one is impractical and requires a lot of space. A device called a Multiviewer solves this problem.
Multiviewer for multiple channels
The Multiviewer receives video signals, then processes, scales and arranges them, and outputs the result to one or more video monitors, depending on the particular Multiviewer used.
A Multiviewer is a must-have for systems with multichannel recording functionality. There are at least three types of such systems: recording servers for NLE, video replay servers, and Video Assistant Referee (VAR) systems.
Simple systems often use an external Multiviewer to display input video signals, but all professional systems, including the EVS, GVG and slomo.tv servers, have a built-in Multiviewer subsystem. The slomo.tv servers even have two completely independent Multiviewers: one integrated into the main interface and one running on a separate monitor.
The signals of multichannel recording systems can be divided into two types: live and recorded. The Multiviewer of such systems must process both types of signals.
A standard replay server records live signals, plays back one channel (Program), and allows an operator to work with one recorded channel for search and markup of clips (Preview). Thus, a standard replay Multiviewer displays all live signals, Program and Preview.
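As a rough illustration, the sketch below shows one way such a layout could be computed: all live inputs in a uniform grid, with larger Program and Preview windows above. This is a simplified assumption of how a multiviewer might tile its windows, not the actual slomo.tv implementation:

```python
# Simplified multiviewer tiling: Program and Preview on top, live inputs in a grid below.
import math
from dataclasses import dataclass

@dataclass
class Tile:
    label: str
    x: int
    y: int
    w: int
    h: int

def layout(live_inputs: int, screen_w: int = 1920, screen_h: int = 1080) -> list[Tile]:
    tiles = []

    # Top half: Program and Preview side by side.
    top_h = screen_h // 2
    tiles.append(Tile("Program", 0, 0, screen_w // 2, top_h))
    tiles.append(Tile("Preview", screen_w // 2, 0, screen_w // 2, top_h))

    # Bottom half: a uniform grid holding every live input.
    cols = math.ceil(math.sqrt(live_inputs * 2))  # more columns than rows, 16:9-friendly
    rows = math.ceil(live_inputs / cols)
    cell_w, cell_h = screen_w // cols, (screen_h - top_h) // rows
    for i in range(live_inputs):
        r, c = divmod(i, cols)
        tiles.append(Tile(f"Cam {i + 1}", c * cell_w, top_h + r * cell_h, cell_w, cell_h))
    return tiles

# Eight cameras on a Full HD monitor: each live tile comes out 480x270 (16:9).
for t in layout(live_inputs=8):
    print(t)
```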
- Written by: Igor Vitiorets, CTO at slomo.tv
The implementation of Video Assisted Review (VAR) technology allows referees to review controversial moments and make the right decisions. The monitor used to review video clips is one of the key elements of VAR technology and the referee's point of contact with the VAR team.
Depending on the sport and its specifics, replay review can be organized in different ways. In some cases the referee can watch the replay at the VAR team's workplace. If this is not possible, or for more comfortable work, a Referee Review Area (RRA) with a dedicated monitor is set up at the edge of the playing area.
How is the video delivered to that monitor? This can be done by duplicating the VAR monitor using an HDMI splitter. However, organizing the Referee Review Area is not as simple as it may seem at first glance.
- Written by: Igor Vitiorets, CTO at slomo.tv
Video Assistant Referee technology, or Video Assisted Review (the name introduced by FINA and, in our opinion, an excellent description of the video-refereeing process in any sport), is rapidly being adopted in all sports, including those with limited budgets. The development of affordable VAR systems is therefore an important modern trend.
VAR is a replay system with specialized functionality, and a software-defined server architecture is virtually the only viable option for such systems. In these servers almost all operations are performed by the CPU.
One of the key operations is encoding and decoding the uncompressed incoming video. This is performed by a software component called the video codec. A video codec is defined by two parameters: the compression standard and the way that compression is implemented.
Let's talk about the main points and principles on which a codec for VAR systems should be selected. To choose the "right codec", we first need evaluation criteria that allow at least an initial selection to be made among the candidates.
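One such criterion is simple throughput: can the codec sustain the raw data rate of all channels in real time, and what does its compression ratio mean for storage? The back-of-the-envelope Python sketch below illustrates the arithmetic; the channel count and compression ratio are illustrative assumptions, not recommendations:

```python
# Rough sizing of what a software codec in a VAR server has to handle in real time.

def uncompressed_gbps(width: int, height: int, fps: float, bits_per_pixel: int = 20) -> float:
    """Raw video bandwidth in Gbit/s (20 bpp corresponds to 10-bit 4:2:2)."""
    return width * height * fps * bits_per_pixel / 1e9

raw = uncompressed_gbps(1920, 1080, 50)           # 1080p50, 10-bit 4:2:2
print(f"raw per channel: {raw:.2f} Gbit/s")       # ~2.07 Gbit/s

channels = 8     # a small multi-camera VAR setup (assumption)
ratio = 10       # intra-frame codec, visually lossless compression (assumption)
compressed_mbps = raw * 1000 / ratio
storage_gb_per_hour = compressed_mbps * 3600 / 8 / 1000

print(f"compressed per channel: {compressed_mbps:.0f} Mbit/s")                     # ~207 Mbit/s
print(f"storage, {channels} channels, 1 hour: {channels * storage_gb_per_hour:.0f} GB")  # ~746 GB
```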
- Written by: Igor Vitiorets, CTO

