
Up until recently, when using hlscmafsink, if you wanted to move to a new playlist you had to stop the pipeline, modify the relevant properties, and then set the pipeline back to PLAYING.

This was problematic when working with live sources, because data was lost during the state changes. Not anymore!

A new-playlist signal has been added, which lets you switch output to a new location on the fly, without any gaps between the content in each playlist.

Simply change the relevant properties first and then emit the signal:

hlscmafsink.set_property("playlist-location", new_playlist_location);
hlscmafsink.set_property("init-location", new_init_location);
hlscmafsink.set_property("location", new_segment_location);
hlscmafsink.emit_by_name::<()>("new-playlist", &[]);

This can be useful if you're capturing a live source and want to switch to a different folder every couple of hours, for example.
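For instance, here's a sketch of that periodic switch in Rust. The property and signal names are from the snippet above; the directory layout, helper name and scheduling are purely illustrative:

use gst::prelude::*;

// Hypothetical helper: point hlscmafsink at a new output directory and
// start a new playlist there, without interrupting the pipeline.
fn switch_playlist(sink: &gst::Element, dir: &str) {
    sink.set_property("playlist-location", format!("{dir}/manifest.m3u8"));
    sink.set_property("init-location", format!("{dir}/init.mp4"));
    sink.set_property("location", format!("{dir}/segment%05d.m4s"));
    sink.emit_by_name::<()>("new-playlist", &[]);
}

// Switch to a fresh directory every two hours from the glib main loop.
let sink = hlscmafsink.clone();
let mut index = 0u32;
glib::timeout_add_seconds(2 * 60 * 60, move || {
    index += 1;
    let dir = format!("/data/recordings/part{index:03}");
    std::fs::create_dir_all(&dir).unwrap();
    switch_playlist(&sink, &dir);
    glib::ControlFlow::Continue
});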



What is JPEG XS?

JPEG XS is a visually lossless, low-latency, intra-only video codec for video production workflows, standardised in ISO/IEC 21122.

It's wavelet based, with low computational overhead and a latency measured in scanlines, and it is designed to allow easy implementation in software, GPU or FPGAs.

Multi-generation robustness means repeated decoding and encoding will not introduce unpleasant coding artefacts or noticeably degrade image quality, which makes it suitable for video production workflows.

It is often deployed in lieu of existing raw video workflows, where it allows sending multiple streams over links designed to carry a single raw video transport.

JPEG XS encoding / decoding in GStreamer

GStreamer has now gained basic support for this codec.

Encoding and decoding are supported via the Open Source Intel Scalable Video Technology JPEG XS library, but third-party GStreamer plugins that provide GPU-accelerated encoding and decoding exist as well.
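For example, assuming the SVT-based elements are named svtjpegxsenc and svtjpegxsdec (the element names here are an assumption), a simple encode/decode round trip could look like this:

gst-launch-1.0 videotestsrc num-buffers=300 ! svtjpegxsenc ! svtjpegxsdec ! videoconvert ! autovideosink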

MPEG-TS container mapping

Support was also added for carriage inside MPEG-TS, which should enable a wide range of streaming applications, including those based on the Video Services Forum (VSF)'s Technical Recommendation TR-07.
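Building on that, a sketch of recording to an MPEG-TS file, with the same assumed encoder element name as above:

gst-launch-1.0 videotestsrc ! svtjpegxsenc ! mpegtsmux ! filesink location=jpegxs.ts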

JPEG XS caps in GStreamer

It took us a few iterations to come up with GStreamer caps that we were reasonably happy with, for starters.

Our starting point was what the SVT encoder and decoder output and consume, and our initial target was support for the MPEG-TS container format.

We checked various specifications to see how JPEG XS is mapped there and how it could work, in particular:

  • ISO/IEC 21122-3 (Part 3: Transport and container formats)
  • MPEG-TS JPEG XS mapping and VSF TR-07 - Transport of JPEG XS Video in MPEG-2 Transport Stream over IP
  • RFC 9134: RTP Payload Format for ISO/IEC 21122 (JPEG XS)
  • SMPTE ST 2124:2020 (Mapping JPEG XS Codestreams into the MXF)
  • MP4 mapping

and we think the current mapping will work for all of those cases.

Basically, each mapping wants some extra headers in addition to the codestream data, for the out-of-band signalling required to make sense of the image data. Originally we thought about putting some form of codec_data header into the caps, but it wouldn't really have made anything easier, and would just have duplicated 99% of the info that's already in the video caps anyway.

The current caps mapping is based on ISO/IEC 21122-3, Annex D, with additional metadata in the caps, which should hopefully work just fine for RTP, MP4, MXF and other mappings in future.

Please give it a spin, and let us know if you have any questions or are interested in additional container mappings such as MP4 or MXF, or RTP payloaders / depayloaders.



When using hlssink3 and hlscmafsink elements, it's now possible to track new fragments being added by listening for the hls-segment-added message:

Got message #67 from element "hlscmafsink0" (element): hls-segment-added, location=(string)segment00000.m4s, running-time=(guint64)0, duration=(guint64)3000000000;
Got message #71 from element "hlscmafsink0" (element): hls-segment-added, location=(string)segment00001.m4s, running-time=(guint64)3000000000, duration=(guint64)3000000000;
Got message #74 from element "hlscmafsink0" (element): hls-segment-added, location=(string)segment00002.m4s, running-time=(guint64)6000000000, duration=(guint64)3000000000;

This is similar to how you would listen for splitmuxsink-fragment-closed when using the older hlssink2.
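For example, a minimal Rust sketch that pulls messages off the pipeline bus and prints each new segment (the message and field names are taken from the log above; the pipeline setup itself is assumed):

use gst::prelude::*;

// Pull messages from the bus and log every "hls-segment-added" message.
let bus = pipeline.bus().unwrap();
for msg in bus.iter_timed(gst::ClockTime::NONE) {
    match msg.view() {
        gst::MessageView::Element(element) => {
            if let Some(s) = element.structure() {
                if s.name() == "hls-segment-added" {
                    let location = s.get::<&str>("location").unwrap();
                    let running_time = s.get::<u64>("running-time").unwrap();
                    let duration = s.get::<u64>("duration").unwrap();
                    println!("segment {location}: running-time {running_time} ns, duration {duration} ns");
                }
            }
        }
        gst::MessageView::Eos(..) | gst::MessageView::Error(..) => break,
        _ => (),
    }
}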



webrtcsink already supported instantiating a data channel for the sole purpose of carrying navigation events from the consumer to the producer. It can now also create a generic control data channel, through which the consumer can send JSON requests of the form:

{
    "id": identifier used in the response message,
    "mid": optional media identifier the request applies to,
    "request": {
        "type": currently "navigationEvent" and "customUpstreamEvent" are supported,
        "type-specific-field": ...
    }
}

The producer will reply with messages of the form:

{
  "id": identifier of the request,
  "error": optional error message, successful if not set
}

The example frontend was also updated with a text area for sending arbitrary requests.

The use case for this work was to make it possible for a consumer to control the mix matrix used for the audio stream, with a pipeline such as this running on the producer side:

gst-launch-1.0 audiotestsrc ! audioconvert ! webrtcsink enable-control-data-channel=true

As audioconvert now supports setting a mix matrix through a custom upstream event, the consumer can simply input the following text in the request field of the frontend to reverse the channels of a stereo audio stream:

{
  "type": "customUpstreamEvent",
  "structureName": "GstRequestAudioMixMatrix",
  "structure": {
    "matrix": [[0.0, 1.0], [1.0, 0.0]]
  }
}
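The same request can also be issued from application code, bypassing the data channel, by sending the custom upstream event directly. A minimal sketch, assuming the 'matrix' field is an array of per-output-channel rows of f32 gains (the exact GValue layout is our assumption):

use gst::prelude::*;

// Build the "GstRequestAudioMixMatrix" structure with a matrix that
// swaps the channels of a stereo stream, then send it upstream from
// the sink so it reaches audioconvert.
let matrix = gst::Array::new([
    gst::Array::new([0.0f32, 1.0f32]), // output channel 0 <- input channel 1
    gst::Array::new([1.0f32, 0.0f32]), // output channel 1 <- input channel 0
]);
let s = gst::Structure::builder("GstRequestAudioMixMatrix")
    .field("matrix", matrix)
    .build();
let _ = webrtcsink.send_event(gst::event::CustomUpstream::new(s));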


GStreamer's VideoToolbox encoder recently gained support for encoding HEVC/H.265 videos containing an alpha channel.

A separate vtenc_h265a element has been added for this purpose. Assuming you're on macOS, you can use it like this:

gst-launch-1.0 -e videotestsrc ! alpha alpha=0.5 ! videoconvert ! vtenc_h265a ! mp4mux ! filesink location=alpha.mp4

Click here to see an example in action! It should work fine on macOS and iOS, in both Chrome and Safari. On other platforms it might not be displayed at all - compatibility is unfortunately still quite limited.

If your browser supports this format correctly, you will see a moving GStreamer logo on a constantly changing background - something like this. That background is entirely separate from the video and is generated using CSS.



The default signaller for webrtcsink can now produce an answer when the consumer sends the offer first.

To test this with the example, you can simply follow the usual steps but also paste the following text in the text area before clicking on the producer name:

{
  "offerToReceiveAudio": 1,
  "offerToReceiveVideo": 1
}

I implemented this in order to test multiopus support with webrtcsink, as it seems to work better when munging the SDP offered by Chrome.



A couple of weeks ago I implemented support for static HDR10 metadata in the decklinkvideosink and decklinkvideosrc elements for Blackmagic video capture and playout devices. The culmination of this work is available in MR 7124 - decklink: add support for HDR output and input.

This adds support for both PQ and HLG HDR alongside some improvements in colorimetry negotiation. Static HDR metadata in GStreamer is conveyed through caps.

The first part of this is the 'colorimetry' field in video/x-raw caps. decklinkvideosink and decklinkvideosrc now support the colorimetry values 'bt601', 'bt709', 'bt2020', 'bt2100-hlg', and 'bt2100-pq' for any resolution. Previously, the colorimetry used was fixed based on the resolution of the video frames being sent or received. With some glue code, the colorimetry is now retrieved from the Decklink API, and the Decklink API can ask us for the colorimetry of the submitted video frame. Not all Decklink devices support arbitrary colorimetry; when it isn't supported, we fall back to the previous fixed list based on frame resolution.

Support for HDR metadata is a separate feature flag in the Decklink API and may or may not be present independently of Decklink's arbitrary colour space support. If the Decklink device does not support HDR metadata, then the colorimetry values 'bt2100-hlg' and 'bt2100-pq' are not supported.

In the case of HLG, all that is necessary is to provide information that the HLG gamma transfer function is being used. Nothing else is required.
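In launch-line terms, setting the right colorimetry on the caps is enough for that. A minimal sketch, assuming a Decklink device that supports arbitrary colorimetry:

gst-launch-1.0 videotestsrc ! video/x-raw,format=UYVY,colorimetry=bt2100-hlg ! decklinkvideosink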

In the case of PQ HDR, in addition to the correct gamma transfer function, Decklink also needs some other metadata, conveyed in the caps in the form of the 'mastering-display-info' and 'light-content-level' fields. With some support from GstVideoMasteringDisplayInfo and GstVideoContentLightLevel, the relevant information is signalled to Decklink and can be retrieved from each individual video frame.