YSTV 3D: Live
The spiritual successor to YSTV 3D. Developed in Summer 2014, the aim was to do a full live broadcast... in three dimensions.
The Idea
During a late-night IRC discussion, the idea of applying to host FreshersTV 2014 was suggested. After all, we'd totally learnt our lesson from two years previous, and most of the people involved with that had graduated.
The bidding guidelines that NaSTA had kindly provided advised the following:
- How will you be broadcasting? What impact will your technical requirements have on other stations participating? (Ridiculous example: If you’re broadcasting in 3D how are stations broadcasting in 2D going to be handled?) And what, if any, technical achievements will the broadcast merit for your station?
Naturally, someone suggested we do the ridiculous.
The Plan
An hour or two later, the techies had figured out most of a plan. The crux of the problem was how to vision mix in 3D using a single 2D mixer. As it turned out, the answer was fairly simple.
The left-hand image would be vision mixed using the normal program bank, while the right-hand image would be vision mixed using a downstream keyer. With the downstream keyer live, both channels could be output over SDI as the clean and dirty program feeds. The advantage of this was that the two available downstream keyers could be tied to follow the program and preview banks, allowing the full suite of transitions to be used to mix between shots on both the left and right channels.
Using the ATEM API, a small applet was written to provide frame-accurate switching of the downstream keyers. It would intercept the user's keystrokes and select the appropriate cameras for the left channel (on the normal program bank) and the right channel (on the downstream keyer).
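As a rough illustration of that switching logic, here is a minimal Python sketch that maps a single keystroke to a paired left/right cut. The atem client object and its method names (set_program_input, set_dsk_fill, set_dsk_on_air) are hypothetical placeholders rather than real Blackmagic ATEM SDK calls, and the input numbers are made up; only the one-keystroke, two-channel behaviour comes from the plan above.

# Keys map to stereo camera pairs: (left-eye input, right-eye input).
# Input numbers are illustrative, not the ones wired up on the day.
CAMERA_PAIRS = {
    "1": (1, 2),
    "2": (3, 4),
    "3": (5, 6),
}

DSK_INDEX = 0  # the downstream keyer carrying the right-eye channel


def cut_to_pair(atem, key):
    """Switch both eyes to the camera pair bound to this key."""
    if key not in CAMERA_PAIRS:
        return
    left_source, right_source = CAMERA_PAIRS[key]

    # Left eye: an ordinary switch on the normal program bank.
    atem.set_program_input(left_source)

    # Right eye: retarget the downstream keyer's fill source and keep it
    # on air, so it is always present in the dirty feed.
    atem.set_dsk_fill(DSK_INDEX, right_source)
    atem.set_dsk_on_air(DSK_INDEX, True)

Presumably the real applet issued both commands together so that the two eyes changed on the same frame.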
For VTs, we planned to pre-encode them in anaglyphic 3D, and then play them out on both channels. By doing this, no further anaglyphic effects would be applied when the channels were combined further down the broadcast chain. Similarly, traditional 2D content from other stations would be output on both channels to negate this effect.
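One plausible way to do that pre-encoding is ffmpeg's stereo3d filter, driven from Python in the sketch below. The filenames and encoder settings are illustrative only; the sketch assumes each VT exists as separate left-eye and right-eye files, stacks them side by side and collapses them to a red/cyan anaglyph.

import subprocess

def encode_anaglyph_vt(left_path, right_path, out_path):
    """Pre-encode a VT to red/cyan anaglyph from separate left/right files."""
    subprocess.run([
        "ffmpeg",
        "-i", left_path,    # left-eye video
        "-i", right_path,   # right-eye video
        # Put the eyes side by side, then convert to an anaglyph image.
        "-filter_complex", "[0:v][1:v]hstack=inputs=2,stereo3d=sbsl:arcd[v]",
        "-map", "[v]",
        "-map", "0:a?",     # carry audio from the left-eye file, if any
        "-c:v", "libx264", "-crf", "18",
        "-c:a", "aac",
        out_path,
    ], check=True)

encode_anaglyph_vt("vt_left.mov", "vt_right.mov", "vt_anaglyph.mp4")

Because both channels then carry the identical anaglyph frame, the downstream combination leaves the picture unchanged, which is the effect described above.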
The streaming PC would be set up with a Decklink Duo. The two input channels would first be passed through Caspar to apply lower thirds and other graphics in 3D, then combined to form an anaglyphic 3D image and streamed using ffmpeg. Anaglyph was chosen as it was the easiest format to work with across a variety of hardware and devices: it required no special monitors or additional client-side processing to view. Cardboard 3D glasses would be sent to each participating station so they could view the show in all three dimensions.
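A rough sketch of that final combine-and-stream stage is below. Everything specific here is an assumption: the Decklink device names, capture settings, encoder options and stream URL are placeholders, and the Caspar graphics pass is left out.

import subprocess

STREAM_URL = "rtmp://example.invalid/live/fstv3d"  # hypothetical endpoint

subprocess.run([
    "ffmpeg",
    # Capture the two channels from the Decklink Duo (device names vary by driver).
    "-f", "decklink", "-i", "DeckLink Duo (1)",   # left eye (clean program feed)
    "-f", "decklink", "-i", "DeckLink Duo (2)",   # right eye (dirty feed, DSK over)
    # Stack the eyes side by side, then collapse to a red/cyan anaglyph.
    "-filter_complex", "[0:v][1:v]hstack=inputs=2,stereo3d=sbsl:arcd[v]",
    "-map", "[v]",
    "-map", "0:a",                                # programme audio from the left input
    "-c:v", "libx264", "-preset", "veryfast", "-b:v", "4M",
    "-c:a", "aac", "-b:a", "192k",
    "-f", "flv", STREAM_URL,
], check=True)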
The (Lack of) Result
In the end, the idea never really took off. We submitted our proposal for FreshersTV, and even did our bid interview in 3D over Skype, but NaSTA didn't seem to take our idea seriously. The concept was shelved until another production could find a use for it. Unfortunately, this never happened.
A 3D ident rendered by Peter does exist somewhere, depicting a rotating 3D version of the classic cube logo.