WHEAT:NEWS RADIO
January 2018
Vol 9, No 1
Got feedback or questions? Click my name below to send us an e-mail. You can also use the links at the top or bottom of the page to follow us on popular social networking sites, and the tabs will take you to our most often visited pages.
Photos: Above, Hubbard Phoenix, design and installation by RadioDNA; right, Rob Goldberg, Owner, RadioDNA.
Studio projects can go wrong in spectacular ways. More often though, broadcasters tell us that the biggest issue they face is a persistent, nagging feeling that something is missing.
Sometimes, it’s their keys or phone that have gone missing yet again in the chaos of a new studio buildout. But often, what’s missing are a few critical studio additions that were not even on the table five years ago.
RadioDNA’s Rob Goldberg walks us through four studio trends that every broadcaster building or adding onto a studio should be thinking about today. Goldberg and his company of engineers have designed, installed and managed studio projects for Entercom, Gabriel Media, Hubbard, Ingstad Media Family, Jackson Rancheria Radio, Leighton Broadcasting, Minnesota Wild Network and Results Broadcasting.
Light up the studio. Any modern studio is likely to have IP audio networking, and that means talent no longer monitor off-air. What they’re hearing in their headphones is a local mix, so visual feedback becomes more important than ever, according to Goldberg. For visual monitoring and other purposes, he sets up the GPIO in our WheatNet-IP audio network BLADEs to control Yellowtec’s Litt LED signaling trees. When the tree is lit red, it means the studio is hot and on the air. When the light is off, the console is in voice track mode. A green status with a slow blink might indicate that 30 seconds remain in a song and it’s time to prepare for live broadcast.
LED status indicators let talent know when there’s a caller on the line, when an EAS alert is about to happen, and when any station in the group is off-air for any reason. In the case of the Minnesota Wild sports studio, Goldberg set up LED indicators so that if remote talent dialed in on the ISDN line from the field, the producer in the studio would know immediately. Any standard automation system can talk to the WheatNet-IP audio network BLADEs, which control LED trees and indicators as an integrated system. Doors, clocks, even the coffee maker can be monitored with status indicators. “When the coffee’s ready, just have it light up a light,” says Goldberg.
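To give a feel for the kind of logic involved, here is a minimal sketch in Python of a state-to-LED mapping. The state names and the color/blink values are purely illustrative assumptions; an actual installation drives the Litt tree through BLADE GPIO and logic ports configured in the WheatNet-IP system, not through code like this.

# Hypothetical sketch of studio-state-to-LED mapping logic.
# State names and LED commands are illustrative only; a real installation
# would drive the LED tree through BLADE GPIO configured in WheatNet-IP.
from enum import Enum, auto

class StudioState(Enum):
    ON_AIR = auto()          # console feeding the air chain
    VOICE_TRACK = auto()     # console in voice-track mode
    SONG_ENDING = auto()     # automation reports under 30 seconds left in the song

def led_command(state: StudioState) -> dict:
    """Translate a studio state into an LED color/blink pattern."""
    if state is StudioState.ON_AIR:
        return {"color": "red", "blink": False}      # studio is hot
    if state is StudioState.VOICE_TRACK:
        return {"color": "off", "blink": False}      # light off = voice tracking
    if state is StudioState.SONG_ENDING:
        return {"color": "green", "blink": "slow"}   # prepare to go live
    return {"color": "off", "blink": False}

# Example: automation flags 30 seconds remaining in the current song
print(led_command(StudioState.SONG_ENDING))   # {'color': 'green', 'blink': 'slow'}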
Make it visual. With 5G unlimited wireless data just over the horizon, video content is about to happen in a big way. Goldberg suggests stations get in now by installing video cameras in the studio and slaving those cameras to microphones for their social media feeds, for example. “The easiest way to integrate that into the studio is through WheatNet-IP because you can literally ‘ACI control’ the cameras,” says Goldberg, referring to Wheatstone’s Automation Control Interface that is used to integrate standard camera automation applications into the network.
With this, you can automatically control camera switching based on whether a mic is on, the mic fader is up, and audio from the mic is coming across as meter data. Cameras can be slaved to mics based on all or just one or two of the criteria, and they can be panned to capture live video of a panel of guests when two or more mics are opened up.
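As a rough illustration of that decision logic, the Python sketch below picks a camera shot from mic state. The field names, threshold, and shot types are assumptions made for the example; in practice those conditions come from WheatNet-IP logic and meter data via the ACI, and the result is handed to the camera control application.

# Conceptual sketch of mic-to-camera switching. Field names and the
# -30 dB threshold are hypothetical; real conditions would come from
# the audio network's mic logic and meter data.
def pick_camera(mics, audio_threshold_db=-30.0):
    """Return which camera shot to take based on mic state.

    mics: list of dicts like {"id": 1, "on": True, "fader_up": True, "level_db": -18.0}
    """
    # A mic counts as active if it is on, its fader is up, and it shows real audio.
    active = [m["id"] for m in mics
              if m["on"] and m["fader_up"] and m["level_db"] > audio_threshold_db]

    if len(active) == 0:
        return {"shot": "wide"}                        # nothing live: default wide shot
    if len(active) == 1:
        return {"shot": "close", "camera": active[0]}  # one talker: take that camera
    return {"shot": "pan", "cameras": active}          # two or more mics open: pan the panel

# Example: host mic 1 and guest mic 2 active, mic 3 closed -> pan across the panel
mics = [{"id": 1, "on": True, "fader_up": True, "level_db": -12.0},
        {"id": 2, "on": True, "fader_up": True, "level_db": -20.0},
        {"id": 3, "on": False, "fader_up": False, "level_db": -60.0}]
print(pick_camera(mics))   # {'shot': 'pan', 'cameras': [1, 2]}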
Automating camera operation through the IP audio network relieves producers of yet one more thing they have to deal with during a busy show, and can be beneficial for editing packages after the show. “It all looks professionally done, but it’s automated,” explains Goldberg.
In much the same way, graphics or sponsorship logos can be made to appear on the station website or Facebook feed during, say, a call-in show or a spot.
Virtualize it. Goldberg is laying out virtual mixers set in touchscreens using our WheatNet-IP ScreenBuilder app and setting these up as producer positions in newsrooms. “In some cases, we don’t install mixers in the newsroom at all. We are in the process of setting up a bank of virtual mixers with 35 news positions at a Hubbard facility,” explains Goldberg. The benefits: reduced hardware expense and maintenance, an easier way to make changes as needed, and the ability to add mixing and other talent controls in smaller spaces.
Keep it simple. Today’s modern studio is all about simplifying things, too. What’s missing from most of the studios Goldberg designs these days: clunky box speakers, replaced by speakers recessed into the ceiling; automation PCs, replaced by virtualized server systems in the TOC; and extra cables for internet connectivity, replaced by WiFi.
Overall, at the end of the project, you should end up with less that does more.
The Making of a Great Studio
So begins the transformation of Entercom Phoenix, formerly CBS. This is the production studio for Cactus radio, which will serve as the temporary studio as KOOL-FM, KMLE-FM and KALV-FM studios get a few upgrades.
Entercom will replace decades’ worth of wiring and cabling, layer upon layer of it, to eventually make way for IP audio connectivity. When asked by an online observer if he could state the cost, systems integrator Chris Fonte from RadioDNA replied, “Cheaper than analog!”
Interesting Links
Download Our Free E-Book: Advancing AoIP for Broadcast
Putting together a new studio? Updating an existing studio?
We've put together this e-book with fresh info plus some of the articles, white papers, and news items we've authored for our website, diving into some of the cool stuff you can do with a modern AoIP network like Wheatstone's WheatNet-IP.
And it's FREE to download!
Redundancy at ATL Airport. Close Doesn’t Count.
Can your facility continue to operate even if a critical component fails? The parties responsible for the power systems at Hartsfield-Jackson Atlanta International Airport probably thought they had every scenario covered and that no single-point failure would take them out of business. Close, but no cigar. That became clear just before Christmas when a sudden power outage brought the busy airport to a standstill, grounding more than 1,000 flights and stranding thousands of passengers. There are some important lessons broadcasters can take away from the events that unfolded at ATL last month.
On the afternoon of December 17, 2017, for reasons yet unknown, a piece of high-voltage switchgear belonging to public utility Georgia Power failed in a spectacular way. At least it would have been spectacular had there been anyone there to see it. The equipment, located in an underground vault, developed an arc and caught fire. Power throughout the airport’s terminals, gates, and transportation facilities progressively failed as the fault spread.
Ordinarily, a power outage at the airport is no big deal. The FAA control tower, navigational aids, and certain other systems have their own generators or backup power. And the airport itself maintains very large diesel generator plants, designed to automatically pick up the load should there be a failure of the off-site utility power. Having spent a lot of time around that airport, I’ve seen these plants and remarked to myself on the apparent safety they represented.
But as passengers waited in the terminals, nothing seemed to happen. The lights didn’t come back on, but the PA system did remain operational. Unfortunately, it offered no information other than the same recorded, generic messages over and over. Airport and airline employees, like the passengers who sought them out, seemed to be in the dark.
Why weren’t the diesels, designed for just such a contingency, picking up the load and getting the airport back in business? The reason is simple, but maddening. When the airport was originally laid out, power entered the central complex via high voltage feeders running through an underground tunnel. And later, when the large auxiliary power plants were built, the lines connecting them to the complex were also routed through that same tunnel. And both sets of lines ran adjacent to and connected with the vault containing Georgia Power’s switchgear. The lines were within a few feet of each other and within a scant dozen feet from the failed equipment, in a very confined space.
Darkness falls across the area shortly after 5PM this time of year, and as even the sunlight from the windows faded, passengers found themselves wandering through the terminals in pitch darkness as the battery-powered emergency lights ran down. Escalators, which normally just become stairs when the power fails, were closed off by airport police for fear of people falling in the darkness. The airport’s dedicated underground train, which transports passengers between terminals, simply stopped and had to be evacuated. People upstairs were stuck there.
The fire and arc blast from one failed piece of equipment had damaged the primary feed lines AND the lines from the redundant power system. One big bang disabled everything. Of course, emergency crews responding to the situation had their own problems to contend with. Airport fire crews knew that before even thinking about restoring power, the fire had to be extinguished. This proved difficult, as the fire filled the entire tunnel with thick, toxic fumes and smoke. Some of it even seeped into parts of Terminal D, frightening passengers and aggravating the conditions of those with breathing difficulties. Firefighters battled through the tunnel wearing respirators and, within hours, had managed to put out the flames. Until they were sure things were cooled down and free of toxic gases, Georgia Power crews could not even access the failed equipment or begin to assess the damage.
The airport, known to the airlines and FAA as ATL, is the busiest airport in the world. As the headquarters and largest hub for Delta Air Lines as well as a hub for several other carriers, it’s a key link in the country’s air travel industry. As an old saying goes, when you die, whether you’re going to heaven or hell, you’ll have to change planes in Atlanta. With the airport’s passenger-handling facilities literally dead and dark, there was little choice but to sever that link. The airport closed to traffic. Outgoing flights were put on “ground stop” by the FAA, meaning they would not be allowed to take off. Neither could they easily return to a gate. So they parked and sat there, full of irate, sweaty, hungry, thirsty passengers. Incoming flights suffered the same fate if they were not diverted to other airports in time. Soon, the usual steady roar of turbofan engines became an ominous silence as the runways handled no planes and the ramps and taxiways became crowded with parked aircraft.
This wasn’t just any Sunday afternoon. At the peak of holiday travel season, the airport was absolutely packed with passengers. And not just typical passengers; according to one airline official, the holidays mean that the very young as well as the very old are among the traveling public in much larger numbers than usual. And with no air handling systems and precious little food or drink available, these folks suffered more than their share of inconvenience, if not downright danger, particularly in areas where smoke was a problem.
A full investigation will take some time, but I think we can now see the lesson in this: in order for a system to be truly redundant, all of the parts of it that could suffer any credible failure should be both functionally and physically separate. The airport had multiple diesel backup generator plants, two power feeds, two sets of switchgear, and two control systems. But somehow, either in the original design or during some subsequent upgrade, nobody looked at the one-line diagram and said, “What happens if this piece faults in a way that makes it blow up and catch fire?” That fire caused what’s called a single-point failure — where one fault in one place takes out the whole shooting match.
So as you periodically review and test your station or facility’s engineered redundancy (you DO do that, right?) remember to take this sort of unexpected interaction into account. Because in this case, someone missed a big one, and close only counts in the game of horseshoes.
Scott Johnson is the systems engineer and webmaster for Wheatstone.
The Accidental Multipath Tamer
By Jeff Keith, CPBE, NCE Wheatstone Senior Product Development Engineer
Some of the greatest inventions were created by accident. Penicillin was discovered because Alexander Fleming neglected to clean up his workbench before going on holiday and returned to find the first antibiotic growing in a dirty Petri dish.
Our multipath limiter also was created by accident. Before it became one of our most popular audio processing features for reducing the adverse listening effects of multipath, it was actually designed to even out mono loudness.
The multipath controller algorithm began life at WMJI in Cleveland, OH, in the 1990s, while I was the CE there. The station had great coverage as a grandfathered Class B (16kW at 1128 feet). It was an oldies station playing music spanning several decades, and we'd noticed some drastic differences in loudness on mono radios. I had designed a stereo enhancer in the '80s, and I knew how increased stereo separation affected mono loudness, so building a processing device for WMJI's air chain to even out mono loudness (the opposite of stereo enhancement) was a trivial design task. I even gave it a name: the MCC-1, or Mono Compatibility Controller.
I placed the unit in the air chain, and immediately station staff, especially those in the sales department who drove the market all day, began to ask me what I'd done to make the station 'less scratchy.' I drove the same route to and from work every day, and I, too, thought I'd noticed what seemed like reduced multipath in areas where it typically occurred.
Not able to think of anything else I'd done to the air chain or transmission system, I put the MCC-1 in and out of bypass over the next couple of days. When it was in bypass, I noticed the multipath was back. When it wasn’t in bypass, multipath was reduced. This effect was not what I'd designed the MCC-1 for, and its apparent effect on multipath was a complete surprise. Repeated experiments over the next six months revealed a direct and repeatable correlation between the MCC-1 in use and the reduction in multipath.
Fast forward to today. Decades later, in countless markets where our FM processors and the Multipath Limiter are in use, we're seeing the same kind of correlation I saw at WMJI.
One reason is that many stereo receivers aggressively blend to mono during multipath, creating large fluctuations in volume as the stereo sound field collapses. The wider the stereo image, the more obvious the blending, and that's why stereo enhancement earned the reputation of creating multipath. But stereo enhancement doesn't really create multipath. It just makes it seem worse because blending then has to squash down a much bigger stereo sound field.
What our Multipath Limiter does is reduce the magnitude of volume fluctuations due to multipath-induced blending by managing a program content's L+R/L-R ratio under very specifically controlled conditions. By intelligently allowing only enough stereo information to fool the ear into believing it's a full stereo signal, the audibility of blending is reduced. The psychoacoustic result, then, is that multipath has been reduced and perceived coverage improved, cautiously noting that perceived stereo coverage is only loosely related to the station's RF field strength.
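To make the L+R/L-R idea concrete, here is a deliberately simplified Python sketch that scales back the L-R (side) signal whenever it grows too large relative to L+R (mid). This is not Wheatstone's Multipath Limiter algorithm, which works under much more carefully controlled conditions; it only shows, under simple block-RMS assumptions, what managing the L+R/L-R ratio means at the signal level.

# Simplified mid/side ratio limiting on one block of audio samples.
# NOT the actual Multipath Limiter algorithm; illustration only.
import numpy as np

def limit_side_ratio(left, right, max_ratio=0.5):
    """Keep the block RMS of L-R no larger than max_ratio times the block RMS of L+R.

    left, right: numpy float arrays holding one block of samples
    """
    mid = left + right            # L+R (mono sum)
    side = left - right           # L-R (stereo difference)

    mid_rms = np.sqrt(np.mean(mid ** 2)) + 1e-12
    side_rms = np.sqrt(np.mean(side ** 2))

    # If the stereo difference is too big relative to the mono sum,
    # scale it back so the ratio never exceeds max_ratio.
    ratio = side_rms / mid_rms
    if ratio > max_ratio:
        side *= max_ratio / ratio

    # Recombine to left/right: L = (mid + side) / 2, R = (mid - side) / 2
    return (mid + side) / 2.0, (mid - side) / 2.0

Because less L-R energy reaches the receiver, a blend-to-mono event has a much smaller sound field to collapse, which is why the fluctuations become less audible.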
We also discovered, quite by accident during field research for another development project (FM/HD diversity delay correction), that newer DSP-based receivers seem to be more prone to multipath effects even though they have mechanisms that supposedly minimize it. These radios lack the modulation headroom of older technology, and they definitely don't behave well with non-standard stereo multiplex signals (like single sideband) or a dirty MPX spectrum (think composite clipping). In fact, the latter is precisely why our FM processors have both a composite clipper and a look-ahead MPX limiter; the user can choose which algorithm works best for their scenario. The bottom line is this: clean up the MPX spectrum, intelligently manage the L+R/L-R ratio, and prepare to be surprised at how much perceived stereo coverage (not signal strength!) goes up.
All of this, and more, is detailed in my NAB white paper, which you can download here.
Your IP Question Answered
Q: I see that your (WheatNet-IP audio network) system doesn’t provide for specifying low latency/high latency streams. Why is that?
A: A great question! All AoIP systems, regardless of manufacturer, have to deal with packet overhead. Because we are all using standard protocols, there is extra data that must accompany each "packet" of audio, allowing it to adhere to these standards. This is addressing, protocol, and timing information that all network switches depend on to route the packets to the right places. Since a standard IP packet can hold up to 1500 bytes of data, to stream audio on the network efficiently we all bundle or group a number of audio samples together in each packet, thus minimizing the percentage of data used for overhead.
You can see that the more audio samples you place into each packet, the fewer packets you have to send and the lower the overhead becomes. In some systems, this is done because of limited bandwidth and processor resources, and the result is that those streams can have up to 100ms of latency. Why does latency go up with larger packets? It's simple: it takes more time to assemble them because the audio data is being created at a set sample rate. You have to wait for enough audio samples to fill the packet. That wait means latency goes up.
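A quick back-of-the-envelope calculation in Python shows the tradeoff. The 48 kHz stereo, 24-bit sample format, the roughly 54 bytes of combined Ethernet/IP/UDP/RTP headers, and the frame counts are illustrative assumptions, not a description of any particular AoIP system's actual framing.

# Rough samples-per-packet vs. overhead and packet-fill latency.
# Header size and sample format are illustrative assumptions.
SAMPLE_RATE = 48_000          # samples per second, per channel
BYTES_PER_FRAME = 2 * 3       # stereo x 24-bit samples
HEADER_BYTES = 54             # approximate combined per-packet header overhead

def packet_stats(frames_per_packet):
    payload = frames_per_packet * BYTES_PER_FRAME
    overhead_pct = 100.0 * HEADER_BYTES / (HEADER_BYTES + payload)
    fill_time_ms = 1000.0 * frames_per_packet / SAMPLE_RATE   # time to collect the samples
    return payload, overhead_pct, fill_time_ms

for n in (8, 64, 240):
    payload, overhead, latency = packet_stats(n)
    print(f"{n:4d} frames/packet: {payload:5d} B payload, "
          f"{overhead:4.1f}% overhead, {latency:5.2f} ms to fill")

Even with these rough numbers the pattern is clear: tiny packets waste roughly half their bytes on headers but are ready to send in a fraction of a millisecond, while big packets are efficient on the wire yet take several milliseconds just to fill.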
The WheatNet-IP audio network does it differently. With gigabit links, we have enough bandwidth to keep every packet small and every stream low latency. You're not forced into tradeoff choices, and there’s no need to spend any effort specifying packet depth per stream.
Wheatstone's Scott Johnson puts it like this:
Say you have 100 pages to mail. You could put each one in its own envelope, but because the envelope is heavier than one page, you’ve more than doubled the weight. However, you can send each page as quickly as it comes off the printer.
Now if you put all 100 pages together in one big envelope, you’ll have less overhead (one envelope vs. 100), and your package weighs only slightly more than the pages themselves. But if you can only mail stuff in big envelopes, 100 pages at a time, you’ve got to wait for 100 pages to be printed before you can seal and send the envelope.
Interview with Rob Goldberg @ Wheatstone Booth, NAB 2017
At NAB 2017, Scott Fybush talks with Rob Goldberg from RadioDNA about how he's making the future of radio happen today and every day.