Your guide to radio network maintenance

Whether it’s a single base-station system or a nationwide emergency network, two-way radio networks are more robust than ever. However, that doesn’t mean maintenance can be ignored. Vaughan O’Grady discovers what can go wrong

When it comes to two-way radio networks there’s no shortage of useful maintenance advice, including authoritative documents on radio site engineering from ETSI and industry grouping the Federation of Communications Services (FCS).

FCS 1331 in particular offers comprehensive guidelines on all aspects of site installation and maintenance. Tim Cull, head of business radio for FCS, points out: “Any sensible potential supplier and sensible customer should sit down and take a realistic professional view on what is needed and what is not needed. For a small taxi company you don’t want the full power of FCS 1331 being piled on top; it could easily quadruple the price of the solution.”

So how much maintenance is really necessary for two-way networks? After all, as Samuel Hunt, director of two-way radio services provider Maxxwave, says: “A well-installed network does not have huge maintenance issues, and modern kit is reliable.”

Maintenance regularity
High standards of site installation don’t come cheap, but thanks to the frequency bands two-way radio networks operate in and the higher transmit power of their user equipment, “site count goes down dramatically. And if you look at your investment – the quantity of equipment that you’ve got to maintain – it’s a lot less for land mobile than it would be for the equivalent cellular,” says Barend Gildenhuys, technical director at Simoco Group, which delivers critical communication solutions.

But Geoff Varrall, executive director of RTT, which provides RF consultancy support to operators and vendors, suggests “the fact that you’re covering a big area from one big site does make you quite vulnerable to hardware failure on that site. There’s less sites to visit but [unlike cellular] there’s no handover – so if the site goes down then you go down.”

Which is probably why many larger networks invest in redundancy. It isn’t cheap but if you’re running an emergency network or one with mission-critical needs then that extra investment is easy to justify. “When failures occur you would fall back on a standby controller or control would go to a second site, or you’ll activate a second IP link for your connectivity. You have to attend to the failure – but your network is by no means on its knees,” explains Gildenhuys.

“However, if you’ve got two base stations or two switches next to each other, and flooding has just taken out your equipment room, it’s going to take out your main and your standby,” he adds. The same could go for lightning. “We do redundancy on geographically separated sites. Your main controller or your main switch is at one site and your alternative is at another site; you’re not tied to a certain location.”
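
None of this failover machinery needs to be exotic. As a minimal sketch (with hypothetical controller addresses and check intervals, not any vendor’s protocol), the core of main/standby arbitration is a periodic health check; real systems layer heartbeat protocols and alarm escalation on top:

```python
import socket
import time

# Hypothetical addresses for illustration: a main and a geographically
# separated standby controller, each exposing a TCP health-check port.
MAIN = ("controller-a.example.net", 9000)
STANDBY = ("controller-b.example.net", 9000)

def is_alive(host_port, timeout=2.0):
    """Return True if the controller accepts a TCP connection."""
    try:
        with socket.create_connection(host_port, timeout=timeout):
            return True
    except OSError:
        return False

def select_controller():
    """Prefer the main controller; fall back to the standby if it is down."""
    if is_alive(MAIN):
        return MAIN
    if is_alive(STANDBY):
        return STANDBY
    raise RuntimeError("Both controllers unreachable: raise an alarm")

while True:
    active = select_controller()
    print(f"Routing control traffic via {active[0]}")
    time.sleep(30)  # re-check every 30 seconds
```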

Referring specifically to DMR Tier III networks, Phil Whomes, marketing manager at Sepura, says: “Another way to improve fault tolerance is to distribute the control functions across the system hierarchy, for example giving a base station site the ability to process local calls even if the backhaul connection to the central switch and management has failed.

“Backhaul is a critical component in a multi-site DMR system. Backhaul networks are often leased from specialist operators, and it is essential that the SLAs offered by the backhaul provider are appropriate for the user’s business needs,” he adds.
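
What an SLA’s headline availability figure actually permits is easy to work out. The short calculation below (illustrative figures, not drawn from any particular provider’s terms) converts availability percentages into allowed downtime per year:

```python
# Illustrative arithmetic: what an availability figure in a backhaul SLA
# allows in downtime per year (365.25 days).
HOURS_PER_YEAR = 365.25 * 24  # 8766 hours

for availability in (0.99, 0.999, 0.9999):
    downtime_h = HOURS_PER_YEAR * (1 - availability)
    print(f"{availability:.2%} availability -> "
          f"{downtime_h:.1f} hours downtime/year")

# 99.00% availability -> 87.7 hours downtime/year
# 99.90% availability -> 8.8 hours downtime/year
# 99.99% availability -> 0.9 hours downtime/year
```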

But even if you have redundancy you’re still going to need maintenance. “You’re welcome to take on as much of the maintenance as you feel you can adequately cover given your core competencies, but where your core competencies don’t reach enter into a maintenance agreement,” advises Gildenhuys.

What to check
What that maintenance agreement involves will depend not only on the needs of the network but also on its function and how much the owner is willing to pay. For instance, solution provider Zycomm mainly deals with small, non-mission-critical systems. These may be very simple, with no infrastructure, or slightly larger, with one or more repeaters.

“Maintenance is offered at different levels but a typical contract would cover the infrastructure plus portable and mobile units. An engineer would attend the customer’s site and check the base station infrastructure using test equipment. The antenna and cabling would be visually checked as well as being tested,” explains Paul Greenhough, technical training and project manager with Zycomm. “Maintenance contracts can also include a battery swap-out, usually at 18 months when batteries may start to degrade and lose capacity.”

Preventative maintenance and regular inspections are also important. Gildenhuys says: “A lot of our maintenance activity is going to the site and checking that equipment performs like it did before. This means firing up your base stations, checking transmit power, receiver sensitivity, whether your deviation is still in spec, and frequency errors. We sweep antennas and measure return loss and so forth.” In addition, maintaining backup power (if your system is not on the grid) could involve diesel generator and battery checks.
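
The return-loss measurement Gildenhuys mentions comes from comparing forward and reflected power at the antenna. The standard arithmetic, sketched here with made-up readings rather than any vendor’s tooling, looks like this:

```python
import math

def return_loss_db(p_forward_w, p_reflected_w):
    """Return loss in dB from forward and reflected power readings."""
    return 10 * math.log10(p_forward_w / p_reflected_w)

def vswr(p_forward_w, p_reflected_w):
    """Voltage standing wave ratio from the same two readings."""
    gamma = math.sqrt(p_reflected_w / p_forward_w)  # reflection coefficient
    return (1 + gamma) / (1 - gamma)

# Example: 25 W forward, 0.25 W reflected (values are illustrative)
fwd, refl = 25.0, 0.25
print(f"Return loss: {return_loss_db(fwd, refl):.1f} dB")  # 20.0 dB
print(f"VSWR: {vswr(fwd, refl):.2f}")                      # 1.22
```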

Maintenance needs don’t necessarily change a lot as systems get larger. However, “if a DMR base station is installed on a busy city centre site shared with base stations from other users there might be a need to do site checks more regularly,” says Whomes. “The users of a shared site can change, and if another user’s system is installed alongside yours it could be beneficial to check that the additions haven’t increased the site noise floor.”

“With the prevalence of more shared sites, more equipment, and more frequency bands on the same site we typically now upgrade coaxial cable to double screened cable or even cable with a solid outer conductor to reduce interference and intermodulation,” Gildenhuys adds.
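
Intermodulation arises when strong carriers mix in corroded joints or other non-linearities, and the third-order products (2f1 - f2) are the troublesome ones because they land close to the original channels. A minimal sketch of that arithmetic, using invented frequencies:

```python
from itertools import permutations

def third_order_products(freqs_mhz):
    """Two-signal third-order intermodulation products (2*f1 - f2).

    These are the mixes most likely to land back in-band on a crowded
    shared site; the frequencies used below are purely illustrative.
    """
    products = set()
    for f1, f2 in permutations(freqs_mhz, 2):
        products.add(round(2 * f1 - f2, 4))
    return sorted(p for p in products if p > 0)

# Two hypothetical UHF carriers on a shared site
print(third_order_products([450.0, 450.5]))  # [449.5, 451.0]
```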

When it comes to larger – and specifically mission-critical – networks there will be “a need for remote monitoring and customer support, and first-line maintenance with spares held close to the customer,” says Whomes. “Requests for integration with customers’ existing network operation centres or other monitoring systems are becoming increasingly common, particularly for DMR Tier III solutions in business-critical or mission-critical applications such as transport or utilities.”

Network dangers
Weather is often cited as a threat to network integrity. “RF hardware still fails, power supplies fail. They hate heat. And they hate getting hot, then cold, then hot again – they don’t like heat cycling. Lightning too. You can get all these expensive circuit breakers that are supposed to be robust against lightning, but if you get a direct hit it’s amazing what gets taken out,” Varrall says.

Simoco’s Gildenhuys also highlights “water ingress into cables or ultraviolet radiation causing cable insulation to come off, which then leads to corrosion”. Companies like his also deal with unpredictable physical damage such as “a vehicle that collides with and damages a distributed leaky feeder inside a tunnel”. And of course there’s always good old human error.

And finally, don’t forget theft, says Varrall. “If you’re relying on legacy copper [backhaul], which a lot of networks will do, a lot of it’s going to disappear.”

Modern maintenance
Two-way radio is transitioning from analogue to digital. Will that change maintenance requirements? Not necessarily, says Maxxwave’s Hunt. “All of the UK’s large key networks are still the older generation of equipment [either MPT1327 or TETRA] and have not transitioned to DMR. Some – such as ourselves – do not plan to transition to DMR because analogue is still perfectly valid.” (Maxxwave’s portfolio includes Ambitalk, the UK’s largest PAMR radio communications network).

However, he concedes that analogue needs twice as much spectrum, which can be a problem in UHF. That said, “modern analogue equipment is as maintenance-free as digital equipment, since it is ultimately a DSP [digital signal processing]-driven radio, so many of the old ‘analogue’ settings don’t drift anymore.

“What is more of a consideration is the age of equipment. Any network over 10 years old starts to need more intensive maintenance because of its original design and because of the age of the equipment.”

These days you don’t necessarily need to be on site regularly, thanks to remote monitoring capabilities. “Any network must be fully monitored, with both environmental controls around the site (temperature, power, intruders, vandalism, etc.) and technical parameters,” Hunt says. “The only true way to monitor a transmitter site is with ‘over the air’ monitoring. This will reveal potential faults that can’t be metered, such as antennas being screened by tower cranes across the road or knocked by riggers on the mast.”
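
A crude version of the ‘over the air’ check Hunt describes can be expressed in a few lines: record a baseline signal level for each site and raise an alarm on a sustained drop. The site names, baselines and margin below are assumptions for illustration:

```python
# Sketch of an off-air check: a remote receiver compares the signal level
# heard from each site against a recorded baseline. A sustained drop can
# reveal a screened or damaged antenna that on-site meters would miss.
BASELINE_DBM = {"site_a": -72.0, "site_b": -80.0}
ALARM_MARGIN_DB = 6.0  # flag if we lose more than 6 dB against baseline

def check_off_air(site, measured_dbm):
    drop = BASELINE_DBM[site] - measured_dbm
    if drop > ALARM_MARGIN_DB:
        return f"ALARM {site}: {drop:.1f} dB below baseline"
    return f"{site}: OK ({measured_dbm:.1f} dBm)"

print(check_off_air("site_a", -73.5))  # within margin
print(check_off_air("site_b", -89.0))  # 9 dB down: raise an alarm
```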

Monitoring of network loading is also important, giving early warning of congestion as the network gains users. Whatever the cause of an alarm, though, it shouldn’t be ignored: sending text messages to engineers’ phones, or alerts to the screens in a network operations centre (NOC), can ensure a quick response.
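
As a hedged example of that alert fan-out, the sketch below sends an alarm via a standard SMTP relay; many mobile carriers offer email-to-SMS gateways, and all the addresses here are placeholders rather than a recommendation of any particular service:

```python
import smtplib
from email.message import EmailMessage

ENGINEERS = ["on-call@example.net"]  # e.g. an email-to-SMS gateway address
SMTP_RELAY = "mail.example.net"      # placeholder relay host

def raise_alarm(subject, detail):
    """Fan an alarm out to the on-call list via email/SMS gateway."""
    msg = EmailMessage()
    msg["From"] = "noc-alarms@example.net"
    msg["To"] = ", ".join(ENGINEERS)
    msg["Subject"] = subject
    msg.set_content(detail)
    with smtplib.SMTP(SMTP_RELAY) as smtp:
        smtp.send_message(msg)

raise_alarm("Site B low RF output",
            "Forward power 4 W against a 25 W baseline; please investigate.")
```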

Taking care of TETRA
TETRA often means very large mission-critical networks. Graeme Casey, services business manager – Europe & Africa with Motorola Solutions, describes the maintenance and support service requirements thus: “Physical inspections are important but networks are increasingly dependent on software. If you’re running a mission-critical infrastructure then that needs to have 24/7 access to experts; you can’t afford to wait until the next working day for help.”

It’s also important to consider on-site or local spares holdings to cover unexpected hardware failures, supported by swift repair or advance replacement of the spares stock. In addition, “manage change carefully. It won’t be possible to keep an environment completely static for the lifetime of the network. Be sure you understand the inter-dependencies of the network and the potential risks a change (planned or unplanned) may hold,” says Casey.

As for preventative maintenance and regular inspections, he notes that “a physical check is often the only thing that will identify the root cause of poor RF performance”. However, he adds, “the most efficient way of handling this is through proactive network monitoring and management. Ensuring you can track and act upon events being generated by the remote equipment is vital to prevent issues escalating into a potentially service-impacting outage. Appropriately set thresholds for events really help here, as too many events mean a flood of inappropriate data. Too few and you may miss a critical situation developing.

“Preventative maintenance on infrastructure should include checks on central and remote sites of the RF performance of repeaters, controllers, routers/software, switches, servers and power supplies,” he continues. “HVAC (heating, ventilation and air conditioning) issues and fluctuating power supplies are often the cause of premature hardware failure, so UPS performance is also important. Control room performance is critical to overall user experience and effectiveness, so ensure peripherals such as microphones are well-serviced and regularly cleaned, as they get very dirty!”
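
Returning to Casey’s point about event thresholds, a simple hold-off window is one way to stop a repeating fault flooding the NOC while still escalating its first occurrence. The window length here is an assumed example value:

```python
import time
from collections import defaultdict

HOLD_OFF_S = 300  # suppress duplicates of an event for five minutes
_last_seen = defaultdict(float)

def should_escalate(event_key, now=None):
    """Return True only for the first occurrence within the window."""
    now = time.time() if now is None else now
    if now - _last_seen[event_key] >= HOLD_OFF_S:
        _last_seen[event_key] = now
        return True
    return False

# The same VSWR alarm repeating every second escalates only once
for t in range(10):
    if should_escalate("site_a/vswr_high", now=1000.0 + t):
        print("escalate at t =", t)  # prints only t = 0
```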

As for ICT, “Appropriate management and maintenance should be performed on the central switch/control servers. Best practice ICT measures should be in place to control backups, storage, disk management and so on.

“The well-designed, modern TETRA network has redundancy built in, along with appropriate disaster recovery measures enabled.” However, he adds, the network is controlled by standard ICT infrastructures – servers, software switches, routers, firewalls – so effective cybersecurity measures such as physically and virtually secure hardware, antivirus, operating system and application security patching are vital. “Most important of all is having staff fully briefed on the risks, and clear operating procedures in place to manage the availability of the network in all situations,” he concludes.

Remote control
Can remote monitoring do even more? Simoco has integrated a number of low-power sensors into its base stations to measure things like forward RF power, reflected power at the antenna, and the power supply coming off the grid. “There’s certainly scope for the support infrastructure around that to take sensors if you want to monitor ambient temperature to make sure your air-conditioning works, for example, or you want to monitor your circuit breakers to see how your power supply is doing,” remarks Gildenhuys.
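
A telemetry record for that kind of built-in sensing might look like the sketch below. The field names and limits are assumptions for illustration, not Simoco’s actual interface:

```python
from dataclasses import dataclass

@dataclass
class BaseStationTelemetry:
    forward_power_w: float     # transmitter output
    reflected_power_w: float   # high values suggest antenna/feeder faults
    supply_voltage_v: float    # rectifier/battery bus voltage
    ambient_temp_c: float      # room temperature, a proxy for HVAC health

    def warnings(self):
        w = []
        if self.reflected_power_w > 0.1 * self.forward_power_w:
            w.append("high reflected power: check antenna and feeder")
        if not 44.0 <= self.supply_voltage_v <= 58.0:
            w.append("supply voltage out of range")
        if self.ambient_temp_c > 35.0:
            w.append("room over-temperature: check air conditioning")
        return w

sample = BaseStationTelemetry(25.0, 0.4, 53.5, 38.0)
print(sample.warnings())
# ['room over-temperature: check air conditioning']
```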

And remote monitoring will reduce the number of site visits, but, says Hunt, “it doesn’t substitute for at least annual inspections that will find things like mice infestations on site, rusting critical mast sections due to manufacturing defects, and so on.”

Then there’s remote maintenance – of software in particular. Gildenhuys says: “Our DMR infrastructure is an all-IP soft switch, which basically means we’ve got an IP pipe down to every site that under normal circumstances is used to distribute the radio traffic, but if you’re in the maintenance cycle we push all our updates and all our firmware upgrades down that pipe... for our software distribution we don’t even visit the site anymore.”
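
In spirit, that software distribution is an authenticated file push over the same IP link that carries the radio traffic. The sketch below is deliberately generic (hypothetical site addresses and upload endpoint, no real vendor API); production systems add authentication, scheduling outside busy hours and a staged rollback path:

```python
import hashlib
import urllib.request

SITES = ["10.0.1.10", "10.0.2.10"]       # illustrative site addresses
IMAGE = "basestation-fw-2.4.1.bin"       # hypothetical firmware image

def push_firmware(site_ip, image_path):
    """Upload a firmware image and let the site verify its integrity."""
    with open(image_path, "rb") as f:
        blob = f.read()
    digest = hashlib.sha256(blob).hexdigest()
    req = urllib.request.Request(
        f"http://{site_ip}/firmware?sha256={digest}",
        data=blob, method="PUT")
    with urllib.request.urlopen(req, timeout=60) as resp:
        return resp.status == 200

for ip in SITES:
    print(ip, "updated" if push_firmware(ip, IMAGE) else "failed")
```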

“We may already have reached a situation where the reliability of a radio system is determined more by the software it uses than the hardware itself. The quality of the software and the way it is supported are extremely important,” adds Sepura’s Whomes.

Software or hardware?
He points out that “the increasing dependence on complex software also raises some interesting challenges when trying to determine the predicted availability of a system design, as the old calculation methods based on MTBF [mean time between failures] can’t be applied to software.

“The general hardware is more reliable than before and more dependent on the software, so I don’t think it is as necessary to conduct preventative maintenance schedules on sites as in the past, but to monitor with off-air tools.”
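
The classic hardware calculation Whomes alludes to is availability = MTBF / (MTBF + MTTR), which assumes a roughly constant physical failure rate; software faults don’t behave that way. With illustrative figures:

```python
# Classic hardware availability: availability = MTBF / (MTBF + MTTR).
# The figures below are illustrative, not from any vendor datasheet.
def availability(mtbf_h, mttr_h):
    return mtbf_h / (mtbf_h + mttr_h)

# A base station with a 50,000-hour MTBF and a 4-hour mean time to repair
a = availability(50_000, 4)
print(f"{a:.5%}")  # 99.99200%

# Software failures are not driven by physical wear-out, so no comparable
# constant failure rate exists and this formula does not carry across.
```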

Gildenhuys notes: “LTE is going to bring a lot more of a software focus. Your applications are going to be hosted on some sort of platform. Those platforms evolve and get patches and security upgrades. When you have your applications running on top of that operating system there’s going to be a level of maintenance activity to confirm that your applications grow with the platform that’s hosting them.” By extension, hardware – notably two-way user devices – may need replacing more often as applications require better performance or greater speed.

Much of this can be sidestepped by buying into an established network. But if you wish to own your own, most problems – from rust, water ingress and lightning to software upgrades, interference reduction and power management – are manageable with careful planning. If downtime is totally unacceptable, don’t forget to build in a lot of redundancy.

It helps if you start with a reputable installer, if you and your maintenance service are proactive, and if you decide what service level agreement is appropriate; a campus network will not need the uptime guarantees of an emergency network. Most of all, ensure that as far as possible reactive maintenance isn’t necessary. The FCS’s Cull puts it best: “The tip on maintenance is you don’t want to do much of it because it’s expensive. In a mission-critical environment it also implies downtime. You don’t want that either.”