Recently, Ericsson, Qualcomm, and Verizon teamed with Federated Wireless in a demonstration of LTE Advanced Carrier Aggregation in the Citizens Broadband Radio Service (CBRS) spectrum. The CBRS band is shared spectrum: 150 MHz in the 3.5 GHz range (3550-3700 MHz), designated LTE band 48.

Nicola Palmer, Wireless Chief Network Officer for Verizon said, “The use of CBRS spectrum greatly advances our work in emerging spectrum bands.  Verizon and our partners are leading the way in creating an ecosystem around the use of CBRS spectrum which will lead to greater capacity and speed for our customers.”

Carrier Aggregation is a key feature of LTE-Advanced that allows mobile network operators to combine a number of separate LTE carriers into a single logical connection. Aggregating channels increases peak user data rates and improves the overall capacity of the network. It also lets operators make use of fragmented spectrum allocations and, in principle, can be applied across carriers hosted by different cells on different frequency bands. The net result is faster speeds, achieved by essentially providing multiple parallel data pipes into your phone.
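As a rough back-of-envelope illustration of how that adds up (the carrier bandwidths, MIMO order, and spectral efficiency below are assumptions for the sketch, not the configuration used in the actual demo):

```python
# Back-of-envelope: peak downlink rate from aggregating LTE component carriers.
# Bandwidths, MIMO order, and per-layer spectral efficiency are illustrative
# assumptions, not the configuration used in the Verizon/Ericsson demo.

def peak_rate_mbps(bandwidth_mhz, bits_per_hz_per_layer=3.75, mimo_layers=2):
    """Approximate peak rate of one LTE component carrier in Mbps.

    ~3.75 bits/s/Hz per spatial layer roughly matches the familiar
    150 Mbps peak of a 20 MHz, 2x2 MIMO, 64-QAM carrier.
    """
    return bandwidth_mhz * bits_per_hz_per_layer * mimo_layers

# Three hypothetical 20 MHz component carriers: two licensed carriers plus
# one CBRS (band 48) carrier.
carriers_mhz = [20, 20, 20]
aggregate = sum(peak_rate_mbps(bw) for bw in carriers_mhz)
print(f"Single carrier: ~{peak_rate_mbps(20):.0f} Mbps, "
      f"aggregated: ~{aggregate:.0f} Mbps")
```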

The CBRS angle is interesting, as this is essentially free spectrum, which is very attractive to carriers. Using it as part of the aggregated carrier set would provide additional resources which, in combination with dedicated network bands, add capacity, improve reliability, and deliver the all-important ultra-high-speed service (needed for marketing spin as well as actual subscriber use).

However, it may be worth taking a practical look at the impact of Carrier Aggregation, whatever the source of the signals. One key concern is battery utilization and average battery life, so let's look at the recent history of, and improvements in, battery technology.

The most recent innovation is probably lithium-ion polymer (Li-Poly) battery technology, which is lighter and smaller for any given energy storage level than previous technologies. Overall, long-term battery capacity has increased 5-8% per year, but that has been largely driven by the introduction of new battery chemistries such as Li-Poly rather than by continuous improvement in existing technologies.

Typically, once a new technology is introduced and matures, it improves at a much slower, roughly linear pace. We appear to be in the mature, linear stage of lithium-ion polymer technology, so even 5% ongoing improvement may be quite optimistic.

There are certainly new battery technologies under consideration, but none is likely to provide an immediate uptick in performance, so we are probably stuck with small incremental improvements to existing Li-Poly-based technology.

On the consumption side, the main power drivers in a phone are the display, the radio, and the processor. With data traffic growing around 50% per year, and video making up a major part of that figure, it is not hard to deduce that both display and CPU loads will increase.
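Put the two growth rates side by side and the squeeze becomes obvious. A purely illustrative compounding exercise, using the ~50% data growth and ~5% battery improvement figures above:

```python
# Purely illustrative arithmetic using the growth rates cited above:
# ~50% per year data growth vs ~5% per year battery capacity improvement.

years = 5
data_growth, battery_growth = 1.50, 1.05

data = data_growth ** years        # ~7.6x more data traffic after 5 years
battery = battery_growth ** years  # ~1.28x more battery capacity

print(f"After {years} years: data x{data:.1f}, battery capacity x{battery:.2f}")
print(f"Gap to be closed by efficiency gains elsewhere: ~x{data / battery:.1f}")
```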

Making matters worse, the means of delivering all that data is multiple radio channels (i.e. Carrier Aggregation), which in turn implies support for multiple radio systems, each with its own power-hungry RF chain. The alternative is to deliver more bits over each channel, which places strict linearity demands on the components and generally makes them less efficient, i.e. more power-hungry anyway. Besides, we are already pushing the limits of how many bits per second per hertz can be delivered over a given channel, and without major changes to component assumptions it is hard to see that being sustainable. Adding new channels with state-of-the-art technology makes more sense, and it is both scalable and sustainable, but it has a significant impact on power. That 5% per year battery capacity increase is starting to sound like a serious limitation.
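The bits-per-hertz ceiling is easy to see from the Shannon limit, C = B·log2(1 + SNR): squeezing extra spectral efficiency out of a fixed channel demands dramatically more SNR (and more linear, less efficient RF), while adding a carrier scales capacity linearly. A rough sketch, with the bandwidths and SNR figures purely illustrative:

```python
import math

# Shannon limit: C = B * log2(1 + SNR). Numbers below are illustrative only.

def capacity_mbps(bandwidth_mhz, snr_db):
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_mhz * math.log2(1 + snr_linear)

base     = capacity_mbps(20, 15)   # one 20 MHz channel at 15 dB SNR
more_snr = capacity_mbps(20, 30)   # same channel, ~30x the linear SNR
more_bw  = capacity_mbps(40, 15)   # add a second 20 MHz channel instead

print(f"20 MHz @ 15 dB : ~{base:.0f} Mbps")
print(f"20 MHz @ 30 dB : ~{more_snr:.0f} Mbps (~30x the power for ~2x the rate)")
print(f"40 MHz @ 15 dB : ~{more_bw:.0f} Mbps (second carrier, 2x at the same SNR)")
```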

Now, the phone and chip industry has a long history of dealing with similar challenges, and technology continues to improve despite frequent predictions of impending doom. The latest generation of processors uses 14 nm technology with a path to 10 nm, and with each chip technology node power efficiency improves. But a 10 nm feature is only about 50 silicon atoms across, and it doesn't take a mathematician to figure out where that path eventually ends. Similarly, advanced radio technologies often demand higher-performance analog RF chips and are often less efficient, at least until optimized over time.
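For what it's worth, the arithmetic behind that "50 silicon atoms" figure is a simple back-of-envelope check (using a rough ~0.2 nm atomic spacing, so treat it as order-of-magnitude only):

```python
# Back-of-envelope for the "~50 silicon atoms" figure above. Silicon's
# interatomic spacing is roughly 0.2-0.27 nm depending on crystal direction;
# 0.2 nm is used here, so treat the result as an order-of-magnitude check.

feature_nm = 10
atom_spacing_nm = 0.2

print(f"A {feature_nm} nm feature spans roughly "
      f"{feature_nm / atom_spacing_nm:.0f} silicon atoms")
```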

The overall effect is that the proposed multichannel, multi-carrier approach to boosting data speeds, coupled with the enhanced usage models that result (meaning people watching videos at traffic lights and annoying the likes of me), looks set to play merry hell with your battery life.

One saving grace may be the emergence of small cells. Dense cellular deployments put the cells closer to the user, so the phone does not need as much transmit power to reach them and average power consumption declines. There are many benefits to small cell deployments, but one wonders whether optimizing for average phone power consumption will ever be a consideration.
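The physics here is straightforward: the transmit power needed to close a link scales roughly with distance raised to the path-loss exponent (around 3-4 in urban environments), so bringing the cell closer pays off quickly. A rough sketch, with the distances and exponent as illustrative assumptions:

```python
# Rough illustration: required transmit power scales roughly as d**n, where n
# is the path-loss exponent (~2 in free space, ~3-4 in urban environments).
# The distances and exponent here are illustrative assumptions.

def relative_tx_power(distance_m, exponent=3.5):
    return distance_m ** exponent

macro = relative_tx_power(500)   # phone reaching a macro cell ~500 m away
small = relative_tx_power(100)   # phone reaching a small cell ~100 m away

print(f"Reaching the small cell takes ~{macro / small:.0f}x less uplink power")
```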

Wireless charging is another solution; the idea is to continuously recharge the phone when it is not in use. WiTricity in Boston is a pioneer in this field and has developed such a charging system; Qualcomm and others are developing a competing approach, and both camps are working to standardize their systems. Wireless charging will clearly help with long-term power management, but it does not help much during periods of heavy use, which are the whole point of all this carrier-aggregation-supported high-speed data and video.

In any event, the transition from 4G to 5G is unlikely to be without its problems, and battery power limitations are a hard reality that needs to be considered. I am sure the industry will rise to the challenge with new and innovative solutions, but I expect to hear the word 'optimization' (meaning: working with what you have) used a lot in the meantime. I hope they figure it all out.

With reference to the great Marty Cooper, we do not want the battery size to go back to the bad old days.