Mobile operators are counting on Long Term Evolution (LTE) technology to handle surging demand for mobile data access. But LTE developers made some poor choices, cutting spectral efficiency and thus driving up operator costs.
LTE was envisioned as an all-IP system, but the RF allocations follow the voice-centric approach of earlier generations. While the LTE standards allow for either Frequency Division Duplexing (FDD) or Time Division Duplexing (TDD), all initial LTE equipment uses FDD. FDD requires two separate blocks of spectrum, one for each direction. That makes perfect sense for bi-directional voice traffic. It makes no sense for data. With the exception of peer-to-peer file sharing (which most mobile operators block), data traffic is highly asymmetric. Sending data via FDD means one block of spectrum is fully utilized while the other, equal-sized block is dramatically underutilized. Result: the operator pays for almost twice the spectrum they actually use.
Verizon is deploying LTE in the 700 MHz C block, which means they are using 746 MHz to 756 MHz (a 10 MHz channel) for the downlink (to the mobile device) and wasting most of 777 MHz to 787 MHz (another 10 MHz channel) on the uplink. If Verizon could deploy TDD (as used by WiMAX and as defined for LTE but not implemented), they could fully utilize both 10 MHz blocks for data transfers, almost doubling their data capacity.
I don’t know the actual capacity Verizon will realize on average with their first generation LTE infrastructure. But suppose Peter Rysavy is correct (as implied by Gigaom) that Verizon will initially average 15 Mbps per 10 MHz channel. That’s 15/15 Mbps, symmetric, even though average traffic is likely to be 15/2 Mbps. No single user is likely to see 15 Mbps; rather that 15 Mbps is shared among all users in that sector. With TDD (the default for WiMAX and an unimplemented option for LTE), the Verizon spectrum could support two channels of perhaps 13/2 Mbps each in that same sector. Again, no single user will see 13 Mbps, but all the users in the cell will be sharing 30 Mbps of capacity that can be dynamically divided between up and down—most likely averaging 26/4 Mbps but able to allocate 15/15 or 28/2 as the traffic mix changes.
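To make the arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python using the illustrative figures above (15 Mbps per 10 MHz block and a 26/4 Mbps down/up traffic mix); these are assumptions from the discussion, not measured values.

```python
# Back-of-the-envelope comparison of FDD vs. TDD for one sector, using the
# illustrative figures from the article.  All numbers are assumptions.

FDD_DL_CAPACITY = 15.0   # Mbps on the 10 MHz downlink block
FDD_UL_CAPACITY = 15.0   # Mbps on the 10 MHz uplink block

DEMAND_DL = 26.0         # Mbps of downlink traffic offered in the sector
DEMAND_UL = 4.0          # Mbps of uplink traffic offered in the sector

# FDD: each direction is locked to its own block, so downlink demand beyond
# 15 Mbps goes unserved while the uplink block sits mostly idle.
fdd_carried = min(DEMAND_DL, FDD_DL_CAPACITY) + min(DEMAND_UL, FDD_UL_CAPACITY)
fdd_paid_for = FDD_DL_CAPACITY + FDD_UL_CAPACITY

# TDD: the same two 10 MHz blocks become two channels of roughly 15 Mbps each,
# whose airtime can be divided between directions as the traffic mix changes.
tdd_capacity = 2 * 15.0
tdd_carried = min(DEMAND_DL + DEMAND_UL, tdd_capacity)

print(f"FDD carries {fdd_carried:.0f} of {fdd_paid_for:.0f} Mbps of licensed capacity")
print(f"TDD carries {tdd_carried:.0f} of {tdd_capacity:.0f} Mbps in the same spectrum")
```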
Consider that the LTE implementers decided to use only IP in the rest of the LTE design, thereby dropping support for traditional voice and SMS services. That’s right, initial LTE deployments won’t support voice telephony or SMS messages, only data services, and yet the LTE spectrum assignments were made as if voice comes first.
That’s ironic.
Brough, thanks for discussing this issue regarding LTE.
Can you explain your last paragraph in light of the One Voice effort? I had thought that carriers were going to enable voice over LTE using IMS. There was just such an announcement out of Mobile World Congress: http://bit.ly/aBwRxz
The LTE specs were in the works for several years, culminating in final signoff in December 2008. At that time, there was still no plan for voice or SMS on LTE. In 2009, several alternative approaches got underway, and finally in February 2010 3GPP decided on One Voice. Now the One Voice specs need to be completed and, since One Voice is based on IMS, operators have to agree to deploy it. Remember, IMS has been around for years and yet has no significant deployments.
At best, voice and SMS on LTE was an afterthought, but one that can be fixed in a couple of years. At worst, voice and SMS deployment over LTE will take much longer.
Dear Mr. Turner,
I think it is not so evident that applying FDD instead of TDD is a stupid choice. There are several arguments in favor of FDD (points 1-3 according to the book BEYOND 3G, Martin Sauter, Wiley, 2009, page 77):
1. TDD requires base stations to be tightly synchronized with each other to prevent uplink transmissions of devices in one cell from interfering with downlink transmissions of neighboring cells. This is not the case for FDD.
2. With FDD, no transmission pause is necessary to give devices the time they need to switch from transmission to reception mode.
3. FDD allows more sensitive receivers in mobile devices because DL reception is less disturbed by the device's own UL transmissions, which benefits overall data rates.
4. If spectrum is shared between mobile radio and broadcasting (which is the case in my country, Germany), TDD requires a broader safety margin between the parts of the spectrum allocated to those applications (in Germany: 7 MHz for TDD, 1 MHz for FDD).
We should also take into consideration that the asymmetry between UL and DL bandwidth requirements is smaller than the asymmetry in data volumes, because UL transmissions are usually less efficient than DL transmissions due to the limited power and antenna constraints of small devices.
The crucial question is whether the asymmetry between UL and DL data requirements is strong enough that the advantage of TDD (that the system can be tuned to reflect the actual ratio of UL to DL traffic) outweighs the disadvantages mentioned above. In order to give a solid answer to this question we need more empirical data.
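A quick worked example of that point, with purely hypothetical numbers: if downlink traffic volume is 8x the uplink volume but uplink spectral efficiency is only half that of the downlink, the asymmetry in required bandwidth is milder than the asymmetry in data.

```python
# Hypothetical illustration: traffic asymmetry vs. required-bandwidth
# asymmetry when the uplink is less spectrally efficient.  All figures
# are made up for illustration.

dl_traffic_mbps = 16.0     # offered downlink traffic
ul_traffic_mbps = 2.0      # offered uplink traffic

dl_efficiency = 1.5        # bits/s per Hz achieved on the downlink (assumed)
ul_efficiency = 0.75       # bits/s per Hz achieved on the uplink (assumed)

dl_bw = dl_traffic_mbps / dl_efficiency   # MHz needed for downlink
ul_bw = ul_traffic_mbps / ul_efficiency   # MHz needed for uplink

print(f"Traffic ratio   DL:UL = {dl_traffic_mbps / ul_traffic_mbps:.1f} : 1")
print(f"Bandwidth ratio DL:UL = {dl_bw / ul_bw:.1f} : 1")
```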
Best regards,
Alexander Schertz
Alexander, Thanks for the comments.
I will concede that I used the “stupid” word to attract attention. :) On the other hand, I do believe that, had the 3GSM/LTE community had more understanding of data issues, they would have put more focus on TDD. (Note that the LTE definition includes both FDD and TDD; vendors have just chosen to focus on FDD).
Two comments. First, on synchronization: useful LTE deployments should have 100+ Mbps backhaul per cell site today and certainly will have such backhaul links when LTE is widely deployed, say five years from now. With 100+ Mbps links, what is the problem with jointly optimizing TDD frame structures across adjacent cells? Whether you do this at LTE’s 10 ms frame rate or the 1 ms sub-frame rate, it’s certainly practical. Also, you don’t require joint optimization across a huge pool of cell sites. You only require local optimization between adjacent cells, and only for the fraction of traffic that comes from mobile devices near cell boundaries.
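As a toy illustration of what that local optimization could look like, here is a hypothetical sketch: two neighboring cells report their DL/UL demand over the backhaul and agree on a common downlink/uplink subframe split for the 10-subframe LTE frame. The demand figures, the coordination function, and the rounding rule are all assumptions for illustration; this is not a 3GPP procedure.

```python
# Hypothetical sketch: adjacent cells agree on a common TDD downlink/uplink
# subframe split so their transmissions don't collide at the cell boundary.
# Numbers and logic are illustrative only.

SUBFRAMES_PER_FRAME = 10   # LTE frame = 10 x 1 ms subframes

def agree_on_split(cells):
    """Pick one DL/UL subframe split for all listed cells.

    `cells` is a list of (dl_demand_mbps, ul_demand_mbps) tuples
    reported over the backhaul links.
    """
    total_dl = sum(dl for dl, _ in cells)
    total_ul = sum(ul for _, ul in cells)
    dl_fraction = total_dl / (total_dl + total_ul)
    # Round to whole subframes, keeping at least one in each direction.
    dl_subframes = min(SUBFRAMES_PER_FRAME - 1,
                       max(1, round(dl_fraction * SUBFRAMES_PER_FRAME)))
    return dl_subframes, SUBFRAMES_PER_FRAME - dl_subframes

# Example: cell A is download-heavy, cell B slightly less so.
dl, ul = agree_on_split([(14.0, 2.0), (10.0, 3.0)])
print(f"Agreed split: {dl} DL subframes / {ul} UL subframes per frame")
```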
Second, it is the case that instantaneous data is highly asymmetric. See my article here: http://su.pr/1Vmge3 . The problem is most people measure traffic averages over long periods of time (hours or, for Internet Transit services, over five minute intervals). This is OK once you’ve aggregated hundreds of people (to smooth out individual statistics) but it doesn’t work at the sub-second intervals at which we schedule radio air time among the relatively small number of active users in a specific sector of a specific cell site. Unfortunately, sub-second traffic statistics are only understood by a limited group of experts, for example those designing router queuing mechanisms.
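To make that concrete, here is a small simulation sketch with made-up parameters: a handful of users generate short download bursts, and the worst 100 ms window is compared with the 5-minute average that a typical traffic report would show.

```python
import random

# Illustrative simulation: bursty per-user traffic looks mild when averaged
# over 5 minutes but spiky at the 100 ms scale a radio scheduler actually
# works with.  All parameters are made up for illustration.

random.seed(1)

WINDOW_MS = 100
WINDOWS = 5 * 60 * 1000 // WINDOW_MS      # 5 minutes of 100 ms windows
USERS = 8
BURST_PROB = 0.02                         # chance a user bursts in a given window
BURST_RATE_MBPS = 12.0                    # rate while bursting (e.g., a page load)

demand = []
for _ in range(WINDOWS):
    active = sum(1 for _ in range(USERS) if random.random() < BURST_PROB)
    demand.append(active * BURST_RATE_MBPS)

average = sum(demand) / len(demand)
peak = max(demand)
print(f"5-minute average demand: {average:.1f} Mbps")
print(f"Worst 100 ms window:     {peak:.1f} Mbps")
```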
The LTE standards bodies did specify both FDD and TDD. Unfortunately, as governments look for more spectrum and operators consider how to reuse what they already have, everyone is sticking to FDD assumptions, assumptions that poorly match the actual instantaneous data demand.